
Generative Adversarial Networks for Image Synthesis and Style Transfer in Videos


Affiliations
1 Department of Computer Science and Applications, Vivekanandha College of Arts and Sciences for Women, India
2 Department of Computer Science and Engineering, JNTUA College of Engineering, India
3 Department of Computer Science and Engineering, Dhanalakshmi Srinivasan College of Engineering, India
4 Department of Electronics and Communication Engineering, Government Engineering College, Jhalawar, India
5 Department of Chemical Engineering, College of Engineering, University of Bahrain, Bahrain
     



In computer vision and artistic expression, the synthesis of visually compelling images and the transfer of artistic styles onto videos have gained significant attention. This research addresses the challenges of achieving realistic image synthesis and style transfer in the dynamic context of video. Existing methods often struggle to maintain temporal coherence and fail to capture intricate details, prompting the need for innovative approaches. Conventional methods for image synthesis and style transfer in videos have difficulty preserving the natural flow of motion and consistency across frames. This research bridges that gap by leveraging Generative Adversarial Networks (GANs) to enhance the quality and temporal coherence of synthesized images in video sequences. While GANs have demonstrated success in image generation, their application to video synthesis and style transfer remains underexplored. We propose a novel methodology that optimizes GANs for video-specific challenges, aiming for realistic, high-quality, and temporally consistent results. Our approach develops a specialized GAN architecture tailored for video synthesis, incorporating temporally aware modules to ensure smooth transitions between frames. A style transfer mechanism is also integrated, enabling artistic styles to be transferred onto videos seamlessly. The model is trained on diverse datasets to enhance its generalization capability. Experimental results demonstrate the efficacy of the proposed methodology in generating lifelike images and seamlessly transferring styles across video frames. Comparative analyses show the superiority of our approach over existing methods, highlighting its ability to address the temporal challenges inherent in video synthesis and style transfer.
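The abstract gives no implementation details, but the temporal-coherence idea it describes can be sketched. Below is a minimal NumPy illustration of a frame-to-frame consistency penalty of the kind commonly added to video style-transfer objectives; the function names, the plain (unwarped) frame difference, and the weighting coefficient are our illustrative assumptions, not the authors' actual formulation (production systems typically warp the previous frame toward the current one with optical flow before comparing, so that genuine motion is not penalized).

```python
import numpy as np

def temporal_coherence_loss(stylized_frames):
    """Mean squared difference between consecutive stylized frames.

    Penalizing frame-to-frame change discourages the flicker that
    per-frame style transfer produces. NOTE: this is a simplification;
    full systems warp frame t-1 toward frame t via optical flow before
    taking the difference.
    """
    frames = np.asarray(stylized_frames, dtype=np.float64)
    diffs = frames[1:] - frames[:-1]   # differences between consecutive frames
    return float(np.mean(diffs ** 2))

def total_loss(adversarial_loss, stylized_frames, weight=10.0):
    """Combine an adversarial term with the temporal penalty.

    `weight` is a hypothetical balancing coefficient, not a value
    reported by the paper.
    """
    return adversarial_loss + weight * temporal_coherence_loss(stylized_frames)
```

Identical consecutive frames yield a zero penalty, while flickering frames are penalized in proportion to their mean squared change.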

Keywords

Generative Adversarial Networks, Image Synthesis, Style Transfer, Video Processing, Temporal Coherence
References

  • M.Y. Liu and A. Mallya, “Generative Adversarial Networks for Image and Video Synthesis: Algorithms and Applications”, Proceedings of the IEEE, Vol. 109, No. 5, pp. 839-862, 2021.
  • L. Wang, F. Bi and F.R. Yu, “A State-of-the-Art Review on Image Synthesis with Generative Adversarial Networks”, IEEE Access, Vol. 8, pp. 63514-63537, 2020.
  • K. Asha and M. Rizvana, “Human Vision System's Region of Interest Based Video Coding”, Compusoft, Vol. 2, No. 5, pp. 127-134, 2013.
  • M. Mohseni and S.J. Priya, “The Role of Parallel Computing Towards Implementation of Enhanced and Effective Industrial Internet of Things (IoT) Through MANOVA Approach”, Proceedings of International Conference on Advance Computing and Innovative Technologies in Engineering, pp. 160-164, 2022.
  • M. Madiajagan and B. Pattanaik, “IoT-based Blockchain Intrusion Detection using Optimized Recurrent Neural Network”, Multimedia Tools and Applications, Vol. 89, pp. 1-22, 2023.
  • T. Karras, S. Laine and T. Aila, “A Style-Based Generator Architecture for Generative Adversarial Networks”, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4401-4410, 2019.
  • R. Li, “Image Style Transfer with Generative Adversarial Networks”, Proceedings of ACM International Conference on Multimedia, pp. 2950-2954, 2021.
  • K. Sakthisudhan, B. Murugesan and P.N.S. Sailaja, “Textile EF Shaped Antenna based on Reinforced Epoxy for Breast Cancer Detection by Composite Materials”, Materials Today: Proceedings, Vol. 45, pp. 6142-6148, 2021.
  • N. Kousik, S. Kallam, R. Patan and A.H. Gandomi, “Improved Salient Object Detection using Hybrid Convolution Recurrent Neural Network”, Expert Systems with Applications, Vol. 166, pp. 114064-114073, 2021.
  • N.V. Kousik, P. Johri and M.J. Divan, “Analysis on the Prediction of Central Line-Associated Bloodstream Infections (CLABSI) using Deep Neural Network Classification”, Academic Press, 2020.
  • A. Solanki, A. Nayyar and M. Naved, “Generative Adversarial Networks for Image-to-Image Translation”, Academic Press, 2021.
  • F. Zhang and C. Wang, “MSGAN: Generative Adversarial Networks for Image Seasonal Style Transfer”, IEEE Access, Vol. 8, pp. 104830-104840, 2020.
  • A. Shukla, R. Manikandan and M. Ramkumar, “Improved Recognition Rate of Different Material Category using Convolutional Neural Networks”, Materials Today: Proceedings, pp. 1-6, 2021.
  • P. Shamsolmoali, R. Wang and J. Yang, “Image Synthesis with Adversarial Networks: A Comprehensive Survey and Case Studies”, Information Fusion, Vol. 72, pp. 126-146, 2022.
  • N. Aldausari, N. Marcus and G. Mohammadi, “Video Generative Adversarial Networks: A Review”, ACM Computing Surveys, Vol. 55, No. 2, pp. 1-25, 2022.
  • R. Shesayar, A. Agarwal and S. Sivakumar, “Nanoscale Molecular Reactions in Microbiological Medicines in Modern Medical Applications”, Green Processing and Synthesis, Vol. 12, No. 1, pp. 1-13, 2023.
  • G. Kiruthiga, “Improved Object Detection in Video Surveillance using Deep Convolutional Neural Network Learning”, International Journal for Modern Trends in Science and Technology, Vol. 7, No. 11, pp. 104-108, 2021.
  • K. Praghash, S. Chidambaram and D. Shreecharan, “Hyperspectral Image Classification using Denoised Stacked Auto Encoder-Based Restricted Boltzmann Machine Classifier”, Proceedings of International Conference on Hybrid Intelligent Systems, pp. 213-221, 2022.
  • R. Li, G. Liu and B. Zeng, “SDP-GAN: Saliency Detail Preservation Generative Adversarial Networks for High Perceptual Quality Style Transfer”, IEEE Transactions on Image Processing, Vol. 30, pp. 374-385, 2020.
  • G. Lv, S.M. Israr and S. Qi, “Multi-Style Unsupervised Image Synthesis using Generative Adversarial Nets”, IEEE Access, Vol. 9, pp. 86025-86036, 2021.


Authors

K. Ramesh
Department of Computer Science and Applications, Vivekanandha College of Arts and Sciences for Women, India
B. Muni Lavanya
Department of Computer Science and Engineering, JNTUA College of Engineering, India
B. Rajesh Kumar
Department of Computer Science and Engineering, Dhanalakshmi Srinivasan College of Engineering, India
Narayan Krishan Vyas
Department of Electronics and Communication Engineering, Government Engineering College, Jhalawar, India
Mohammed Saleh Al Ansari
Department of Chemical Engineering, College of Engineering, University of Bahrain, Bahrain

