
A Fast-Dehazing Technique Using Generative Adversarial Network Model for Illumination Adjustment in Hazy Videos


Authors

T M Praneeth Naidu & P Chandra Sekhar

Affiliation

Department of ECE, University College of Engineering, Osmania University, Hyderabad 500 007, India

Abstract

Haze significantly degrades the quality of captured photos and videos. Besides reducing the reliability of monitoring equipment, this can be outright dangerous. Problems caused by hazy conditions have grown in recent years, necessitating real-time dehazing techniques. Intelligent vision systems, such as surveillance and monitoring systems, depend fundamentally on the quality of their input images, which strongly affects object-detection accuracy. This paper presents a fast video dehazing technique based on a Generative Adversarial Network (GAN) model. The haze in the input video is estimated from the scene depth, which is extracted with a pre-trained monocular-depth ResNet model. Based on the amount of haze, an appropriate model trained for those specific haze conditions is selected. The novelty of the proposed work is that the generator is kept simple to produce results fast enough for real-time use, while the discriminator is kept complex to make the generator more effective. The traditional loss function is replaced with a Visual Geometry Group (VGG) feature loss for better dehazing. The proposed model produced better results than existing models: the Peak Signal-to-Noise Ratio (PSNR) obtained for most frames is above 32 dB, and the execution time is under 60 milliseconds, which makes the proposed model well suited for video dehazing.
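For illustration, the loss change described above can be sketched in code. The following is a minimal, hedged example of how a VGG feature (perceptual) loss is commonly implemented, assuming a PyTorch/torchvision setup; the VGG-16 backbone, the relu3_3 cutoff, and the MSE feature distance are illustrative assumptions, not the authors' published configuration.

```python
import torch.nn as nn
from torchvision import models


class VGGFeatureLoss(nn.Module):
    """Perceptual loss on frozen VGG-16 features (illustrative sketch).

    Assumes torchvision >= 0.13 for the `weights=` API; inputs are expected
    to be ImageNet-normalized RGB tensors of shape (N, 3, H, W).
    """

    def __init__(self, cutoff: int = 16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        # Layers [0:16] run through relu3_3; this cutoff is an assumption,
        # not necessarily the layer the paper used.
        self.extractor = nn.Sequential(*list(vgg.children())[:cutoff]).eval()
        for p in self.extractor.parameters():
            p.requires_grad = False  # VGG stays a fixed feature extractor
        self.criterion = nn.MSELoss()

    def forward(self, dehazed, ground_truth):
        # Compare deep features rather than raw pixels; this tends to
        # preserve edges and texture that a plain L1/L2 pixel loss blurs.
        return self.criterion(self.extractor(dehazed),
                              self.extractor(ground_truth))


# Hypothetical usage inside a generator training step (names illustrative):
#   vgg_loss = VGGFeatureLoss()
#   g_loss = adv_loss + lambda_vgg * vgg_loss(generator(hazy), clear)
```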

Keywords

Depth Estimation, Discriminator Model, Generative Adversarial Networks, Generator Model, ResNet.
References

  • Ullah H & Mehmood I, Real-time video dehazing for industrial image processing, 13th Int Conf Softw Knowl Inform Manag Appl (IEEE) 2019, 19–24.
  • Wang Z, Video Dehazing Based on Convolutional Neural Network Driven Using Our Collected Dataset, J Phys Conf Ser, 1544(1) (2020) 012156, doi: 10.1088/1742-6596/1544/1/012156
  • Crebolder J M & Sloan R B, Determining the effects of eyewear fogging on visual task performance, Appl Ergon, 35(4) (2004) 371–381, doi: 10.1016/j.apergo.2004.02.005
  • Soma P & Jatoth R K, An efficient and contrast-enhanced video de-hazing based on transmission estimation using HSL color model, Vis Comput, 38(7) (2021) 2569–2580, doi: 10.1007/s00371-021-02132-3
  • Singh D, Garg D & Pannu H S, Efficient Landsat image fusion using fuzzy and stationary discrete wavelet transform, Imaging Sci J, 65(2) (2017) 108–114, doi: 10.1080/13682199.2017.1289629
  • Sakaridis C, Dai D & Van Gool L, Semantic foggy scene understanding with synthetic data, Int J Comput Vis, 126(9) (2018) 973–992.
  • Singh D, A Comprehensive Review of Computational Dehazing Techniques, Arch Comput Methods Eng, 26 (2019) 1395–1413.
  • Li B, Peng X, Wang Z, Xu J & Feng D, AOD-Net: All-in-one dehazing network, in Proc IEEE Int Conf Comput Vis (IEEE) 2017, 4770–4778, doi: 10.1109/ICCV.2017.511
  • He K, Sun J & Tang X, Single image haze removal using dark channel prior, IEEE Trans Pattern Anal Mach Intell, 33(12) (2010) 2341–2353.
  • Nishino K, Kratz L & Lombardi S, Bayesian defogging, Int J Comput Vis, 98(3) (2012) 263–278, doi: 10.1007/s11263-011-0508-1
  • Sharma N, Kumar V & Singla S K, Single Image Defogging using Deep Learning Techniques: Past, Present and Future, Arch Comput Methods Eng, 28(7) (2021) 4449–4469.
  • Peng S J, Zhang H, Liu X, Fan W, Zhong B & Du J X, Real-time video dehazing via incremental transmission learning and spatial-temporally coherent regularization, Neurocomputing, 458 (2021) 602–614, doi: 10.1016/j.neucom.2020.02.134
  • Ren W, Zhang J, Xu X, Ma L, Cao X, Meng G & Liu W, Deep Video Dehazing with Semantic Segmentation, IEEE Trans Image Process, 28(4) (2019) 1895–1908, doi: 10.1109/TIP.2018.2876178
  • Li R, Progressive deep video dehazing without explicit alignment estimation, arXiv preprint, (2021).
  • Zhong L, Hao Z, Yuanyuan S, Zhuhong S & Hui D, Fast video dehazing using per-pixel minimum adjustment, Math Probl Eng, (2018) 1–8.
  • Li B, Peng X, Wang Z, Xu J & Feng D, End-to-end united video dehazing and detection, in Proc AAAI Conf Artif Intell, 32(1) (2018).
  • Zhang J, Li L, Zhang Y, Yang G & Cao X, Video dehazing with spatial and temporal coherence, Vis Comput, 27(6) (2011) 749–757, doi: 10.1007/s00371-011-0569-8
  • Cai B, Xu X, Jia K, Qing C & Tao D, DehazeNet: An end-to-end system for single image haze removal, IEEE Trans Image Process, 25(11) (2016) 5187–5198.
  • Xie L, Wang H, Wang Z & Cheng L, DHD-Net: A Novel Deep-Learning-based Dehazing Network, in Int Joint Conf Neural Networks (IEEE) 2020, 1–7.
  • Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A & Bengio Y, Generative adversarial networks, Commun ACM, 63(11) (2020) 139–144.
  • Feng Y, A Survey on Video Dehazing Using Deep Learning, J Phys Conf Ser, 1487(1) (2020) 012018, doi: 10.1088/1742-6596/1487/1/012018
  • Gonog L & Zhou Y, A review: generative adversarial networks, 14th IEEE Conf Ind Electron Appl (IEEE) 2019, 505–510.
  • Mathieu M, Couprie C & LeCun Y, Deep multi-scale video prediction beyond mean square error, arXiv preprint, (2015).
  • Xiong W, Luo W, Ma L, Liu W & Luo J, Learning to generate time-lapse videos using multi-stage dynamic generative adversarial networks, in Proc IEEE Conf Comput Vis Pattern Recognit (IEEE) 2018, 2364–2373.
  • Gui J, Sun Z, Wen Y, Tao D & Ye J, A review on generative adversarial networks: Algorithms, theory and applications, IEEE Trans Knowl Data Eng, 23 November 2021, doi: 10.1109/TKDE.2021.3130191
  • Aldausari N, Sowmya A, Marcus N & Mohammadi G, Video Generative Adversarial Networks: A Review, ACM Comput Surv, 55(2) (2022) 1–25.
  • Godard C, Mac Aodha O, Firman M & Brostow G J, Digging into self-supervised monocular depth estimation, in Proc IEEE/CVF Int Conf Comput Vis (IEEE) 2019, 3828–3838.
  • Isola P, Zhu J-Y, Zhou T & Efros A A, Image-to-image translation with conditional adversarial networks, in Proc IEEE Conf Comput Vis Pattern Recognit (IEEE) 2017, 1125–1134.
  • Chen J, Wu C, Hu C & Peng C, Unsupervised dark-channel attention-guided CycleGAN for single-image dehazing, Sensors, 20(21) (2020) 6000.
  • Wu M, Jin X, Jiang Q, Lee S-J, Li W, Guo L & Yao S, Remote sensing image colorization using symmetrical multi-scale DCGAN in YUV color space, Vis Comput, 37(7) (2021) 1707–1729.
  • Dong T, Zhao G, Wu J, Ye Y & Shen Y, Efficient traffic video dehazing using adaptive dark channel prior and spatial–temporal correlations, Sensors, 19(7) (2019) 1593.



