Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85587

Full metadata record
DC field: value [language]
dc.contributor.advisor: 貝蘇章 (Soo-Chang Pei)
dc.contributor.author: Yu-Wei Chen [en]
dc.contributor.author: 陳昱瑋 [zh_TW]
dc.date.accessioned: 2023-03-19T23:19:09Z
dc.date.copyright: 2022-07-28
dc.date.issued: 2022
dc.date.submitted: 2022-07-03
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85587
dc.description.abstract: A real-world image captured under adverse weather or poor lighting usually contains multiple degradations at once. For example, low-light images are typically accompanied by noise; underwater images by color cast and haze; and rainy night images by rain streaks, raindrops, low light, halo effects, and more. However, most existing works tackle only one degradation at a time. As learning-based methods have shown rapid progress in computer vision, some prior works have turned to multiple-degradation image enhancement. Two factors are crucial when developing a learning-based method. The first is training data: because paired real-world degraded and clean images are hard to obtain for image enhancement, many existing algorithms are trained on synthetic data, which introduces a domain gap between synthetic and real-world images. The second is model design: many design paradigms have been proposed, such as CNNs, GANs, Transformers, and image-to-image translation. In this thesis, we analyze multiple-degradation image enhancement through case studies and provide a solution for each case based on a different model design paradigm, covering both a carefully designed CNN model and multi-domain image-to-image translation. We also study the domain gap between synthetic and real-world images and address it with domain adaptation. Finally, we discuss the applicability and practicality of the different model design paradigms through object detection and instance segmentation on single nighttime rainy images. [en]
dc.description.provenance: Made available in DSpace on 2023-03-19T23:19:09Z (GMT). No. of bitstreams: 1. U0001-2206202213544600.pdf: 57071803 bytes, checksum: 76d6ced772ebd1710dbc2256a24949c1 (MD5). Previous issue date: 2022 [en]
dc.description.tableofcontents:
Verification Letter from the Oral Examination Committee
Acknowledgements
摘要 (Abstract in Chinese)
Abstract
Contents
List of Figures
List of Tables
Chapter 1 Introduction
Chapter 2 Related Work
  2.1 Multiple Degradation Image Enhancement
  2.2 Image-to-Image Translation
  2.3 Nighttime Object Detection
Chapter 3 Methodology Study: Carefully Design CNN Model
  3.1 Low-light Enhancement with Denoising
    3.1.1 Related Work
      3.1.1.1 Low-light Enhancement
      3.1.1.2 Image Denoising
      3.1.1.3 Back Projection
    3.1.2 Proposed Method
      3.1.2.1 Image Preprocessing
      3.1.2.2 Feature Selection
      3.1.2.3 Denoising Lightening Back Projection Block
      3.1.2.4 Saturation Adjustment Module
      3.1.2.5 Loss Function
      3.1.2.6 Remove Unsuitable Ground Truth
      3.1.2.7 Low-light Image Synthesis
    3.1.3 Experiment
      3.1.3.1 Implementation Detail
      3.1.3.2 Result on Synthesis Dataset
      3.1.3.3 Result on Real Dataset
      3.1.3.4 Ablation Study
  3.2 Nighttime Rainy Image Deraining
    3.2.1 Related Work
      3.2.1.1 Image Deraining
      3.2.1.2 Nighttime Rainy Image Enhancement
    3.2.2 Proposed Method
      3.2.2.1 Problem Formulation
      3.2.2.2 Synthesis Dataset
      3.2.2.3 Image Enhancement Model
    3.2.3 Experiment
      3.2.3.1 Quality Evaluation on Synthesis Dataset
      3.2.3.2 Quantity Evaluation
      3.2.3.3 Remaining Problem
Chapter 4 Methodology Study: Multi-domain Image-to-Image Translation
  4.1 Nighttime Rainy Image Deraining
    4.1.1 Related Work
      4.1.1.1 Multi-domain Image-to-Image Translation
      4.1.1.2 Zero-shot Learning
      4.1.1.3 Multi-tasking Image Enhancement
    4.1.2 Proposed Method
      4.1.2.1 Training Strategy
      4.1.2.2 Model Architecture
      4.1.2.3 Training Data
      4.1.2.4 Training Loss
    4.1.3 Experiment
    4.1.4 Implementation Detail
      4.1.4.1 Quality Evaluation
      4.1.4.2 Quantity Evaluation
      4.1.4.3 Synthesis Multiple Degradation Image
    4.1.5 Discussion and Future Improvement
Chapter 5 Domain Adaptation for Image Enhancement
  5.1 Domain Adaptation for Underwater Image Enhancement
    5.1.1 Related Work
      5.1.1.1 Underwater Image Enhancement
      5.1.1.2 Domain Adaptation for Image Enhancement
      5.1.1.3 Style Separation
    5.1.2 Proposed Method
      5.1.2.1 Domain Adaptation and Enhancement
      5.1.2.2 Architecture
      5.1.2.3 Training Losses
    5.1.3 Experiment
      5.1.3.1 Implementation Details
      5.1.3.2 Quality Comparison
      5.1.3.3 Quantity Comparison
      5.1.3.4 Cross Domain Image-to-Image Translation
      5.1.3.5 Latent Analysis
      5.1.3.6 Model Complexity Analysis
      5.1.3.7 Ablation Study
      5.1.3.8 Generalization Study on Different Applications
Chapter 6 Discussion
  6.1 Comparison and Analysis of Different Model Designing Methodologies
    6.1.1 Intuition Behind Model Designing
    6.1.2 Efficiency of Model Designing
    6.1.3 Model Effectiveness
    6.1.4 Generalization Capability and Practicality
  6.2 Application on Rainy Night Object Detection and Segmentation
Chapter 7 Conclusion
References
dc.language.iso: en
dc.subject: Nighttime Rainy Image Enhancement [en]
dc.subject: Multiple Degradation Image Enhancement [en]
dc.subject: Deep Learning [en]
dc.subject: Domain Adaptation [en]
dc.subject: Underwater Image Enhancement [en]
dc.subject: Low-light Enhancement [en]
dc.title: Multiple Degradation Image Enhancement, Domain Adaptation, Object Detection and Beyond [en]
dc.type: Thesis
dc.date.schoolyear: 110-2
dc.description.degree: Master (碩士)
dc.contributor.author-orcid: 0000-0001-9127-6536
dc.contributor.oralexamcommittee: 杭學鳴 (Hsueh-Ming Hang), 丁建均 (Jian-Jiun Ding), 鍾國亮 (Kuo-Liang Chung), 曾建誠 (Chien-Cheng Tseng)
dc.subject.keyword: Multiple Degradation Image Enhancement, Deep Learning, Domain Adaptation, Underwater Image Enhancement, Low-light Enhancement, Nighttime Rainy Image Enhancement [en]
dc.relation.page: 133
dc.identifier.doi: 10.6342/NTU202201061
dc.rights.note: Permission granted (open access worldwide)
dc.date.accepted: 2022-07-04
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 電信工程學研究所 (Graduate Institute of Communication Engineering) [zh_TW]
dc.date.embargo-lift: 2022-07-28
Appears in collections: 電信工程學研究所 (Graduate Institute of Communication Engineering)

Files in this item:
File: U0001-2206202213544600.pdf, Size: 55.73 MB, Format: Adobe PDF