NTU Theses and Dissertations Repository › College of Electrical Engineering and Computer Science › Graduate Institute of Communication Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85253
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 林宗男 (Tsung-Nan Lin) |
dc.contributor.author | Chia-Ming Chang | en
dc.contributor.author | 張家銘 | zh_TW
dc.date.accessioned | 2023-03-19T22:53:10Z |
dc.date.copyright | 2022-08-02 |
dc.date.issued | 2022 |
dc.date.submitted | 2022-07-29 |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85253 |
dc.description.abstract | In recent years, deep learning methods have achieved considerable success in single image dehazing. However, these methods often suffer performance degradation when faced with domain shift. In particular, haze density differs across existing datasets, so performance often drops when these methods are tested across datasets. To address this problem, we propose a density-aware data augmentation algorithm (DAMix). DAMix generates samples that attempt to minimize the Wasserstein distance to the target domain; each sample is generated by combining a hazy image with its corresponding haze-free image and the atmospheric light. In this way, DAMix-ed samples not only narrow the gap between domains but are also proven to comply with the atmospheric scattering model. For these reasons, DAMix helps existing dehazing methods improve comprehensively, both quantitatively and in visual quality; in particular, it mitigates the color shift and over-enhancement these methods exhibit on hazy images outside the training distribution. Our experiments further show that DAMix improves data efficiency from several perspectives. Specifically, in one experimental setting, a dehazing model trained on half of the source dataset with DAMix achieves better adaptivity than a model trained on the whole source dataset without DAMix. Moreover, owing to its low computational cost, DAMix can easily be added to existing dehazing methods for better performance under domain shift. | zh_TW
dc.description.abstract | Deep learning-based methods have achieved considerable success on single image dehazing in recent years. However, these methods are often subject to performance degradation when domain shifts are confronted. Specifically, haze density gaps exist among the existing datasets, often resulting in poor performance when these methods are tested across datasets. To address this issue, we propose a density-aware mixup augmentation (DAMix). DAMix generates samples in an attempt to minimize the Wasserstein distance with the hazy images in the target domain. These samples are generated by combining a hazy image with its corresponding ground truth and the atmospheric light by two density-aware matrices. In this manner, these DAMix-ed samples not only mitigate domain gaps but are also proven to comply with the atmospheric scattering model. Thus, DAMix achieves comprehensive improvements on domain adaptation quantitatively and qualitatively. It helps the state-of-the-art networks mitigate the color shift and over-enhancement when dealing with hazy images out of the training distribution. Furthermore, we show that DAMix is helpful with respect to data efficiency under different perspectives. Specifically, a network trained with half of the source dataset using DAMix can achieve even better adaptivity than that trained with the whole source dataset but without DAMix. Thanks to the low computational overhead of DAMix, it can be easily plugged into any codebase to achieve better performance when confronting domain shifts. | en
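The abstract's idea of re-synthesizing training samples from a hazy image, its ground truth, and the atmospheric light can be sketched with the atmospheric scattering model I = J·t + A·(1 − t). The snippet below is a minimal, hypothetical illustration, not the thesis's exact DAMix formulation: the function name `mix_haze_density` and the scalar density exponent `m` are assumptions (the thesis uses two density-aware matrices), but the construction shows how a hazy/clean pair and the airlight suffice to synthesize samples of intermediate haze density that still satisfy the scattering model.

```python
import numpy as np

def mix_haze_density(hazy, clean, A, m, eps=1e-6):
    """Synthesize a sample with rescaled haze density (hypothetical sketch).

    Based on the atmospheric scattering model I = J*t + A*(1 - t):
    m = 1 reproduces the hazy input, m = 0 recovers the clean image,
    and 0 < m < 1 yields an intermediate haze density.
    """
    # Recover per-pixel transmission t from the model, t = (I - A) / (J - A);
    # clip to (eps, 1] for numerical stability where clean pixels approach A.
    t = np.clip((hazy - A) / (clean - A + eps), eps, 1.0)
    # Rescale haze density with the (assumed) exponent m: t' = t**m.
    t_m = t ** m
    # Re-synthesize via the same scattering model, so the output still complies with it.
    return clean * t_m + A * (1.0 - t_m)
```

By construction, any output of this sketch is itself a valid scattering-model image with transmission t**m, which mirrors the abstract's claim that DAMix-ed samples comply with the atmospheric scattering model.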
dc.description.provenance | Made available in DSpace on 2023-03-19T22:53:10Z (GMT). No. of bitstreams: 1. U0001-2107202214265400.pdf: 45780867 bytes, checksum: 3d07501e986f45bc9ded3a5d57888c99 (MD5). Previous issue date: 2022 | en
dc.description.tableofcontents:
Abstract (Chinese) ... i
Abstract ... iii
Contents ... v
List of Figures ... vii
List of Tables ... ix
Chapter 1 Introduction ... 1
Chapter 2 Related Work ... 7
  2.1 Prior-based Methods ... 7
  2.2 Deep Learning-based Methods ... 8
Chapter 3 Density-Aware Mixup (DAMix) ... 11
  3.1 Notations ... 11
  3.2 Regarding the Estimation of Haze Density ... 12
  3.3 DAMix Formulation ... 14
  3.4 Haze Density Alignment ... 16
  3.5 Acquisition of Haze Density Target ... 17
  3.6 Mitigating Domain Shifts ... 19
    3.6.1 DA ... 19
    3.6.2 DG ... 19
Chapter 4 Adaptive Dehazing Network (ADN) ... 23
  4.1 Information Flow ... 23
  4.2 Network Architecture ... 25
  4.3 Loss Function for the Primary Branch ... 25
  4.4 Loss Function for the Enhanced Branch ... 26
  4.5 Loss Function for the Weight Generator ... 28
Chapter 5 Experiments ... 29
  5.1 Experimental Settings ... 29
    5.1.1 Datasets and Evaluation Metrics ... 29
    5.1.2 Architecture ... 30
  5.2 DAMix Helps with DA ... 30
    5.2.1 Quantitative Comparisons ... 30
    5.2.2 Qualitative Comparisons ... 32
  5.3 DAMix Helps Data Efficiency ... 32
    5.3.1 Without Domain Shifts ... 34
    5.3.2 DA ... 34
    5.3.3 DG ... 35
  5.4 Adaptation to Real Images ... 37
  5.5 Ablation Study ... 38
Chapter 6 Conclusion ... 39
References ... 41
dc.language.iso | en |
dc.subject | domain shift | zh_TW
dc.subject | image dehazing | zh_TW
dc.subject | data augmentation | zh_TW
dc.subject | single image dehazing | en
dc.subject | data augmentation | en
dc.subject | domain shift | en
dc.title | A Haze-Density-Aware Data Augmentation Algorithm for Domain Shift in Image Dehazing | zh_TW
dc.title | DAMix: A Density-Aware Mixup Augmentation for Single Image Dehazing under Domain Shift | en
dc.type | Thesis |
dc.date.schoolyear | 110-2 |
dc.description.degree | Master |
dc.contributor.author-orcid | 0000-0003-2595-3759 |
dc.contributor.oralexamcommittee | 鄧惟中 (Wei-Chung Teng), 陳俊良 (Chun-Liang Chen), 蔡子傑 (Tzu-Chieh Tsai) |
dc.subject.keyword | image dehazing, data augmentation, domain shift | zh_TW
dc.subject.keyword | single image dehazing, data augmentation, domain shift | en
dc.relation.page | 47 |
dc.identifier.doi | 10.6342/NTU202201606 |
dc.rights.note | Authorized (restricted to campus access) |
dc.date.accepted | 2022-08-01 |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW
dc.contributor.author-dept | Graduate Institute of Communication Engineering | zh_TW
dc.date.embargo-lift | 2022-08-02 |
Appears in Collections: Graduate Institute of Communication Engineering

Files in This Item:
File | Size | Format
U0001-2107202214265400.pdf | 44.71 MB | Adobe PDF
Access restricted to NTU campus IP addresses (use the library's VPN service for off-campus access).


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
