Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51764
Full metadata record
DC Field / Value / Language
dc.contributor.advisor李明穗(Ming-Sui Lee)
dc.contributor.authorFu-Ning Yangen
dc.contributor.author楊馥寧zh_TW
dc.date.accessioned2021-06-15T13:48:33Z-
dc.date.available2020-08-24
dc.date.copyright2020-08-24
dc.date.issued2020
dc.date.submitted2020-08-16
dc.identifier.citationF. Cozman and E. Krotkov, “Depth from scattering,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1997.
E. J. McCartney, “Optics of the Atmosphere: Scattering by Molecules and Particles,” New York: John Wiley and Sons, Inc., 1976.
S. G. Narasimhan and S. K. Nayar, “Chromatic Framework for Vision in Bad Weather,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 598–605, 2000.
S. G. Narasimhan and S. K. Nayar, “Vision and the Atmosphere,” in International Journal of Computer Vision (IJCV), vol. 48, no. 3, pp. 233–254, 2002.
I. Omer and M. Werman, “Color Lines: Image Specific Color Representation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004.
A. Levin, D. Lischinski, and Y. Weiss, “A Closed Form Solution to Natural Image Matting,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2006.
S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” in IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 25, no. 6, pp. 713–724, 2003.
Z. Li, P. Tan, R. T. Tan, D. Zou, S. Z. Zhou, and L.-F. Cheong, “Simultaneous Video Defogging and Stereo Reconstruction,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
S. K. Nayar and S. G. Narasimhan, “Vision in bad weather,” in IEEE International Conference on Computer Vision (ICCV), vol. 2, 1999.
Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Instant Dehazing of Images Using Polarization,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2001.
S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind Haze Separation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2006.
J. P. Oakley and B. L. Satherley, “Improving image quality in poor visibility conditions using a physical model for contrast degradation,” in IEEE Transactions on Image Processing (TIP), vol. 7, no. 2, pp. 167–179, 1998.
N. Hautiere, J.-P. Tarel, and D. Aubert, “Towards Fog-Free In-Vehicle Vision Systems through Contrast Restoration,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007.
J. Kopf, B. Neubert, B. Chen, M. F. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep Photo: Model-Based Photograph Enhancement and Viewing,” ACM Transactions on Graphics (TOG), vol. 27, 2008.
S. G. Narasimhan and S. K. Nayar, “Interactive (De)Weathering of an Image using Physical Models,” in IEEE Workshop on Color and Photometric Methods in Computer Vision, in Conjunction with ICCV, 2003.
K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” in IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 39, no. 6, pp. 1137–1149, 2017.
E. Shelhamer, J. Long, and T. Darrell, “Fully Convolutional Networks for Semantic Segmentation,” in IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 39, no. 4, pp. 640–651, 2017.
Y. Qu, Y. Chen, J. Huang, and Y. Xie, “Enhanced Pix2pix Dehazing Network,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8160–8168, 2019.
X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely Connected Convolutional Networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
K. He, J. Sun, and X. Tang, “Single Image Haze Removal Using Dark Channel Prior,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
J.-P. Tarel and N. Hautiere, “Fast Visibility Restoration from a Single Color or Gray Level Image,” in IEEE International Conference on Computer Vision (ICCV), 2009.
G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, “Efficient Image Dehazing with Boundary Constraint and Contextual Regularization,” in IEEE International Conference on Computer Vision (ICCV), 2013.
R. Fattal, “Dehazing Using Color-Lines,” in ACM Transactions on Graphics (TOG), vol. 34, no. 1, 2014.
Q. Zhu, J. Mai, and L. Shao, “A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior,” in IEEE Transactions on Image Processing (TIP), vol. 24, no. 11, pp. 3522–3533, 2015.
D. Berman, T. Treibitz, and S. Avidan, “Non-Local Image Dehazing,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “Dehazenet: An End-to-End System for Single Image Haze Removal,” in IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5187–5198, 2016.
W. Ren, S. Liu, H. Zhang, X. Cao, J. Pan, and M.-H. Yang, “Single Image Dehazing via Multi-Scale Convolutional Neural Networks,” in European Conference on Computer Vision (ECCV), 2016.
B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “AOD-Net: All-In-One Dehazing Network,” in IEEE International Conference on Computer Vision (ICCV), 2017.
Y. Cho, J. Jeong, and A. Kim, “Model-Assisted Multiband Fusion for Single Image Enhancement and Applications to Robot Vision,” in IEEE Robotics and Automation Letters (RA-L), vol. 3, no. 4, pp. 2822–2829, 2018.
W. Ren et al., “Gated Fusion Network for Single Image Dehazing,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
X. Liu, Y. Ma, Z. Shi, and J. Chen, “GridDehazeNet: Attention-Based Multi-Scale Network for Image Dehazing,” in IEEE International Conference on Computer Vision (ICCV), 2019.
X. Qin, Z. Wang, Y. Bai, X. Xie, and H. Jia, “FFA-Net: Feature Fusion Attention Network for Single Image Dehazing,” in Association for the Advancement of Artificial Intelligence (AAAI), 2020.
C. O. Ancuti and C. Ancuti, “Single Image Dehazing by Multi-Scale Fusion,” in IEEE Transactions on Image Processing (TIP), vol. 22, no. 8, pp. 3271–3282, 2013.
L. K. Choi, J. You, and A. C. Bovik, “Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging,” in IEEE Transactions on Image Processing (TIP), vol. 24, no. 11, pp. 3888–3901, 2015.
N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, “Indoor Segmentation and Support Inference from RGBD Images,” in European Conference on Computer Vision (ECCV), 2012.
B. Li, W. Ren, D. Fu, D. Tao, D. Feng, W. Zeng, and Z. Wang, “Benchmarking Single Image Dehazing and Beyond,” in IEEE Transactions on Image Processing (TIP), vol. 28, no. 1, pp. 492–505, 2018.
F. Liu, C. Shen, G. Lin, and I. Reid, “Learning depth from single monocular images using deep convolutional neural fields,” in IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 38, no. 10, pp. 2024–2039, 2016.
H. Zhang and V. M. Patel, “Densely Connected Pyramid Dehazing Network,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
T. Guo, X. Li, V. Cherukuri, and V. Monga, “Dense Scene Information Estimation Network for Dehazing,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019.
J. Kim, J. Kwon Lee, and K. Mu Lee, “Accurate Image Super-Resolution Using Very Deep Convolutional Networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual Losses for Real-Time Style Transfer and Super-Resolution,” in European Conference on Computer Vision (ECCV), 2016.
S.-C. Huang, B.-H. Chen, and W.-J. Wang, “Visibility Restoration of Single Hazy Images Captured in Real-World Weather Conditions,” in IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), vol. 24, no. 10, pp. 1814–1824, 2014.
I. Loshchilov and F. Hutter, “Decoupled Weight Decay Regularization,” in International Conference on Learning Representations (ICLR), 2019.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A Large-Scale Hierarchical Image Database,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
M. D. Zeiler and R. Fergus, “Visualizing and Understanding Convolutional Networks,” in European Conference on Computer Vision (ECCV), pp. 818–833, 2014.
dc.identifier.urihttp://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51764-
dc.description.abstractWhen a digital image is captured, the light received by the camera sensor includes not only the light reflected from the photographed objects and scene but also light reflected by particles suspended in the air. Haze is the natural phenomenon formed when suspended particles of a certain density gather in the air and absorb and reflect light. Photographs taken under hazy conditions exhibit low visibility, low contrast, and low color saturation. These characteristics affect a wide range of computer vision tasks, such as edge detection, image feature extraction, object classification, and surveillance systems. Restoring a clear image from a hazy one therefore helps improve the accuracy and performance of computer vision tasks.
Based on a physical model, this thesis proposes a deep learning algorithm that uses DenseNet to extract multiple haze-related features, employs two decoders to jointly estimate the transmission map and the atmospheric light, and finally obtains a clear, dehazed image through a refinement module. In the training data synthesis procedure, a contrast reduction step is added to account for the lower contrast of real hazy images. To let the model predict scene transmission more precisely, the scene depth used to generate the transmission maps is refined. To improve generalization, the training set covers indoor and outdoor scenes with various haze densities. During training, the loss function computes a mean squared error weighted by values derived from the transmission map, which strengthens the dehazing ability for distant scenes. In practice, it is observed that transmission maps estimated over local regions are finer, and the atmospheric light of a real scene may be inconsistent across local regions, so a local estimation method is designed to estimate a finer transmission map and local atmospheric light, improving applicability to real-world scenes.
zh_TW
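The contrast reduction step mentioned in the synthesis procedure above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation; the blend factor of 0.9 is an assumed value.

```python
import numpy as np

def reduce_contrast(img, factor=0.9):
    """Lower an image's contrast by blending toward its per-channel mean,
    mimicking the reduced contrast of real hazy photographs.
    `factor` < 1 compresses deviations from the mean; 0.9 is an assumed value."""
    mean = img.mean(axis=(0, 1), keepdims=True)  # per-channel mean, shape (1, 1, C)
    return mean + factor * (img - mean)
```

Blending toward the mean keeps the average brightness unchanged while shrinking pixel deviations, so synthesized training images better match the flat tonal range of real hazy photographs.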
dc.description.abstractWhen taking a digital image, the light received by the camera's photosensor includes not only the light reflected from the scene but also light scattered by particles suspended in the atmosphere. Haze is the weather phenomenon that results from extremely small airborne particles absorbing and scattering light. Digital images captured under hazy conditions exhibit poor visibility, lower contrast, and reduced color saturation. These characteristics affect a wide range of computer vision tasks, such as edge detection, feature extraction, object classification, and monitoring systems. Therefore, recovering a clear and pleasing image would significantly improve the performance of computer vision algorithms.
Based on the atmospheric light model, the proposed algorithm utilizes DenseNet to extract multiple features of the hazy image, employs two decoders to jointly estimate the transmission map and the global atmospheric light, and finally obtains the result through a refinement module. The training data is synthesized with a meticulous procedure that considers the contrast of hazy images, the fineness of the transmission map, and a variety of haze densities and scenes. To strengthen the dehazing ability for distant scenery, the loss function applies a mean squared error weighted by values derived from the transmission map. Observing that locally estimated transmission maps give finer predictions, and considering that the atmospheric light of a real scene may be inconsistent across local regions, a local estimation method is designed to estimate a finer transmission map and local atmospheric light, enhancing the applicability of dehazing in real life.
en
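The formulation both abstracts build on is the standard atmospheric scattering model, I(x) = J(x) * t(x) + A * (1 - t(x)), with transmission t(x) = exp(-beta * d(x)) for scene depth d. Below is a minimal NumPy sketch of haze synthesis, model inversion, and a transmission-weighted loss; the (1 - t) weighting and all parameter values are illustrative assumptions, not the thesis's exact scheme.

```python
import numpy as np

def synthesize_haze(clear, depth, beta=1.0, A=0.8):
    """Atmospheric scattering model: I = J * t + A * (1 - t),
    with transmission t = exp(-beta * depth). Returns (hazy, t)."""
    t = np.exp(-beta * depth)[..., np.newaxis]  # (H, W, 1), broadcasts over RGB
    return clear * t + A * (1.0 - t), t

def recover(hazy, t, A=0.8, t_min=0.1):
    """Invert the model: J = (I - A) / max(t, t_min) + A.
    Clamping t avoids amplifying noise where transmission is tiny."""
    return (hazy - A) / np.maximum(t, t_min) + A

def transmission_weighted_mse(pred_t, true_t):
    """MSE weighted toward distant (low-transmission) pixels.
    The (1 - t) weighting is an assumed illustration of
    'weights derived from the transmission map'."""
    w = 1.0 - true_t
    return float(np.sum(w * (pred_t - true_t) ** 2) / np.sum(w))
```

With the true t and A, inversion recovers the clear image exactly wherever t stays above the clamp, which is why accurate transmission and atmospheric light estimates are the crux of model-based dehazing.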
dc.description.provenanceMade available in DSpace on 2021-06-15T13:48:33Z (GMT). No. of bitstreams: 1
U0001-0908202010544900.pdf: 6423174 bytes, checksum: 5cd7150246c256b1eef5af3ec0167dda (MD5)
Previous issue date: 2020
en
dc.description.tableofcontentsAcknowledgements I
Chinese Abstract II
Abstract III
Contents V
List of Figures VII
List of Tables IX
Chapter 1 Introduction 1
1.1 Background 1
1.2 Main Contribution 4
1.3 Thesis Organization 7
Chapter 2 Related Work 8
2.1 Traditional prior-based approaches 8
2.2 Data-driven learning-based approaches 9
Chapter 3 Methodology 11
3.1 Procedure of Data Synthesis 11
3.2 Training and Testing Dataset 15
3.3 Network Architecture 16
3.3.1 Encoder 17
3.3.2 Transmission Map Decoder 18
3.3.3 Atmospheric Light Decoder 19
3.3.4 Refinement Module 19
3.4 Loss function 21
3.4.1 Transmission Map Loss 23
3.4.2 Atmospheric Light Loss 24
3.4.3 Haze-free Image Loss 24
3.4.4 Total Loss 25
3.5 Implementation 25
3.5.1 Training Phase 25
3.5.2 Testing Phase — the local estimation method 26
Chapter 4 Experimental Results 32
4.1 Evaluation on Real Hazy Images 32
4.2 Benefit of the local estimation method 47
4.3 More Comparisons with Real Hazy Images 48
4.4 Evaluation on Synthetic Hazy Images 52
Chapter 5 Conclusion 55
5.1 Assumptions and Limitations 55
5.2 Summary 55
Bibliography 57
dc.language.isoen
dc.subject編碼器解碼器架構zh_TW
dc.subject單一影像去霧化zh_TW
dc.subject影像增強zh_TW
dc.subject影像回復zh_TW
dc.subject大氣散射模型zh_TW
dc.subjectSingle Image Dehazingen
dc.subjectEncoder-Decoder Architectureen
dc.subjectAtmospheric Scattering Modelen
dc.subjectImage Restorationen
dc.subjectImage Enhancementen
dc.title基於細緻穿透圖與局部大氣亮度估測的單一影像去霧神經網路zh_TW
dc.titleDenseFeaturesNet: a single image dehazing network with refined transmission estimation and local atmospheric light predictionen
dc.typeThesis
dc.date.schoolyear108-2
dc.description.degree碩士 (Master's)
dc.contributor.oralexamcommittee葉家宏(Jia-Hong Ye),李界羲(Jie-Xi Li)
dc.subject.keyword單一影像去霧化,影像增強,影像回復,大氣散射模型,編碼器解碼器架構,zh_TW
dc.subject.keywordSingle Image Dehazing,Image Enhancement,Image Restoration,Atmospheric Scattering Model,Encoder-Decoder Architecture,en
dc.relation.page63
dc.identifier.doi10.6342/NTU202002700
dc.rights.note有償授權 (authorized access with payment)
dc.date.accepted2020-08-17
dc.contributor.author-college電機資訊學院zh_TW
dc.contributor.author-dept資訊工程學研究所zh_TW
Appears in Collections: 資訊工程學系

Files in this item:
File / Size / Format
U0001-0908202010544900.pdf (Restricted Access) / 6.27 MB / Adobe PDF