Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74315
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 莊永裕(Yung-Yu Chuang) | |
dc.contributor.author | Ting-Kang Liu | en |
dc.contributor.author | 劉庭綱 | zh_TW |
dc.date.accessioned | 2021-06-17T08:29:25Z | - |
dc.date.available | 2021-02-22 | |
dc.date.copyright | 2021-02-22 | |
dc.date.issued | 2021 | |
dc.date.submitted | 2021-01-25 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74315 | - |
dc.description.abstract | 本篇論文提出一個基於深度學習的低光源影像增強方法,相較於前人的方法直接估計輸出圖片,由於圖片照度相對更加平滑,我們的方法選擇估計輸入圖片的照度,再將原圖除以網路估計的照度來產生最終的輸出圖片;另外,我們基於 Retinex 理論設計了一個針對低光源影像的照度估計方法,並用其當作網路的損失函數之一來幫助其收斂;最後,由於環境與設備上的限制,導致成對資料在取得上較為困難,且在場景及數量上的豐富度較為不足,我們的方法利用非成對資料加上生成對抗網路來處理上述問題。透過與前人的方法比較,發現我們的方法在亮度上有較好的表現,也展示了此方法的可行性。 | zh_TW |
dc.description.abstract | This thesis proposes a deep-learning-based low-light image enhancement method. Previous works usually model the task as image-to-image translation; in contrast, our method predicts the illumination of the input image, which is smoother than the image itself, and then divides the input image by the predicted illumination to obtain the final result. In addition, we devise an illumination estimation method for low-light images based on Retinex theory and use its result in a loss function that helps the network converge. Finally, paired training data are hard to collect because of constraints on environment and equipment, so paired datasets typically suffer from limited scene diversity and insufficient size; to address this, we train on unpaired data with a generative adversarial network. Comparisons with previous methods show that ours produces better brightness, demonstrating the feasibility of the approach. | en |
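dc.description.abstract | The core enhancement step described above — estimating a smooth illumination map and dividing the input by it, in the spirit of Retinex — can be sketched as follows. This is a minimal illustrative stand-in, not the thesis's pipeline: the per-pixel channel maximum, the naive local-max filter, and the box blur replace the learned network, and the window radius and epsilon are assumed values.

```python
import numpy as np

def _local_max(x, r):
    """Maximum over a (2r+1) x (2r+1) window, via shifted copies (naive)."""
    p = np.pad(x, r, mode="edge")
    stack = [p[i:i + x.shape[0], j:j + x.shape[1]]
             for i in range(2 * r + 1) for j in range(2 * r + 1)]
    return np.max(stack, axis=0)

def _box_blur(x, r):
    """Mean over a (2r+1) x (2r+1) window, same naive sliding scheme."""
    p = np.pad(x, r, mode="edge")
    stack = [p[i:i + x.shape[0], j:j + x.shape[1]]
             for i in range(2 * r + 1) for j in range(2 * r + 1)]
    return np.mean(stack, axis=0)

def enhance(img, r=7, eps=1e-3):
    """Retinex-style enhancement: divide the image by a smooth illumination map.

    img: float array in [0, 1], shape (H, W, 3).
    r:   window radius for the max filter and blur (illustrative choice).
    eps: floor on the illumination, avoiding division by zero in dark regions.
    """
    # Rough illumination: per-pixel maximum over color channels.
    lum = img.max(axis=2)
    # A local maximum filter followed by averaging gives a smooth upper
    # envelope of the luminance, standing in for the predicted illumination.
    illum = _box_blur(_local_max(lum, r), r)
    illum = np.clip(illum, eps, 1.0)[..., None]
    # Reflectance = image / illumination, clipped back to [0, 1].
    return np.clip(img / illum, 0.0, 1.0)
```

A uniformly dark input (e.g. all pixels at 0.1) maps to full brightness, since the estimated illumination equals the pixel values and the division recovers a reflectance of 1; in a learned system the division is the same but the illumination comes from the network. | en |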
dc.description.provenance | Made available in DSpace on 2021-06-17T08:29:25Z (GMT). No. of bitstreams: 1 U0001-2201202114271700.pdf: 5980940 bytes, checksum: d31bf89c5bd7eac97ae8f4a7fb7516ef (MD5) Previous issue date: 2021 | en |
dc.description.tableofcontents | Committee Certification ii Acknowledgements iii Abstract (Chinese) iv Abstract v 1 Introduction 1 2 Related Work 3 2.1 Conventional Methods 3 2.2 Data-Driven Methods 4 3 Method 5 3.1 Illumination Estimation 5 3.2 Model Architecture 7 3.3 Loss Function 7 4 Experiment 10 4.1 Dataset 10 4.2 Implementation Details 10 4.3 Visual Quality Comparison 10 4.4 Future Work 11 5 Conclusion 13 Bibliography 14 | |
dc.language.iso | en | |
dc.title | 基於最大濾波空間的低光源影像增強方法 | zh_TW |
dc.title | A Low Light Image Enhancement Method Based on Maximum Filter Space | en |
dc.type | Thesis | |
dc.date.schoolyear | 109-1 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 吳賦哲(Fu-Che Wu),葉正聖(Jeng-Sheng Yeh) | |
dc.subject.keyword | 低光源影像增強,Retinex 理論,生成對抗網路, | zh_TW |
dc.subject.keyword | Low-light image enhancement,Retinex Theory,Generative Adversarial Network, | en |
dc.relation.page | 16 | |
dc.identifier.doi | 10.6342/NTU202100126 | |
dc.rights.note | Paid authorization | |
dc.date.accepted | 2021-01-25 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊網路與多媒體研究所 | zh_TW |
Appears in Collections: | Graduate Institute of Networking and Multimedia |
Files in This Item:
File | Size | Format | |
---|---|---|---|
U0001-2201202114271700.pdf (currently not authorized for public access) | 5.84 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their respective license terms.