Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70891
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 莊永裕(Yung-Yu Chuang) | |
dc.contributor.author | Hsueh-I Chen | en |
dc.contributor.author | 陳學儀 | zh_TW |
dc.date.accessioned | 2021-06-17T04:42:38Z | - |
dc.date.available | 2020-08-10 | |
dc.date.copyright | 2018-08-10 | |
dc.date.issued | 2018 | |
dc.date.submitted | 2018-08-05 | |
dc.identifier.citation | References:
[1] C. Chen, Q. Chen, J. Xu, and V. Koltun. Learning to see in the dark. arXiv preprint arXiv:1805.01934, 2018.
[2] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080–2095, 2007.
[3] X. Dong, G. Wang, Y. Pang, W. Li, J. Wen, W. Meng, and Y. Lu. Fast efficient algorithm for enhancement of low lighting video. 2011.
[4] C. Godard, K. Matzen, and M. Uyttendaele. Deep burst denoising. arXiv preprint arXiv:1712.05790, 2017.
[5] X. Guo, Y. Li, and H. Ling. LIME: Low-light image enhancement via illumination map estimation. IEEE Transactions on Image Processing, 26(2):982–993, 2017.
[6] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, page 6, 2017.
[7] V. Jain and S. Seung. Natural image denoising with convolutional networks. In Advances in Neural Information Processing Systems, pages 769–776, 2009.
[8] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[9] A. Loza, D. R. Bull, P. R. Hill, and A. M. Achim. Automatic contrast enhancement of low-light images based on local statistics of wavelet coefficients. Digital Signal Processing, 23(6):1856–1866, 2013.
[10] M. Maggioni, G. Boracchi, A. Foi, and K. Egiazarian. Video denoising, deblocking, and enhancement through separable 4-D nonlocal spatiotemporal transforms. IEEE Transactions on Image Processing, 21(9):3952–3966, 2012.
[11] H. Malm, M. Oskarsson, E. Warrant, P. Clarberg, J. Hasselgren, and C. Lejdfors. Adaptive enhancement and noise reduction in very low light-level video. In IEEE 11th International Conference on Computer Vision (ICCV), pages 1–8, 2007.
[12] B. Mildenhall, J. T. Barron, J. Chen, D. Sharlet, R. Ng, and R. Carroll. Burst denoising with kernel prediction networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2502–2510, 2018.
[13] S. Park, S. Yu, B. Moon, S. Ko, and J. Paik. Low-light image enhancement using variational optimization-based retinex model. IEEE Transactions on Consumer Electronics, 63(2):178–184, 2017.
[14] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing, 26(7):3142–3155, 2017. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70891 | - |
dc.description.abstract | 對於大部分的相機來說,在低光源的環境下拍照是一個很有挑戰性的問題。在這篇論文裡我們提出了一個神經網路的架構。我們會輸入多張低亮度的原始圖檔,並經過對齊、降噪與混合這三個步驟。我們使用FlowNet2.0來預測多張影像之間的光流,並且將其對齊。接著我們會將這些對齊的多張原始圖檔輸入DenoiseUNet以獲得彩色的影像。其中DenoiseUNet包括了降噪以及色彩還原這兩個部分。最後我們會將多張影像的結果以及單張影像的結果融合,讓最終的輸出在錯誤對齊時使用單張影像的結果。實驗結果顯示了使用多張影像的方法在細節上可以比只使用單一影像來得清晰許多。 | zh_TW
dc.description.abstract | Taking photos in low-light environments is a challenge for most cameras. In this thesis, we propose a neural network pipeline for processing bursts of short-exposure raw data. Our method consists of three stages: alignment, denoising, and blending. First, we use FlowNet2.0 to predict the optical flow between the burst images and align them. Then we feed the aligned burst raw data into a DenoiseUNet, which consists of a denoise part and a color part, to generate an RGB image. Finally, we use a MaskUNet to generate a mask that identifies misaligned regions, and we use this mask to blend the output produced from a single raw image with the output produced from the burst of raw images. Experiments show that using burst inputs recovers significantly sharper details than using a single input. (A schematic code sketch of this pipeline is given after the metadata table below.) | en
dc.description.provenance | Made available in DSpace on 2021-06-17T04:42:38Z (GMT). No. of bitstreams: 1 ntu-107-R05944021-1.pdf: 8097733 bytes, checksum: a14e94fdf5b2a9f16668ce75d73a4026 (MD5) Previous issue date: 2018 | en |
dc.description.tableofcontents | Acknowledgements i
摘要 (Chinese Abstract) ii
Abstract iii
List of Figures vi
List of Tables viii
Chapter 1 Introduction 1
Chapter 2 Related Work 3
2.1 Image Denoising 3
2.2 Low Light Image Enhancement 4
Chapter 3 Background 5
3.1 SID dataset 5
3.2 Learning-to-see-in-the-dark 6
Chapter 4 Methodology 8
4.1 Whole Pipeline 8
4.2 Burst Raw Images Alignment using FlowNet2.0 10
4.3 Burst Short-Exposure Raw Images Enhancement 12
4.3.1 DenoiseUNet 12
4.3.2 MultiUNet 13
4.4 Blending Single-RGB and Burst-RGB 14
4.5 Training Details 15
Chapter 5 Experiment and Result 16
5.1 Comparison with SID 16
5.1.1 SID dataset 16
5.1.2 Real data 17
5.2 Comparison with KPN 21
5.3 Effect of Blending 22
5.4 Model Comparison 24
5.4.1 Comparison between denoise-part and color-part 24
5.4.2 Comparison between DenoiseUNet and MultiUNet 25
Chapter 6 Conclusion 28
Bibliography 30 | |
dc.language.iso | en | |
dc.title | 基於對齊、降噪與混合的深度多張低亮度影像增強 | zh_TW |
dc.title | Deep Burst Low Light Image Enhancement with Alignment, Denoising and Blending | en |
dc.type | Thesis | |
dc.date.schoolyear | 106-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 林彥宇 (Yen-Yu Lin), 陳煥宗 (Hwann-Tzong Chen) | |
dc.subject.keyword | 多張低亮度影像增強, 對齊, 降噪, 融合 | zh_TW
dc.subject.keyword | Burst low light image enhancement, alignment, denoising, blending | en
dc.relation.page | 31 | |
dc.identifier.doi | 10.6342/NTU201802499 | |
dc.rights.note | Paid authorization | |
dc.date.accepted | 2018-08-06 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊網路與多媒體研究所 | zh_TW |
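
The following is a minimal, hypothetical sketch of the align → denoise → blend pipeline described in the abstract above, written in plain NumPy so it runs as-is. It is not the thesis implementation: `estimate_flow`, `warp`, `denoise_unet`, and `mask_unet` are illustrative placeholders standing in for FlowNet2.0, the DenoiseUNet, and the MaskUNet, and the raw/RGB formats are simplified to a single 3-channel array.

```python
# Hypothetical pipeline sketch: align -> denoise -> blend (placeholder implementations).
import numpy as np


def estimate_flow(ref, img):
    """Stand-in for FlowNet2.0: return a zero optical-flow field of shape (H, W, 2)."""
    return np.zeros((*ref.shape[:2], 2), dtype=np.float32)


def warp(img, flow):
    """Warp `img` toward the reference frame with a crude nearest-neighbor remap."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    return img[src_y, src_x]


def denoise_unet(frames):
    """Stand-in for DenoiseUNet (denoise part + color part): average the aligned frames."""
    return frames.mean(axis=0)


def mask_unet(single_rgb, burst_rgb, threshold=0.1):
    """Stand-in for MaskUNet: flag pixels where the burst result deviates strongly
    from the single-frame result as misaligned (mask = 0 there, 1 elsewhere)."""
    diff = np.abs(single_rgb - burst_rgb).mean(axis=-1, keepdims=True)
    return (diff < threshold).astype(np.float32)


def enhance_burst(burst_raw):
    """burst_raw: (N, H, W, C) short-exposure frames; frame 0 is the reference."""
    ref = burst_raw[0]
    aligned = np.stack([warp(f, estimate_flow(ref, f)) for f in burst_raw])
    burst_rgb = denoise_unet(aligned)        # result from the whole burst
    single_rgb = denoise_unet(ref[None])     # result from the reference frame only
    mask = mask_unet(single_rgb, burst_rgb)  # 1 = well aligned, 0 = misaligned
    return mask * burst_rgb + (1.0 - mask) * single_rgb


if __name__ == "__main__":
    burst = np.random.rand(4, 64, 64, 3).astype(np.float32)  # fake short-exposure burst
    print(enhance_burst(burst).shape)  # (64, 64, 3)
```

In the actual method the flow estimator and the U-Nets are learned networks, but the final step follows the same mask-weighted combination of the single-frame and burst results described in the abstract.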
Appears in Collections: 資訊網路與多媒體研究所
Files in This Item:
File | Size | Format |
---|---|---|
ntu-107-1.pdf (currently not authorized for public access) | 7.91 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.