Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/21292

Full metadata record

| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 莊永裕 | |
| dc.contributor.author | Szu-Chieh Wang | en |
| dc.contributor.author | 王思傑 | zh_TW |
| dc.date.accessioned | 2021-06-08T03:30:25Z | - |
| dc.date.copyright | 2019-08-20 | |
| dc.date.issued | 2019 | |
| dc.date.submitted | 2019-08-14 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/21292 | - |
| dc.description.abstract | Photos taken by today's cameras in low-light environments are often heavily corrupted by noise, so improving low-light image quality is an important and pressing problem. Meanwhile, deep learning methods have achieved great success in many fields in recent years, but the large number of parameters in deep models relies on large, high-quality datasets for training and tuning, and collecting such datasets is very time-consuming. For the low-light image enhancement task, the usual approach requires data in paired form: a short-exposure photo and a long-exposure photo captured of the same scene, used for supervised learning. However, capturing two exposures of the same scene requires constraining the scene content and controlling the scene for a long time, which makes suitable scenes scarce. Moreover, existing deep learning methods replace the entire camera processing pipeline with a single model, so extra effort is needed to combine the strengths of both. Our method therefore aims to achieve low-light image enhancement from a dataset of burst-captured short-exposure photos together with long-exposure photos of different scenes, using two-stage training and a generative adversarial network to improve output quality. Furthermore, our method is designed to fit into the existing camera pipeline as a pre-processing stage, so the parameters and flexibility of the existing pipeline are preserved. Under the same settings, our method achieves greater quality improvement than other methods. (Translated from the Chinese abstract.) | zh_TW |
| dc.description.abstract | Taking photos under low light environments is always a challenge for current imaging pipelines. Image noise and artifacts corrupt the image. Given the recent great success of deep learning, it may seem straightforward to train a deep convolutional network to enhance such images and restore the underlying clean image. However, the large number of parameters in deep models may require a large amount of data to train. For the low light image enhancement task, paired data requires a short exposure image and a long exposure image taken with perfect alignment, which may not be achievable in every scene, thus limiting the choice of possible scenes to capture paired data and increasing the effort to collect training data. Also, data-driven solutions tend to replace the entire camera pipeline and cannot be easily integrated into existing pipelines. Therefore, we propose to handle the task with our 2-stage pipeline, consisting of an imperfect denoise network and a bias correction network, BC-UNet. Our method only requires noisy bursts of short exposure images and unpaired long exposure images, reducing the effort of collecting training data. Also, our method works in the raw domain and is capable of being easily integrated into the existing camera pipeline. Our method achieves comparable improvements to other methods under the same settings. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-08T03:30:25Z (GMT). No. of bitstreams: 1 ntu-108-R06922019-1.pdf: 40489479 bytes, checksum: 00a687c6777b579a1c0cee5511abf18b (MD5) Previous issue date: 2019 | en |
| dc.description.tableofcontents | Oral Defense Committee Certification i
Acknowledgements ii
Chinese Abstract iii
Abstract iv
1 Introduction 1
2 Related Work 3
2.1 Image Denoising 3
2.1.1 Supervised Image Denoising with Neural Networks 3
2.1.2 Training with Real-world Low Light Images 4
2.1.3 Training with only Noisy Observations 4
2.2 Generative Adversarial Networks 5
3 Methodology 6
3.1 Whole Pipeline 6
3.2 Stage 1 - Denoiser trained with Noise2Noise 7
3.3 Stage 2 - Bias Correction with Unpaired Adversarial Training (BC-UNet) 8
3.4 Training details 10
4 Experiment 11
4.1 SID dataset 11
4.2 Comparison with Synthetic-based Methods 12
4.2.1 Signal Dependent Gaussian Noise 12
4.2.2 Generate Noise according to Prior Knowledge on Cameras 13
4.3 Comparison with Noise2Noise 13
4.4 Experiment Settings 14
4.5 PSNR Results 15
4.5.1 Raw PSNR Results 16
4.5.2 JPG PSNR Results 16
4.5.3 Results of Noise2Truth 17
4.5.4 Ablation study on global information 18
4.6 Qualitative Results 18
5 Conclusion 22
Bibliography 23 | |
| dc.language.iso | en | |
| dc.title | 基於生成對抗網路之極低光源影像品質增強 | zh_TW |
| dc.title | Extreme Low Light Image Enhancement with Generative Adversarial Networks | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 107-2 | |
| dc.description.degree | Master | |
| dc.contributor.oralexamcommittee | 吳賦哲,葉正聖 | |
| dc.subject.keyword | 低光源影像,低光源影像品質增進,生成對抗網路 | zh_TW |
| dc.subject.keyword | low light imaging, low light image enhancement, generative adversarial network | en |
| dc.relation.page | 25 | |
| dc.identifier.doi | 10.6342/NTU201903501 | |
| dc.rights.note | Not authorized | |
| dc.date.accepted | 2019-08-14 | |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
| dc.contributor.author-dept | Graduate Institute of Computer Science and Information Engineering | zh_TW |
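The abstract describes a 2-stage pipeline: an imperfect denoiser trained Noise2Noise-style on bursts of short-exposure raw frames, followed by a bias-correction network (BC-UNet), inserted as a pre-processing stage ahead of the existing camera pipeline. The following is a minimal illustrative sketch of that data flow only; the function names and the simple stand-ins (burst averaging for the denoise network, a per-image gain/offset for BC-UNet) are hypothetical and are not the thesis implementation.

```python
import numpy as np

def stage1_denoise(burst: np.ndarray) -> np.ndarray:
    """Stand-in for the imperfect denoise network: average the burst
    of short-exposure raw frames (shape: [frames, H, W])."""
    return burst.mean(axis=0)

def stage2_bias_correct(raw: np.ndarray, gain: float = 1.0,
                        offset: float = 0.0) -> np.ndarray:
    """Stand-in for BC-UNet: correct the residual bias left by stage 1.
    In the thesis this is a network trained adversarially against
    unpaired long-exposure images; here it is a fixed gain/offset."""
    return np.clip(raw * gain + offset, 0.0, 1.0)

def enhance_raw(burst: np.ndarray) -> np.ndarray:
    """Pre-processing stage: its raw-domain output would feed the
    existing (unmodified) camera pipeline."""
    return stage2_bias_correct(stage1_denoise(burst))

# Toy usage: a burst of 8 noisy short-exposure "raw" frames.
rng = np.random.default_rng(0)
clean = np.full((16, 16), 0.2)
burst = clean + rng.normal(0.0, 0.1, size=(8, 16, 16))
out = enhance_raw(burst)
print(out.shape)  # (16, 16)
```

Because the output stays in the raw domain, the existing pipeline's parameters (white balance, tone mapping, etc.) remain fully adjustable downstream, which is the integration property the abstract emphasizes.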
| Appears in Collections: | Department of Computer Science and Information Engineering | |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-108-1.pdf (Restricted Access) | 39.54 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
