Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74856
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 莊永裕(Yung-Yu Chuang) | |
dc.contributor.author | Sheng-Yeh Chen | en |
dc.contributor.author | 陳聖曄 | zh_TW |
dc.date.accessioned | 2021-06-17T09:08:56Z | - |
dc.date.available | 2019-11-04 | |
dc.date.copyright | 2019-11-04 | |
dc.date.issued | 2019 | |
dc.date.submitted | 2019-10-28 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74856 | - |
dc.description.abstract | 現代相機能夠拍攝的動態範圍有限,在單一曝光值拍攝的影像中,時常出現過曝或黑暗的區域。雖然這個問題可以採用多張不同曝光值的影像解決,曝光融合的方法仍存在由相機運動和移動中物體所造成的鬼影瑕疵和細節損失。本篇論文提出一個針對曝光融合的深度學習網路。為減少可能產生的鬼影問題,我們的網路只採用兩張影像,一張低曝光值的影像和一張高曝光值的影像。我們的網路將補償相機運動的單應性估計、修正剩餘未對齊和移動中像素的注意力機制以及去除剩餘瑕疵的對抗式學習整合為一。測試在現實世界用手機相機手持拍攝的影像上,實驗顯示我們提出的方法可產生高質量影像並在黑暗和光明區域中有可信細節、逼真色彩的呈現。 | zh_TW |
dc.description.abstract | Modern cameras have limited dynamic ranges and often produce images with saturated or dark regions when a single exposure is used. Although the problem can be addressed by taking multiple images with different exposures, exposure fusion methods still need to deal with ghosting artifacts and detail loss caused by camera motion or moving objects. This paper proposes a deep network for exposure fusion. To reduce the potential ghosting problem, our network takes only two images: an underexposed image and an overexposed one. Our network integrates homography estimation to compensate for camera motion, an attention mechanism to correct remaining misalignment and moving pixels, and adversarial learning to alleviate other remaining artifacts. Experiments on real-world photos taken with handheld mobile phone cameras show that the proposed method can generate high-quality images with faithful detail and vivid color rendition in both dark and bright areas. (A minimal illustrative sketch of this pipeline is given after the metadata table below.) | en |
dc.description.provenance | Made available in DSpace on 2021-06-17T09:08:56Z (GMT). No. of bitstreams: 1 ntu-108-R06922165-1.pdf: 5394765 bytes, checksum: 6087fbce0650702389dc03feee125e63 (MD5) Previous issue date: 2019 | en |
dc.description.tableofcontents | 誌謝 ii
摘要 iii
Abstract iv
1 Introduction 1
2 Related Work 4
2.1 Multi-exposure Image Alignment 4
2.2 HDR Imaging and Exposure Fusion 5
2.3 Generative Adversarial Networks 5
3 Methodology 7
3.1 Homography Network 8
3.1.1 4-point homography parameterization 8
3.1.2 Training example generation 9
3.1.3 Homography network 9
3.2 Generator 10
3.2.1 Attention network 10
3.2.2 Merging network 11
3.3 Discriminator 12
4 Experiment 14
4.1 Quantitative Evaluation 15
4.2 Homography Estimation 15
4.3 Exposure Fusion 16
4.4 Ablation Studies 17
4.4.1 Study on the model architecture 17
4.4.2 Study on the training loss function 18
5 Conclusion 20
Bibliography 21 | |
dc.language.iso | en | |
dc.title | 藉由單應性估計與注意力學習進行深度去鬼影曝光融合 | zh_TW |
dc.title | Deep Exposure Fusion with Deghosting via Homography Estimation and Attention Learning | en |
dc.type | Thesis | |
dc.date.schoolyear | 108-1 | |
dc.description.degree | 碩士 | |
dc.contributor.oralexamcommittee | 陳文進(Wen-Chin Chen),黃子魁(Tzu-Kuei Huang) | |
dc.subject.keyword | 曝光融合, 去鬼影, 單應性估計, 注意力學習, 對抗式學習 | zh_TW |
dc.subject.keyword | exposure fusion, deghosting, homography estimation, attention learning, adversarial learning | en |
dc.relation.page | 23 | |
dc.identifier.doi | 10.6342/NTU201904236 | |
dc.rights.note | 有償授權 | |
dc.date.accepted | 2019-10-29 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
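To make the pipeline described in the abstract above concrete, the following is a minimal, hypothetical PyTorch sketch of its main components: an attention network that down-weights misaligned or moving pixels in the aligned over-exposed frame, a merging network that fuses the exposure pair, and a PatchGAN-style discriminator used for the adversarial loss. Layer counts, channel widths, and the names `Generator`, `Discriminator`, and `conv_block` are illustrative assumptions, not the thesis's actual architecture; the homography network that would warp the over-exposed frame onto the under-exposed one is replaced here by an identity placeholder.

```python
# Minimal, hypothetical sketch of a two-exposure fusion pipeline with attention and
# an adversarial critic. All module names and sizes are illustrative assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # 3x3 conv + instance norm + ReLU, a common image-to-image building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class Generator(nn.Module):
    """Attention network + merging network operating on an aligned exposure pair."""

    def __init__(self):
        super().__init__()
        # attention branch: looks at both images, outputs per-pixel weights in [0, 1]
        self.attention = nn.Sequential(
            conv_block(6, 32), conv_block(32, 32),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )
        # merging branch: fuses the under-exposed image with the attended over-exposed one
        self.merge = nn.Sequential(
            conv_block(6, 64), conv_block(64, 64),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, under, over_warped):
        pair = torch.cat([under, over_warped], dim=1)
        attended = self.attention(pair) * over_warped   # suppress ghosting pixels
        return self.merge(torch.cat([under, attended], dim=1))


class Discriminator(nn.Module):
    """PatchGAN-style critic applied to the fused output during adversarial training."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 64), conv_block(64, 128),
                                 nn.Conv2d(128, 1, 3, padding=1))

    def forward(self, img):
        return self.net(img)


if __name__ == "__main__":
    under = torch.rand(1, 3, 256, 256)   # under-exposed input
    over = torch.rand(1, 3, 256, 256)    # over-exposed input
    # In the full pipeline the over-exposed frame would first be warped by a homography
    # predicted from the pair; an identity "warp" stands in for that step here.
    fused = Generator()(under, over)
    score = Discriminator()(fused)
    print(fused.shape, score.shape)      # (1, 3, 256, 256) and (1, 1, 256, 256)
```

In this sketch, multiplying the warped over-exposed image by the predicted attention map before merging is what suppresses ghosting: regions that do not match the under-exposed reference receive low weights and contribute little to the fused result.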
Appears in Collections: | 資訊工程學系 |
Files in This Item:
File | Size | Format |
---|---|---|
ntu-108-1.pdf (currently not authorized for public access) | 5.27 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless their copyright terms state otherwise.