Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74856
Title: Deep Exposure Fusion with Deghosting via Homography Estimation and Attention Learning
Authors: Sheng-Yeh Chen (陳聖曄)
Advisor: Yung-Yu Chuang (莊永裕)
Keywords: exposure fusion, deghosting, homography estimation, attention learning, adversarial learning
Publication Year: 2019
Degree: Master's
Abstract: Modern cameras have limited dynamic ranges and often produce images with saturated or dark regions when using a single exposure. Although the problem can be addressed by taking multiple images with different exposures, exposure fusion methods must deal with ghosting artifacts and detail loss caused by camera motion or moving objects. This paper proposes a deep network for exposure fusion. To reduce the potential ghosting problem, our network takes only two images, an underexposed image and an overexposed one. Our network integrates homography estimation for compensating camera motion, an attention mechanism for correcting residual misalignment and moving pixels, and adversarial learning for alleviating remaining artifacts. Experiments on real-world photos taken with handheld mobile phones show that the proposed method can generate high-quality images with faithful detail and vivid color rendition in both dark and bright areas.
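The abstract describes a pipeline of homography-based alignment followed by attention-weighted fusion (with adversarial refinement on top). The sketch below illustrates only the first two stages using classical stand-ins: ORB feature matching with cv2.findHomography in place of the thesis's learned homography estimator, and a hand-crafted well-exposedness weight map in place of its learned attention. It is not the thesis's network; all file names are hypothetical and the adversarial stage is omitted.

```python
# Minimal two-exposure align-and-fuse sketch (classical stand-ins only).
import cv2
import numpy as np

def align_with_homography(under: np.ndarray, over: np.ndarray) -> np.ndarray:
    """Warp the overexposed image onto the underexposed one to compensate
    for camera motion between the two shots."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(cv2.cvtColor(under, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = orb.detectAndCompute(cv2.cvtColor(over, cv2.COLOR_BGR2GRAY), None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = under.shape[:2]
    return cv2.warpPerspective(over, H, (w, h))

def weighted_fusion(under: np.ndarray, over_aligned: np.ndarray) -> np.ndarray:
    """Per-pixel blend favoring whichever input is better exposed, so
    saturated regions defer to the underexposed shot and dark regions
    defer to the overexposed one."""
    u = under.astype(np.float32) / 255.0
    o = over_aligned.astype(np.float32) / 255.0
    # Well-exposedness: Gaussian centered on mid-gray (as in Mertens et al.).
    wu = np.exp(-((u.mean(axis=2) - 0.5) ** 2) / (2 * 0.2 ** 2))
    wo = np.exp(-((o.mean(axis=2) - 0.5) ** 2) / (2 * 0.2 ** 2))
    w = (wu / (wu + wo + 1e-8))[..., None]
    fused = w * u + (1.0 - w) * o
    return (fused * 255.0).clip(0, 255).astype(np.uint8)

under = cv2.imread("under_exposed.jpg")  # hypothetical input paths
over = cv2.imread("over_exposed.jpg")
cv2.imwrite("fused.jpg", weighted_fusion(under, align_with_homography(under, over)))
```

In the thesis, both hand-crafted stages above are replaced by learned modules, and an adversarial loss is added to suppress residual ghosting artifacts.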
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74856
DOI: 10.6342/NTU201904236
Fulltext Rights: Paid authorization (restricted access)
Appears in Collections: Department of Computer Science and Information Engineering
Files in This Item:
File | Size | Format
---|---|---
ntu-108-1.pdf (Restricted Access) | 5.27 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.