Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70891
Title: | Deep Burst Low Light Image Enhancement with Alignment, Denoising and Blending (基於對齊、降噪與混合的深度多張低亮度影像增強) |
Authors: | Hsueh-I Chen 陳學儀 |
Advisor: | Yung-Yu Chuang (莊永裕) |
Keywords: | Burst low light image enhancement, alignment, denoising, blending |
Publication Year: | 2018 |
Degree: | Master's |
Abstract: | Taking photos in low-light environments is a challenge for most cameras. In this thesis, we propose a neural network pipeline for processing burst short-exposure raw data. Our method consists of three stages: alignment, denoising, and blending. First, we use FlowNet2.0 to predict the optical flow between the burst images and align them to the reference frame. We then feed the aligned burst raw data into a DenoiseUNet, which includes a denoising part and a color-restoration part, to generate an RGB image. Finally, a MaskUNet produces a mask that identifies misaligned regions, and we use this mask to blend the output generated from the single reference raw image with the output generated from the burst, so that the final result falls back to the single-image output wherever alignment fails. Experiments show that the burst-input results are significantly sharper in detail than those produced from a single input. |
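The pipeline described in the abstract (flow-based alignment, U-Net denoising with color restoration, and mask-based blending) can be illustrated with a minimal PyTorch sketch. The module names DenoiseUNet and MaskUNet follow the abstract, but the tiny stand-in networks, the packed 4-channel raw input, and the helper names (warp_with_flow, burst_enhance, TinyUNet) below are illustrative assumptions, not the thesis implementation; the FlowNet2.0 flows are assumed to be precomputed.

```python
# Minimal sketch of the burst pipeline described in the abstract:
# (1) warp each burst frame toward the reference using precomputed optical flow
#     (the thesis uses FlowNet2.0; flows here are assumed to be given),
# (2) denoise / color-restore the aligned stack with a small stand-in network,
# (3) blend the burst result with the single-frame result using a predicted mask.
# Network depths, channel counts, and the packed-raw format are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


def warp_with_flow(frame, flow):
    """Warp a frame (B, C, H, W) toward the reference using flow (B, 2, H, W)."""
    b, _, h, w = frame.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    x = xs.unsqueeze(0) + flow[:, 0]          # displaced x coordinates
    y = ys.unsqueeze(0) + flow[:, 1]          # displaced y coordinates
    # Normalize to [-1, 1] for grid_sample.
    grid = torch.stack((2.0 * x / (w - 1) - 1.0,
                        2.0 * y / (h - 1) - 1.0), dim=-1)
    return F.grid_sample(frame, grid, align_corners=True)


class TinyUNet(nn.Module):
    """Small encoder-decoder stand-in for DenoiseUNet / MaskUNet
    (no skip connections, unlike a real U-Net)."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
                                 nn.Conv2d(32, out_ch, 3, padding=1))

    def forward(self, x):
        return self.dec(self.enc(x))


def burst_enhance(burst, flows, denoise_net, mask_net):
    """burst: (B, N, C, H, W) packed raw frames, frame 0 is the reference.
    flows: (B, N, 2, H, W) optical flow from each frame to the reference."""
    b, n, c, h, w = burst.shape
    aligned = [burst[:, 0]]                               # reference stays put
    for i in range(1, n):
        aligned.append(warp_with_flow(burst[:, i], flows[:, i]))
    stack = torch.cat(aligned, dim=1)                     # (B, N*C, H, W)

    burst_rgb = denoise_net(stack)                        # burst result
    single_rgb = denoise_net(burst[:, 0].repeat(1, n, 1, 1))  # single-frame result
    mask = torch.sigmoid(mask_net(stack))                 # 1 where alignment is trusted
    return mask * burst_rgb + (1.0 - mask) * single_rgb   # fall back on misalignment


if __name__ == "__main__":
    B, N, C, H, W = 1, 4, 4, 64, 64                       # 4-channel packed Bayer raw
    burst = torch.rand(B, N, C, H, W)
    flows = torch.zeros(B, N, 2, H, W)                    # identity flow for the demo
    denoise_net = TinyUNet(N * C, 3)                      # raw stack -> RGB
    mask_net = TinyUNet(N * C, 1)                         # per-pixel blending mask
    out = burst_enhance(burst, flows, denoise_net, mask_net)
    print(out.shape)                                      # torch.Size([1, 3, 64, 64])
```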
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70891 |
DOI: | 10.6342/NTU201802499 |
Fulltext Rights: | Authorized access with fee (有償授權) |
Appears in Collections: | Graduate Institute of Networking and Multimedia (資訊網路與多媒體研究所) |
Files in This Item:
File | Access | Size | Format
---|---|---|---
ntu-107-1.pdf | Restricted Access | 7.91 MB | Adobe PDF