Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96303

| Title: | Dual-Pixel Defocus Deblurring with Diffusion Model and Defocus Map Estimation |
| Author: | Li-Jen Chang |
| Advisor: | Yung-Yu Chuang |
| Keywords: | Defocus Deblurring, Dual-Pixel Image, Diffusion Model, Defocus Map Estimation |
| Publication Year: | 2024 |
| Degree: | Master's |
| Abstract: | Wide apertures are crucial in computer vision applications that require enhanced exposure in low-light conditions or a fast shutter speed. However, they can introduce defocus blur due to shallow depth of field, necessitating defocus deblurring to restore all-in-focus images. Defocus deblurring is particularly challenging in image regions with significant blur, where existing regression-based methods often struggle to recover fine details. While generative approaches can improve results, they sometimes produce outputs misaligned with the target image distribution, causing limitations in distortion-based metrics. To address these issues, we present a novel defocus deblurring framework that integrates the strengths of both generative and deterministic models. Our method leverages the generative capabilities of latent diffusion to restore highly defocused areas. By incorporating the physical model of dual-pixel imaging, we estimate defocus maps in a self-supervised manner, guiding a fusion process that combines generative and deterministic restoration outputs. This fusion allows us to preserve distortion accuracy while recovering heavily blurred regions. Experimental results demonstrate that our approach outperforms state-of-the-art methods in visual quality, delivering robust performance in challenging defocus scenarios. |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96303 |
| DOI: | 10.6342/NTU202404628 |
| Full-Text Access: | Not authorized |
| Appears in Collections: | Department of Computer Science and Information Engineering |
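The abstract describes fusing the outputs of a generative (diffusion) model and a deterministic model under the guidance of an estimated defocus map. As a rough illustration only — the thesis's actual fusion rule is not specified in this record, and the function name `fuse_outputs` and the linear per-pixel weighting below are assumptions — such a guided blend could look like:

```python
import numpy as np

def fuse_outputs(generative: np.ndarray, deterministic: np.ndarray,
                 defocus_map: np.ndarray) -> np.ndarray:
    """Blend two restored images per pixel, weighted by estimated defocus.

    generative, deterministic: (H, W, 3) restored images.
    defocus_map: (H, W) per-pixel defocus magnitude (hypothetical units).
    """
    # Normalize the defocus map to [0, 1] so it can serve as a blend weight.
    d_min, d_max = defocus_map.min(), defocus_map.max()
    w = (defocus_map - d_min) / (d_max - d_min + 1e-8)
    # Strongly defocused pixels (w near 1) take the generative result;
    # nearly sharp pixels (w near 0) keep the deterministic result.
    return w[..., None] * generative + (1.0 - w[..., None]) * deterministic
```

A soft weight like this (rather than a hard threshold) avoids visible seams at the boundary between blurred and sharp regions, which is one plausible motivation for map-guided fusion.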
Files in This Item:

| File | Size | Format |
|---|---|---|
| ntu-113-1.pdf (restricted, not publicly accessible) | 5.89 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
