Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96373

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 莊永裕 | zh_TW |
| dc.contributor.advisor | Yung-Yu Chuang | en |
| dc.contributor.author | 陳啟維 | zh_TW |
| dc.contributor.author | Chi-Wei Chen | en |
| dc.date.accessioned | 2025-02-13T16:10:26Z | - |
| dc.date.available | 2025-02-14 | - |
| dc.date.copyright | 2025-02-13 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-01-18 | - |
| dc.identifier.citation | [1] F. Banterle, K. Debattista, A. Artusi, S. Pattanaik, K. Myszkowski, P. Ledda, and A. Chalmers. High dynamic range imaging and low dynamic range expansion for generating HDR content. Computer Graphics Forum, 2009.<br>[2] F. Banterle, P. Ledda, K. Debattista, and A. Chalmers. Expanding low dynamic range videos for high dynamic range applications. In Spring Conference on Computer Graphics, 2008.<br>[3] S.-K. Chen, H.-L. Yen, Y.-L. Liu, M.-H. Chen, H.-N. Hu, W.-H. Peng, and Y.-Y. Lin. Learning continuous exposure value representations for single-image HDR reconstruction. In ICCV, 2023.<br>[4] P. E. Debevec and J. Malik. Recovering high dynamic range radiance maps from photographs. ACM Transactions on Graphics, 1997.<br>[5] J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models. In NeurIPS, pages 6840–6851, 2020.<br>[6] N. K. Kalantari and R. Ramamoorthi. Deep high dynamic range imaging of dynamic scenes. ACM Transactions on Graphics, 2017.<br>[7] S. Lin, J. Gu, S. Yamazaki, and H. Shum. Radiometric calibration from a single image. In CVPR, 2004.<br>[8] S. Lin and L. Zhang. Determining the radiometric response function from a single grayscale image. In CVPR, 2005.<br>[9] Y.-L. Liu, W.-S. Lai, Y.-S. Chen, Y.-L. Kao, M.-H. Yang, Y.-Y. Chuang, and J.-B. Huang. Single-image HDR reconstruction by learning to reverse the camera pipeline. In CVPR, 2020.<br>[10] L. Mescheder, M. Oechsle, M. Niemeyer, S. Nowozin, and A. Geiger. Occupancy networks: Learning 3D reconstruction in function space. In CVPR, 2019.<br>[11] A. Nichol, P. Dhariwal, A. Ramesh, P. Shyam, P. Mishkin, B. McGrew, I. Sutskever, and M. Chen. GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741, 2021.<br>[12] S. Saito, T. Simon, J. Saragih, and H. Joo. PIFuHD: Multi-level pixel-aligned implicit function for high-resolution 3D human digitization. In CVPR, 2020.<br>[13] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In ICML, pages 2256–2265, 2015.<br>[14] J. Song, C. Meng, and S. Ermon. Denoising diffusion implicit models. In ICLR, 2021.<br>[15] S. Wu, J. Xu, Y.-W. Tai, and C.-K. Tang. Deep high dynamic range imaging with large foreground motions. In ECCV, 2018.<br>[16] Q. Xu, W. Wang, D. Ceylan, R. Mech, and U. Neumann. DISN: Deep implicit surface network for high-quality single-view 3D reconstruction. In NeurIPS, 2019.<br>[17] Q. Yan, T. Hu, Y. Sun, H. Tang, Y. Zhu, W. Dong, L. Van Gool, and Y. Zhang. Towards high-quality HDR deghosting with conditional diffusion models. In CVPR, 2024. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96373 | - |
| dc.description.abstract | HDR 重建任務傳統上是通過使用多張不同曝光的 LDR 影像來完成的。本文提出了一種新的方法,首先使用隱式函數模型從單一曝光的 LDR 生成多曝光的 LDR 堆疊。然後,將 HDR 重建視為一個影像生成任務,其中 LDR 堆疊的特徵被用作擴散模型的輸入條件來生成 HDR 影像。<br>與直接將條件輸入擴散模型不同,我們選擇使用帶有注意力機制的條件特徵生成器來引導基於參考影像的特徵融合。為了解決監督式擴散模型中的顏色失真問題,加入了直方圖損失(Histogram Loss),有效地校正了生成影像中的顏色偏移。實驗結果顯示,這種方法在評估指標和真實世界影像質量上表現良好,並具有較強的泛化能力。 | zh_TW |
| dc.description.abstract | The HDR reconstruction task is traditionally accomplished using multiple LDR images with different exposures. This paper introduces a new approach that first uses an implicit function model to generate a multi-exposure LDR stack from a single middle-exposure (EV0) LDR image. HDR reconstruction is then treated as an image generation task, where the features of the LDR stack serve as input conditions for a diffusion model that generates the HDR image.<br>Instead of feeding the condition directly into the diffusion model, we use a Conditional Feature Generator with an attention mechanism to guide the feature fusion based on the reference image. To address the color distortion observed in supervised diffusion models, a Histogram Loss is added, which effectively corrects color shifts in the generated images. Experimental results show that this method performs well on both evaluation metrics and real-world image quality, with strong generalization ability. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-02-13T16:10:26Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-02-13T16:10:26Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Verification Letter from the Oral Examination Committee i<br>Acknowledgements ii<br>摘要 iii<br>Abstract iv<br>Contents v<br>List of Figures vii<br>List of Tables viii<br>Chapter 1 Introduction 1<br>1.1 Introduction 1<br>Chapter 2 Related Work 3<br>2.1 HDR reconstruction using multiple images 3<br>2.2 HDR reconstruction by single image 3<br>2.3 LDR Stack Generation 4<br>2.4 Diffusion Models 4<br>Chapter 3 Methodology 6<br>3.1 Overview 6<br>3.2 Continuous Exposure Value Representation 8<br>3.3 Conditional Feature Generator 9<br>3.3.1 Input features 9<br>3.3.2 Attention Module 10<br>3.3.3 Domain Feature Align (DFA) 12<br>3.4 Conditional Diffusion Models 13<br>3.4.1 Diffusion Process 13<br>3.4.2 Reverse Process 13<br>3.4.3 Conditional Objective Function 14<br>3.5 Loss Function 14<br>Chapter 4 Experiments 16<br>4.1 Experiments Setup 16<br>4.2 Comparison with State-of-the-Art 17<br>4.3 Ablation Study 18<br>Chapter 5 Conclusion and Limitation 24<br>References 25 | - |
| dc.language.iso | en | - |
| dc.subject | 影像增強 | zh_TW |
| dc.subject | 曝光合成 | zh_TW |
| dc.subject | 注意力機制 | zh_TW |
| dc.subject | 擴散模型 | zh_TW |
| dc.subject | Attention Method | en |
| dc.subject | Exposure Fusion | en |
| dc.subject | Diffusion Model | en |
| dc.subject | Image Enhancement | en |
| dc.title | 基於擴散模型的高動態範圍圖片之重建 | zh_TW |
| dc.title | Reconstruction of High Dynamic Range Images Based on Diffusion Models | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-1 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 吳賦哲;葉正聖 | zh_TW |
| dc.contributor.oralexamcommittee | Fu-Che Wu;Jeng-Sheng Yeh | en |
| dc.subject.keyword | 影像增強, 擴散模型, 注意力機制, 曝光合成 | zh_TW |
| dc.subject.keyword | Image Enhancement, Diffusion Model, Attention Method, Exposure Fusion | en |
| dc.relation.page | 27 | - |
| dc.identifier.doi | 10.6342/NTU202500168 | - |
| dc.rights.note | 同意授權(全球公開) | - |
| dc.date.accepted | 2025-01-19 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 資訊工程學系 | - |
| dc.date.embargo-lift | 2025-02-14 | - |
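The abstract above names a Histogram Loss used to correct color shifts in the diffusion model's output. The record contains no implementation, but the underlying idea — penalizing the distance between the intensity histograms of the generated and reference images — can be illustrated with a minimal, hypothetical sketch. The function names (`channel_histogram`, `histogram_loss`), the bin count, and the hard binning are all assumptions for illustration; the thesis's loss would need a differentiable (soft-binned) variant to be usable in training.

```python
import random

def channel_histogram(values, bins=32):
    """Normalized histogram of one channel; values assumed to lie in [0, 1]."""
    counts = [0] * bins
    for v in values:
        idx = min(int(v * bins), bins - 1)  # clamp v == 1.0 into the last bin
        counts[idx] += 1
    total = float(len(values))
    return [c / total for c in counts]

def histogram_loss(pred, target, bins=32):
    """L1 distance between the normalized histograms of two channels.

    Hypothetical stand-in for the Histogram Loss named in the abstract:
    identical distributions give 0; a global color shift gives a positive loss.
    """
    hp = channel_histogram(pred, bins)
    ht = channel_histogram(target, bins)
    return sum(abs(a - b) for a, b in zip(hp, ht))

random.seed(0)
channel = [random.random() for _ in range(1024)]
# A global brightness/color shift changes the histogram, so the loss is positive.
shifted = [min(v + 0.3, 1.0) for v in channel]
print(histogram_loss(channel, channel))  # 0.0
print(histogram_loss(channel, shifted))
```

In practice such a loss would be computed per color channel and added, with a weighting factor, to the diffusion model's denoising objective.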
| Appears in Collections: | 資訊工程學系 | |

Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-113-1.pdf | 272.29 MB | Adobe PDF | View/Open |

All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
