NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96373
Full metadata record:
dc.contributor.advisor: 莊永裕 (zh_TW)
dc.contributor.advisor: Yung-Yu Chuang (en)
dc.contributor.author: 陳啟維 (zh_TW)
dc.contributor.author: Chi-Wei Chen (en)
dc.date.accessioned: 2025-02-13T16:10:26Z
dc.date.available: 2025-02-14
dc.date.copyright: 2025-02-13
dc.date.issued: 2025
dc.date.submitted: 2025-01-18
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96373
dc.description.abstract (zh_TW):
HDR 重建任務傳統上是通過使用多張不同曝光的 LDR 影像來完成的。本文提出了一種新的方法,首先使用隱式函數模型從單一曝光的 LDR 生成多曝光的 LDR 堆疊。然後,將 HDR 重建視為一個影像生成任務,其中 LDR 堆疊的特徵被用作擴散模型的輸入條件來生成 HDR 影像。
與直接將條件輸入擴散模型不同,我們選擇使用帶有注意力機制的條件特徵生成器來引導基於參考影像的特徵融合。為了解決監督式擴散模型中的顏色失真問題,加入了直方圖損失(Histogram Loss),有效地校正了生成影像中的顏色偏移。實驗結果顯示,這種方法在評估指標和真實世界影像質量上表現良好,並具有較強的泛化能力。
dc.description.abstract (en):
The HDR reconstruction task is traditionally accomplished using multiple LDR images with different exposures. This thesis introduces a new approach that first uses an implicit function model to generate a multi-exposure LDR stack from a single middle-exposure (EV0) LDR image. HDR reconstruction is then treated as an image generation task, in which features of the LDR stack serve as the input condition for a diffusion model that produces the HDR image.
Instead of feeding the condition directly into the diffusion model, we use a Conditional Feature Generator with an attention mechanism to guide the merging process based on the reference image. To address color distortion in supervised diffusion models, a Histogram Loss is added, which effectively corrects color shifts in the generated images. Experimental results show that the method performs well on both evaluation metrics and real-world image quality, with strong generalization ability.
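As a rough illustration of the pipeline the abstract describes (an attention-based Conditional Feature Generator producing the condition for a diffusion model), the following PyTorch sketch shows one DDPM-style training step. The module names `feature_generator` and `denoiser`, the linear noise schedule, and the way the condition is passed into the denoiser are assumptions for illustration only, not the thesis's actual implementation.

```python
# Hypothetical sketch of one conditional-diffusion training step: the denoiser
# learns to predict the noise added to the HDR target, conditioned on features
# fused from the generated LDR stack. Schedule and module interfaces are assumed.
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)                 # assumed linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)    # cumulative product of (1 - beta_t)

def training_step(denoiser, feature_generator, hdr, ldr_stack):
    """hdr: (B, 3, H, W) target; ldr_stack: (B, N, 3, H, W) multi-exposure LDRs."""
    b = hdr.shape[0]
    t = torch.randint(0, T, (b,), device=hdr.device)
    noise = torch.randn_like(hdr)
    a_bar = alphas_cumprod.to(hdr.device)[t].view(b, 1, 1, 1)

    # Forward (diffusion) process: x_t = sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * eps
    noisy_hdr = a_bar.sqrt() * hdr + (1.0 - a_bar).sqrt() * noise

    # Attention-fused condition from the LDR stack (the Conditional Feature Generator's role)
    cond = feature_generator(ldr_stack)

    # Denoiser predicts the injected noise given the noisy HDR, timestep, and condition
    pred_noise = denoiser(noisy_hdr, t, cond)
    return F.mse_loss(pred_noise, noise)
```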
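The Histogram Loss mentioned in the abstract is not specified in this record; a minimal sketch of one common way to realize it, assuming a differentiable soft-binned histogram over images normalized to [0, 1], might look like this (the function names, Gaussian binning, and hyperparameters are illustrative assumptions):

```python
# Hypothetical soft-histogram colour loss: penalises differences between the
# per-channel intensity histograms of the generated image and the reference.
import torch

def soft_histogram(x, num_bins=64, bandwidth=0.01):
    """Differentiable histogram of x in [0, 1], shape (B, C, H, W):
    each pixel contributes a Gaussian weight to every bin centre."""
    b, c, h, w = x.shape
    centers = torch.linspace(0.0, 1.0, num_bins, device=x.device)
    flat = x.reshape(b, c, h * w, 1)
    weights = torch.exp(-((flat - centers) ** 2) / (2 * bandwidth ** 2))
    hist = weights.sum(dim=2)                                  # (B, C, num_bins)
    return hist / (hist.sum(dim=-1, keepdim=True) + 1e-8)     # normalise to a distribution

def histogram_loss(pred, target, num_bins=64):
    """L1 distance between colour histograms, intended to discourage global colour shifts."""
    return (soft_histogram(pred, num_bins) - soft_histogram(target, num_bins)).abs().mean()
```

In training, such a term would presumably be weighted and added to the noise-prediction objective sketched above.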
dc.description.provenance (en): Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-02-13T16:10:26Z. No. of bitstreams: 0
dc.description.provenance (en): Made available in DSpace on 2025-02-13T16:10:26Z (GMT). No. of bitstreams: 0
dc.description.tableofcontents:
Verification Letter from the Oral Examination Committee i
Acknowledgements ii
摘要 iii
Abstract iv
Contents v
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
1.1 Introduction 1
Chapter 2 Related Work 3
2.1 HDR reconstruction using multiple images 3
2.2 HDR reconstruction by single image 3
2.3 LDR Stack Generation 4
2.4 Diffusion Models 4
Chapter 3 Methodology 6
3.1 Overview 6
3.2 Continuous Exposure Value Representation 8
3.3 Conditional Feature Generator 9
3.3.1 Input features 9
3.3.2 Attention Module 10
3.3.3 Domain Feature Align (DFA) 12
3.4 Conditional Diffusion Models 13
3.4.1 Diffusion Process 13
3.4.2 Reverse Process 13
3.4.3 Conditional Objective Function 14
3.5 Loss Function 14
Chapter 4 Experiments 16
4.1 Experiments Setup 16
4.2 Comparison with State-of-the-Art 17
4.3 Ablation Study 18
Chapter 5 Conclusion and Limitation 24
References 25
dc.language.iso: en
dc.subject: 影像增強 (zh_TW)
dc.subject: 曝光合成 (zh_TW)
dc.subject: 注意力機制 (zh_TW)
dc.subject: 擴散模型 (zh_TW)
dc.subject: Attention Method (en)
dc.subject: Exposure Fusion (en)
dc.subject: Diffusion Model (en)
dc.subject: Image Enhancement (en)
dc.title: 基於擴散模型的高動態範圍圖片之重建 (zh_TW)
dc.title: Reconstruction of High Dynamic Range Images Based on Diffusion Models (en)
dc.type: Thesis
dc.date.schoolyear: 113-1
dc.description.degree: Master's
dc.contributor.oralexamcommittee: 吳賦哲;葉正聖 (zh_TW)
dc.contributor.oralexamcommittee: Fu-Che Wu;Jeng-Sheng Yeh (en)
dc.subject.keyword: 影像增強, 擴散模型, 注意力機制, 曝光合成 (zh_TW)
dc.subject.keyword: Image Enhancement, Diffusion Model, Attention Method, Exposure Fusion (en)
dc.relation.page: 27
dc.identifier.doi: 10.6342/NTU202500168
dc.rights.note: Authorized for open access (worldwide)
dc.date.accepted: 2025-01-19
dc.contributor.author-college: 電機資訊學院
dc.contributor.author-dept: 資訊工程學系
dc.date.embargo-lift: 2025-02-14
Appears in Collections: 資訊工程學系

Files in this item:
File | Size | Format
ntu-113-1.pdf | 272.29 MB | Adobe PDF
Except where otherwise indicated, all items in the system are protected by copyright, with all rights reserved.
