NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101201
Full metadata record
dc.contributor.advisor: 丁建均 [zh_TW]
dc.contributor.advisor: Jian-Jiun Ding [en]
dc.contributor.author: 葉芷彤 [zh_TW]
dc.contributor.author: Chih-Tung Yeh [en]
dc.date.accessioned: 2025-12-31T16:18:11Z
dc.date.available: 2026-01-01
dc.date.copyright: 2025-12-31
dc.date.issued: 2025
dc.date.submitted: 2025-11-27
dc.identifier.citation:
[1] T. Mertens, J. Kautz, and F. Van Reeth, “Exposure fusion: A simple and practical alternative to high dynamic range photography,” Computer Graphics Forum, vol. 28, no. 1, pp. 161–171, Mar. 2009.
[2] J. Shen, Y. Zhao, S. Yan, and X. Li, “Exposure fusion using boosting Laplacian pyramid,” IEEE Transactions on Cybernetics, vol. 44, no. 9, pp. 1579–1590, Sep. 2014.
[3] F. Kou, Z. Li, C. Wen, and W. Chen, “Multi-scale exposure fusion via gradient domain guided image filtering,” Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), pp. 1105–1110, Jul. 2017.
[4] F. Kou, Z. Li, C. Wen, and W. Chen, “Edge-preserving smoothing pyramid based multi-scale exposure fusion,” Journal of Visual Communication and Image Representation, vol. 53, pp. 235–244, May 2018.
[5] Z. G. Li, J. H. Zheng, and S. Rahardja, “Detail-enhanced exposure fusion,” IEEE Transactions on Image Processing, vol. 21, no. 11, pp. 4672–4676, Nov. 2012.
[6] Z. Li, Z. Wei, C. Wen, and J. Zheng, “Detail-enhanced multi-scale exposure fusion,” IEEE Transactions on Image Processing, vol. 26, no. 3, pp. 1243–1252, Mar. 2017.
[7] S. Li, X. Kang, and J. Hu, “Image fusion with guided filtering,” IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2864–2875, Jul. 2013.
[8] M. Nejati, M. Karimi, S. R. Soroushmehr, N. Karimi, S. Samavi, and K. Najarian, “Fast exposure fusion using exposedness function,” IEEE International Conference on Image Processing (ICIP), pp. 2234–2238, Sep. 2017.
[9] C. O. Ancuti, C. Ancuti, C. De Vleeschouwer, and A. C. Bovik, “Single-scale fusion: An effective approach to merging images,” IEEE Transactions on Image Processing, vol. 26, no. 1, pp. 65–78, Jan. 2017.
[10] Q. Wang, W. Chen, X. Wu, and Z. Li, “Detail-enhanced multi-scale exposure fusion in YUV color space,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 8, pp. 2418–2429, Aug. 2020.
[11] H. Li, K. Ma, H. Yong, and L. Zhang, “Fast multi-scale structural patch decomposition for multi-exposure image fusion,” IEEE Transactions on Image Processing, vol. 29, pp. 5805–5816, Aug. 2020.
[12] K. Ma, H. Li, H. Yong, Z. Wang, D. Meng, and L. Zhang, “Robust multi-exposure image fusion: A structural patch decomposition approach,” IEEE Transactions on Image Processing, vol. 26, no. 5, pp. 2519–2532, May 2017.
[13] H. Li, T. N. Chan, X. Qi, and W. Xie, “Detail-preserving multi-exposure fusion with edge-preserving structural patch decomposition,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 11, pp. 4293–4304, Nov. 2021.
[14] B. Gu, W. Li, J. Wong, M. Zhu, and M. Wang, “Gradient field multi-exposure images fusion for high dynamic range image visualization,” Journal of Visual Communication and Image Representation, vol. 23, no. 4, pp. 604–610, May 2012.
[15] M. Song, D. Tao, C. Chen, J. Bu, J. Luo, and C. Zhang, “Probabilistic exposure fusion,” IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 341–357, Jan. 2012.
[16] K. Ma, Z. Duanmu, H. Yeganeh, and Z. Wang, “Multi-exposure image fusion by optimizing a structural similarity index,” IEEE Transactions on Computational Imaging, vol. 4, no. 1, pp. 60–72, Mar. 2018.
[17] K. R. Prabhakar, V. S. Srikar, and R. V. Babu, “DeepFuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct. 2017, pp. 4724–4732.
[18] K. Ma, K. Zeng, and Z. Wang, “Perceptual quality assessment for multi-exposure image fusion,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3345–3356, Nov. 2015.
[19] H. Xu, J. Ma, and X.-P. Zhang, “MEF-GAN: Multi-exposure image fusion via generative adversarial networks,” IEEE Transactions on Image Processing, vol. 29, pp. 7203–7216, 2020.
[20] K. Ma, Z. Duanmu, H. Zhu, Y. Fang, and Z. Wang, “Deep guided learning for fast multi-exposure image fusion,” IEEE Transactions on Image Processing, vol. 29, pp. 2808–2819, 2020.
[21] Y. Zhang, Y. Liu, P. Sun, H. Yan, X. Zhao, and L. Zhang, “IFCNN: A general image fusion framework based on convolutional neural network,” Information Fusion, vol. 54, pp. 99–118, Feb. 2020.
[22] T. Jiang, C. Wang, X. Li, R. Li, H. Fan, and S. Liu, “MEFLUT: Unsupervised 1D lookup tables for multi-exposure image fusion,” Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 10508–10517, Oct. 2023.
[23] P. E. Debevec and J. Malik, “Recovering high dynamic range radiance maps from photographs,” ACM Transactions on Graphics (SIGGRAPH), vol. 16, no. 3, pp. 369–378, Aug. 1997.
[24] O. Gallo, N. Gelfand, W.-C. Chen, M. Tico, and K. Pulli, “Artifact-free high dynamic range imaging,” Proceedings of the IEEE International Conference on Computational Photography (ICCP), pp. 1–7, Apr. 2009.
[25] P. Sen, N. K. Kalantari, M. Yaesoubi, S. Darabi, D. B. Goldman, and E. Shechtman, “Robust patch-based HDR reconstruction of dynamic scenes,” ACM Transactions on Graphics, vol. 31, no. 6, p. 203, 2012.
[26] C. Lee, Y. Li, and V. Monga, “Ghost-free high dynamic range imaging via rank minimization,” IEEE Signal Processing Letters, vol. 21, no. 9, pp. 1045–1049, Sep. 2014.
[27] T.-H. Oh, J.-Y. Lee, Y.-W. Tai, and I. S. Kweon, “Robust high dynamic range imaging by rank minimization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 6, pp. 1219–1232, Jun. 2015.
[28] N. K. Kalantari and R. Ramamoorthi, “Deep high dynamic range imaging of dynamic scenes,” ACM Transactions on Graphics, vol. 36, no. 4, pp. 144:1–144:12, 2017.
[29] R. Fattal, D. Lischinski, and M. Werman, “Gradient domain high dynamic range compression,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 249–256, Jul. 2002.
[30] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, “Edge-preserving decompositions for multi-scale tone and detail manipulation,” ACM Transactions on Graphics, vol. 27, no. 3, pp. 1–10, Aug. 2008.
[31] W. Zhang and W.-K. Cham, “Gradient-directed multiexposure composition,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2318–2323, Apr. 2012.
[32] Y. Liu and Z. Wang, “Dense SIFT for ghost-free multi-exposure fusion,” Journal of Visual Communication and Image Representation, vol. 31, pp. 208–224, Aug. 2015.
[33] H. Li and L. Zhang, “Multi-exposure fusion with CNN features,” in Proceedings of the IEEE International Conference on Image Processing (ICIP), Athens, Greece, pp. 1723–1727, Oct. 2018.
[34] O. Ulucan, D. Ulucan, and M. Turkan, “Ghosting-free multi-exposure image fusion for static and dynamic scenes,” Signal Processing, vol. 202, p. 108774, 2023.
[35] Z. Li, J. Zheng, Z. Zhu, and S. Wu, “Selectively detail-enhanced fusion of differently exposed images with moving objects,” IEEE Transactions on Image Processing, vol. 23, no. 10, pp. 4372–4382, Oct. 2014.
[36] S. Paul, I. S. Sevcenco, and P. Agathoklis, “Multi-exposure and multi-focus image fusion in gradient domain,” Journal of Circuits, Systems, and Computers, vol. 25, no. 10, p. 1650123, 2016.
[37] F. Kou, Z. Li, C. Wen, and W. Chen, “Multi-scale exposure fusion via gradient domain guided image filtering,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), Hong Kong, China, pp. 1105–1110, Jul. 2017.
[38] Z. Yang, Y. Chen, Z. Le, and Y. Ma, “GANFuse: A novel multi-exposure image fusion method based on generative adversarial networks,” Neural Computing and Applications, vol. 33, pp. 6133–6145, 2021.
[39] J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, “FusionGAN: A generative adversarial network for infrared and visible image fusion,” Information Fusion, vol. 48, pp. 11–26, 2019.
[40] S.-h. Lee, J. S. Park, and N. I. Cho, “A multi-exposure image fusion based on the adaptive weights reflecting the relative pixel intensity and global gradient,” IEEE International Conference on Image Processing (ICIP), pp. 1737–1741, Oct. 2018.
[41] N. Hayat and M. Imran, “Ghost-free multi-exposure image fusion technique using dense SIFT descriptor and guided filter,” Journal of Visual Communication and Image Representation, vol. 62, pp. 295–308, 2019.
[42] S. Zhu and Z. Yu, “Self-guided filter for image denoising,” IET Image Processing, vol. 14, no. 11, pp. 2561–2566, 2020.
[43] K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, Jun. 2013.
[44] D. Kraft, “A software package for sequential quadratic programming,” DFVLR Oberpfaffenhofen: Institute for Flight Mechanics, Tech. Rep. DFVLR-FB 88-28, 1988.
[45] M. Afifi, K. G. Derpanis, B. Ommer, and M. S. Brown, “Learning multi-scale photo exposure correction,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, pp. 9153–9163, Jun. 2021.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101201
dc.description.abstract [zh_TW]:
多重曝光影像融合(Multi-Exposure Fusion, MEF)旨在將不同曝光的低動態範圍(LDR)影像整合為單一視覺平衡的結果,同時保留亮部與暗部的細節。傳統的 MEF 方法多依賴預先定義的規則與多尺度融合策略,然而常因權重估計不穩定與適應性不足,導致光暈(halo)現象與細節保留不完整的問題。
本論文提出一個以優化為核心的 MEF 方法,在可解釋性與適應性之間取得平衡,連結傳統非學習式方法與資料驅動模型。所提出的演算法結合強度–飽和度導向的自適應權重設計與優化導向的金字塔融合架構,並引入曝光區域的自適應分流機制以區分極端與正常曝光區域,提升整體的視覺一致性。在融合階段中,本方法將傳統的金字塔重建轉化為受限優化問題,透過多重損失函數(Composite Loss Function)同時控制光暈抑制、局部對比度與平坦區域的平滑性。
在共計 2,617 組影像序列的實驗中,所提出的方法於 MEF-SSIM 指標上平均表現優於多項代表性傳統與深度學習方法,並在視覺結果上展現更佳的光暈抑制與細節保留平衡。綜合而言,本研究提出一個兼具理論基礎與可調適性的優化式融合框架,為後續混合式或學習型 MEF 模型的發展奠定基礎。
dc.description.abstract [en]:
Multi-exposure fusion (MEF) combines differently exposed low dynamic range (LDR) images to produce a single, visually balanced image that preserves both highlight and shadow details. Traditional MEF methods typically rely on predefined heuristics and multi-scale fusion strategies but often suffer from halo artifacts and incomplete detail preservation due to heuristic weighting and limited adaptability.
This thesis presents an optimization-based MEF method that bridges traditional non-learning methods with data-driven paradigms through an interpretable and adaptive design. The proposed algorithm integrates intensity–saturation-guided weighting with an optimization-driven pyramid fusion scheme, where an adaptive gating mechanism differentiates extreme and normal exposure regions to enhance perceptual balance. The pyramid fusion stage is reformulated as a constrained optimization problem governed by a composite loss function, enabling explicit control over halo suppression, local contrast, and smoothness.
Extensive experiments on a 2,617-sequence benchmark demonstrate that the proposed approach achieves the highest average MEF-SSIM among both traditional and learning-based algorithms. Visual comparisons further confirm its superior trade-off between halo reduction and detail preservation, underscoring the method’s robustness, interpretability, and potential as a foundation for future hybrid or learning-based MEF models.
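As a point of reference for the pipeline sketched in the abstract, the following snippet implements the classic weight-map-and-pyramid fusion of Mertens et al. [1], which the proposed method builds on. It is a generic baseline only, assuming float RGB inputs in [0, 1]; the intensity-saturation gating, brightness-guided prior, and loss-driven refinement described above are not reproduced, and sigma and the pyramid depth are illustrative choices rather than values from the thesis.

```python
# Minimal sketch of the Mertens et al. [1] exposure-fusion baseline:
# per-image weight maps from contrast, saturation, and well-exposedness,
# blended through Gaussian/Laplacian pyramids. Parameter values are
# illustrative assumptions, not values taken from the thesis.
import cv2
import numpy as np

def weight_maps(images, sigma=0.2, eps=1e-12):
    """images: list of float32 RGB arrays in [0, 1]. Returns per-image normalized weights."""
    weights = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
        contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))    # local contrast measure
        saturation = img.std(axis=2)                           # spread across color channels
        well_exposed = np.prod(np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)), axis=2)
        weights.append(contrast * saturation * well_exposed + eps)
    total = np.sum(weights, axis=0)
    return [w / total for w in weights]

def fuse(images, levels=5):
    """Blend Laplacian pyramids of the inputs using Gaussian pyramids of the weights."""
    fused_pyr = None
    for img, w in zip(images, weight_maps(images)):
        gp_w, gp_i = [w], [img]
        for _ in range(levels):                                # Gaussian pyramids
            gp_w.append(cv2.pyrDown(gp_w[-1]))
            gp_i.append(cv2.pyrDown(gp_i[-1]))
        lp = [gp_i[k] - cv2.pyrUp(gp_i[k + 1],                 # Laplacian pyramid of the image
                                  dstsize=(gp_i[k].shape[1], gp_i[k].shape[0]))
              for k in range(levels)]
        lp.append(gp_i[-1])                                    # coarsest level kept as-is
        contrib = [lp[k] * gp_w[k][..., None] for k in range(levels + 1)]
        fused_pyr = contrib if fused_pyr is None else [a + b for a, b in zip(fused_pyr, contrib)]
    out = fused_pyr[-1]                                        # collapse the fused pyramid
    for k in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=(fused_pyr[k].shape[1], fused_pyr[k].shape[0])) + fused_pyr[k]
    return np.clip(out, 0.0, 1.0)
```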
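The abstract also recasts pyramid reconstruction as a constrained optimization over scale weights. The sketch below shows only the general shape of such a formulation, using SciPy's SLSQP solver (a wrapper around Kraft's SQP code, reference [44]); its loss terms, their weights, and the bounds are placeholder assumptions, not the composite loss designed in the thesis.

```python
# Schematic of a constrained scale-weight optimization over a pre-fused
# Laplacian pyramid, solved with SciPy's SLSQP (which wraps Kraft's SQP code [44]).
# The two loss terms below are placeholder assumptions; they stand in for,
# but do not reproduce, the thesis's composite loss.
import cv2
import numpy as np
from scipy.optimize import minimize

def collapse(pyr, scales):
    """Collapse a Laplacian pyramid after scaling each detail level by scales[k]."""
    out = pyr[-1]
    for k in range(len(pyr) - 2, -1, -1):
        up = cv2.pyrUp(out, dstsize=(pyr[k].shape[1], pyr[k].shape[0]))
        out = up + scales[k] * pyr[k]
    return out

def optimize_scales(pyr, reference, detail_weight=1.0, fidelity_weight=0.1):
    """Choose per-level scales that boost gradient energy while staying near a reference image."""
    n = len(pyr) - 1  # number of detail levels

    def loss(s):
        img = collapse(pyr, s)
        gx, gy = np.diff(img, axis=1), np.diff(img, axis=0)
        contrast_term = -(np.mean(gx ** 2) + np.mean(gy ** 2))   # reward local contrast
        fidelity_term = np.mean((img - reference) ** 2)           # crude halo/flatness proxy
        return detail_weight * contrast_term + fidelity_weight * fidelity_term

    result = minimize(loss, x0=np.ones(n), method="SLSQP", bounds=[(0.5, 2.0)] * n)
    return result.x
```

In the thesis, the pre-fused pyramid of Section 3.3.3 would play the role of `pyr`, and the actual composite loss couples halo suppression, local contrast, and flat-region smoothness rather than the two toy terms used here.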
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-12-31T16:18:11Z. No. of bitstreams: 0 [en]
dc.description.provenance: Made available in DSpace on 2025-12-31T16:18:11Z (GMT). No. of bitstreams: 0 [en]
dc.description.tableofcontents:
口試委員會審定書 #
誌謝 i
中文摘要 ii
ABSTRACT iii
CONTENTS iv
LIST OF FIGURES vii
LIST OF TABLES ix
Chapter 1 Introduction 1
1.1 Background 1
1.2 Main Contribution 2
1.3 Thesis Organization 3
Chapter 2 Related Work 4
2.1 Static MEF Methods 4
2.1.1 Multi-Scale Fusion Frameworks 4
2.1.2 Optimization-Based Methods 6
2.1.3 Deep-Learning-Based Methods 7
2.2 Dynamic MEF and Ghosting Removal 7
2.2.1 Radiance-Domain Approaches 8
2.2.2 Intensity-Domain and Patch-Based Approaches 8
2.3 Hybrid and Advanced Techniques 9
2.4 Summary 9
Chapter 3 Proposed Method 11
3.1 Overview of the Framework 11
3.2 Adaptive Weight Map Construction 13
3.2.1 Motivation 13
3.2.2 Contrast Measure 15
3.2.3 Saturation Measure 17
3.2.4 Well-exposedness Measure 20
3.2.5 Adaptive Region Classification via Intensity and Saturation 22
3.2.6 Region-adaptive Weighting 25
3.2.7 Edge-aware Weight Smoothing 27
3.3 Pyramid Decomposition and Optimization-based Fusion 28
3.3.1 Gaussian–Laplacian Pyramid Construction 29
3.3.2 Brightness-guided Prior 31
3.3.3 Pre-fused Pyramid and Reconstruction 32
3.3.4 Optimization with Loss Function Design 32
3.3.5 Optimization of Scale Weights 35
3.3.6 Final Reconstruction and Post-processing 36
3.4 Summary 38
Chapter 4 Experiment Results 40
4.1 Dataset and Preprocessing 40
4.2 MEF-SSIM Metric 41
4.3 Comparison to Other Methods 42
4.3.1 Comparison with Traditional (Non-Learning) Methods 43
4.3.2 Comparison with Learning-Based Methods 44
4.3.3 Visual Comparison 45
Chapter 5 Conclusion and Future Work 63
5.1 Conclusion 63
5.2 Future Work 63
REFERENCES 65
dc.language.iso: en
dc.subject: 多重曝光融合(MEF)
dc.subject: 高動態範圍(HDR)
dc.subject: 自適應權重
dc.subject: 最佳化方法
dc.subject: 損失函數設計
dc.subject: 暈影抑制
dc.subject: 細節保留
dc.subject: Multi-exposure fusion (MEF)
dc.subject: high dynamic range (HDR)
dc.subject: adaptive weighting
dc.subject: optimization
dc.subject: loss function design
dc.subject: halo reduction
dc.subject: detail preservation
dc.title: 基於亮度與飽和度自適應的多重曝光融合演算法與彈性損失函數設計 [zh_TW]
dc.title: Adaptive Intensity and Saturation Based Multi-Exposure Fusion Algorithm with Flexible Loss Function Design [en]
dc.type: Thesis
dc.date.schoolyear: 114-1
dc.description.degree: 碩士
dc.contributor.oralexamcommittee: 劉俊麟; 簡鳳村; 葉家宏 [zh_TW]
dc.contributor.oralexamcommittee: Chun-Lin Liu; Feng-Tsun Chien; Chia-Hung Yeh [en]
dc.subject.keyword: 多重曝光融合(MEF), 高動態範圍(HDR), 自適應權重, 最佳化方法, 損失函數設計, 暈影抑制, 細節保留 [zh_TW]
dc.subject.keyword: Multi-exposure fusion (MEF), high dynamic range (HDR), adaptive weighting, optimization, loss function design, halo reduction, detail preservation [en]
dc.relation.page: 70
dc.identifier.doi: 10.6342/NTU202504721
dc.rights.note: 未授權
dc.date.accepted: 2025-11-27
dc.contributor.author-college: 電機資訊學院
dc.contributor.author-dept: 電信工程學研究所
dc.date.embargo-lift: N/A
Appears in collections: 電信工程學研究所

Files in this item:
ntu-114-1.pdf (4.58 MB, Adobe PDF), restricted access (未授權公開取用)