NTU Theses and Dissertations Repository › 電機資訊學院 › 資訊網路與多媒體研究所
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100921
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 莊永裕 | zh_TW
dc.contributor.advisor | Yung-Yu Chuang | en
dc.contributor.author | 蘇浚笙 | zh_TW
dc.contributor.author | CHUN-SHENG SU | en
dc.date.accessioned | 2025-11-26T16:06:06Z | -
dc.date.available | 2025-11-27 | -
dc.date.copyright | 2025-11-26 | -
dc.date.issued | 2025 | -
dc.date.submitted | 2025-11-12 | -
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100921 | -
dc.description.abstract | 本研究針對動態多重曝光融合(Multi-Exposure Fusion, MEF)資料集的稀缺問題,提出一種方法將現有的靜態資料集轉換為可供模型訓練的動態版本,以提升模型的去鬼影能力。我們設計一套演算法,能夠分析靜態場景的多曝光影像組,並自動判定可用於合成動態組件的區域。我們進行了多層面的實驗比較,涵蓋不同演算法設定以及訓練策略,藉此找出最能優化模型效能的配置。此外,我們亦進行跨資料集的實驗,以驗證方法在不同數據來源下的泛用性。實驗結果顯示,使用經本方法轉換之靜態資料集進行訓練的模型,在多項評估指標上皆能達到與真實動態資料集訓練結果相當甚至更優的表現,證明了本研究方法的實用性與穩健性。 | zh_TW
dc.description.abstract | This work addresses the scarcity of dynamic multi-exposure fusion (MEF) datasets by proposing a method that transforms existing static datasets into dynamic ones suitable for training de-ghosting models. We develop an algorithm that analyzes static multi-exposure image sequences to identify candidate regions for synthesizing dynamic components. We conduct extensive experiments comparing different algorithmic settings and training strategies, which allows us to determine the configuration that most consistently improves model performance. Furthermore, we perform cross-dataset experiments to demonstrate the generalizability of the proposed method. Experimental results show that models trained on the transformed static datasets achieve performance comparable to, or even better than, models trained on real dynamic datasets, confirming the practicality and robustness of our approach. | en
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-11-26T16:06:06Z. No. of bitstreams: 0 | en
dc.description.provenance | Made available in DSpace on 2025-11-26T16:06:06Z (GMT). No. of bitstreams: 0 | en
dc.description.tableofcontents:
Verification Letter from the Oral Examination Committee i
摘要 iii
Abstract v
Contents vii
List of Figures ix
List of Tables xi
Chapter 1 Introduction 1
Chapter 2 Related Works 5
2.1 Construct Dynamic MEF Dataset 5
2.2 Few-Shot and Self-Supervised Learning 6
2.3 Adapt or Manipulate Static Datasets 7
Chapter 3 Approach 9
3.1 Overview 9
3.2 Synthetic Dataset Construction 10
3.2.1 Generate Masks from Over-Exposed and Labeled Images 10
3.2.2 Locate Bounding Boxes Based on Set Conditions 11
3.2.3 Synthesizing Under-Exposed Images Using Bounding Boxes 12
3.3 Training Strategy 13
Chapter 4 Experiments 15
4.1 Experiment Settings 15
4.1.1 Implementation Details 15
4.1.2 Evaluation Metrics 15
4.2 Within-dataset Comparison 17
4.2.1 Static vs. Dynamic Datasets 18
4.2.2 Bounding Box Threshold 19
4.2.3 Training Strategy 21
4.2.4 Overall Comparison 25
4.3 Cross-dataset Comparison 27
Chapter 5 Conclusion 31
References 33
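Steps 3.2.1–3.2.2 of the table of contents (generating masks from the over-exposed and labeled images, then locating bounding boxes based on set conditions) are only named in this record, not described. A minimal numpy sketch of one plausible reading follows; the function name, thresholds, and the difference-mask heuristic are hypothetical illustrations, not the thesis's actual algorithm.

```python
import numpy as np

def candidate_box(over_exposed, labeled, diff_thresh=0.2, min_area=50):
    """Sketch of a mask-then-box step.

    Marks pixels where the over-exposed image departs strongly from the
    labeled (reference) image, then returns the bounding box of the
    marked region if it covers enough pixels, else None.
    Inputs are float arrays of shape (H, W, 3); thresholds are
    illustrative, not the thesis's settings.
    """
    diff = np.abs(over_exposed.astype(np.float32) - labeled.astype(np.float32))
    mask = diff.mean(axis=-1) > diff_thresh  # per-pixel difference mask
    if mask.sum() < min_area:                # "set condition": minimum area
        return None
    ys, xs = np.nonzero(mask)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1  # (y0, y1, x0, x1)
```

A region that passes such a check would then be a candidate for synthesizing a dynamic component in step 3.2.3.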
dc.language.iso | en | -
dc.subject | 多曝光影像融合 | -
dc.subject | 資料集 | -
dc.subject | 鬼影 | -
dc.subject | Multi-Exposure image fusion | -
dc.subject | dataset | -
dc.subject | ghosting | -
dc.title | 合成多重曝光融合資料集:從靜態資料生成動態訓練數據的方法 | zh_TW
dc.title | Synthesis Multi-Exposure Fusion Dataset: Generating Dynamic Training Sets from Static Datasets | en
dc.type | Thesis | -
dc.date.schoolyear | 114-1 | -
dc.description.degree | 碩士 | -
dc.contributor.oralexamcommittee | 吳賦哲;葉正聖 | zh_TW
dc.contributor.oralexamcommittee | Fu-Che Wu;Jeng-Sheng Yeh | en
dc.subject.keyword | 多曝光影像融合,資料集,鬼影 | zh_TW
dc.subject.keyword | Multi-Exposure image fusion, dataset, ghosting | en
dc.relation.page | 35 | -
dc.identifier.doi | 10.6342/NTU202504663 | -
dc.rights.note | 同意授權(全球公開) | -
dc.date.accepted | 2025-11-12 | -
dc.contributor.author-college | 電機資訊學院 | -
dc.contributor.author-dept | 資訊網路與多媒體研究所 | -
dc.date.embargo-lift | 2025-11-27 | -
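Step 3.2.3 of the table of contents (synthesizing under-exposed images using bounding boxes) realizes the abstract's idea of turning a static multi-exposure set into a pseudo-dynamic one. A shift-and-paste heuristic is one simple way to do this; the function name and the heuristic below are hypothetical stand-ins for whatever compositing the thesis actually performs.

```python
import numpy as np

def synthesize_dynamic_under(under, box, shift=(20, 0)):
    """Create a pseudo-dynamic under-exposed frame from a static one.

    Pastes the region `box` (y0, y1, x0, x1) of the under-exposed image
    at a location offset by `shift` (dy, dx). The pair (result, original
    over-exposed image) then disagrees locally, mimicking object motion
    between exposures, so a naive fusion of the pair would show ghosting.
    """
    y0, y1, x0, x1 = box
    dy, dx = shift
    h, w = under.shape[:2]
    out = under.copy()
    ty0, ty1, tx0, tx1 = y0 + dy, y1 + dy, x0 + dx, x1 + dx
    # Paste only if the shifted region stays inside the image bounds.
    if 0 <= ty0 and ty1 <= h and 0 <= tx0 and tx1 <= w:
        out[ty0:ty1, tx0:tx1] = under[y0:y1, x0:x1]
    return out
```

Training a fusion model on such (synthetic under, original over) pairs, with the original static fusion result as the label, is the kind of static-to-dynamic conversion the abstract describes.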
Appears in collections: 資訊網路與多媒體研究所

Files in this item:
File | Size | Format
ntu-114-1.pdf | 13.24 MB | Adobe PDF


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
