Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80319

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 莊永裕(Yung-Yu Chuang) | |
| dc.contributor.author | Chuang-Wei Chueh | en |
| dc.contributor.author | 闕壯維 | zh_TW |
| dc.date.accessioned | 2022-11-24T03:04:25Z | - |
| dc.date.available | 2022-02-21 | |
| dc.date.available | 2022-11-24T03:04:25Z | - |
| dc.date.copyright | 2022-02-21 | |
| dc.date.issued | 2022 | |
| dc.date.submitted | 2022-01-19 | |
| dc.identifier.citation | [1] G. Bhat, M. Danelljan, L. Van Gool, and R. Timofte. Deep burst super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9209–9218, 2021. [2] T. Brooks, B. Mildenhall, T. Xue, J. Chen, D. Sharlet, and J. T. Barron. Unprocessing images for learned raw denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11036–11045, 2019. [3] J. Cai, S. Gu, and L. Zhang. Learning a deep single image contrast enhancer from multi-exposure images. IEEE Transactions on Image Processing, 27(4):2049–2062, 2018. [4] S.-Y. Chen and Y.-Y. Chuang. Deep exposure fusion with deghosting via homography estimation and attention learning. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1464–1468. IEEE, 2020. [5] F. Ciurea and B. Funt. A large image database for color constancy research. In Color and Imaging Conference, volume 2003, pages 160–164. Society for Imaging Science and Technology, 2003. [6] M. Gharbi, J. Chen, J. T. Barron, S. W. Hasinoff, and F. Durand. Deep bilateral learning for real-time image enhancement. ACM Transactions on Graphics (TOG), 36(4):1–12, 2017. [7] A. Ignatov, L. Van Gool, and R. Timofte. Replacing mobile camera ISP with a single deep learning model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 536–537, 2020. [8] N. K. Kalantari, R. Ramamoorthi, et al. Deep high dynamic range imaging of dynamic scenes. ACM Trans. Graph., 36(4):144–1, 2017. [9] G. R. KS, A. Biswas, M. S. Patel, and B. P. Prasad. Deep multi-stage learning for HDR with large object motions. In 2019 IEEE International Conference on Image Processing (ICIP), pages 4714–4718. IEEE, 2019. [10] Z. Liang, J. Cai, Z. Cao, and L. Zhang. CameraNet: A two-stage framework for effective camera ISP learning. IEEE Transactions on Image Processing, 30:2248–2262, 2021. [11] O. Liba, K. Murthy, Y.-T. Tsai, T. Brooks, T. Xue, N. Karnad, Q. He, J. T. Barron, D. Sharlet, R. Geiss, et al. Handheld mobile photography in very low light. ACM Transactions on Graphics (TOG), 38(6):1–16, 2019. [12] C. Liu et al. Beyond pixels: exploring new representations and applications for motion analysis. PhD thesis, Massachusetts Institute of Technology, 2009. [13] Y.-L. Liu, W.-S. Lai, Y.-S. Chen, Y.-L. Kao, M.-H. Yang, Y.-Y. Chuang, and J.-B. Huang. Single-image HDR reconstruction by learning to reverse the camera pipeline. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1651–1660, 2020. [14] S. Moran, S. McDonagh, and G. Slabaugh. CURL: Neural curve layers for global image enhancement. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 9796–9803. IEEE, 2021. [15] K. R. Prabhakar, S. Agrawal, D. K. Singh, B. Ashwath, and R. V. Babu. Towards practical and efficient high-resolution HDR deghosting with CNN. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXI 16, pages 497–513. Springer, 2020. [16] R. Ramanath, W. E. Snyder, Y. Yoo, and M. S. Drew. Color image processing pipeline. IEEE Signal Processing Magazine, 22(1):34–43, 2005. [17] E. Schwartz, R. Giryes, and A. M. Bronstein. DeepISP: Toward learning an end-to-end image processing pipeline. IEEE Transactions on Image Processing, 28(2):912–923, 2018. [18] T. Suda, M. Tanaka, Y. Monno, and M. Okutomi. Deep snapshot HDR imaging using multi-exposure color filter array. In Proceedings of the Asian Conference on Computer Vision, 2020. [19] Z. Teed and J. Deng. RAFT: Recurrent all-pairs field transforms for optical flow. In European Conference on Computer Vision, pages 402–419. Springer, 2020. [20] S. Wu, J. Xu, Y.-W. Tai, and C.-K. Tang. Deep high dynamic range imaging with large foreground motions. In Proceedings of the European Conference on Computer Vision (ECCV), pages 117–132, 2018. [21] Q. Yan, D. Gong, Q. Shi, A. v. d. Hengel, C. Shen, I. Reid, and Y. Zhang. Attention-guided network for ghost-free high dynamic range imaging. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1751–1760, 2019. [22] Q. Yan, D. Gong, P. Zhang, Q. Shi, J. Sun, I. Reid, and Y. Zhang. Multi-scale dense networks for deep high dynamic range imaging. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 41–50. IEEE, 2019. [23] Z. Zhang, H. Wang, M. Liu, R. Wang, J. Zhang, and W. Zuo. Learning RAW-to-sRGB mappings with inaccurately aligned supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4348–4358, 2021. [24] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2223–2232, 2017. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80319 | - |
| dc.description.abstract | Recent work has shown that a deep learning model mapping RAW to sRGB can replace the traditional camera image signal processor (ISP), reducing the time and cost of ISP development. However, as smartphone photography becomes increasingly common, the hardware constraints of smartphones limit single-shot image quality: over-exposed and under-exposed regions are frequent, and a single RAW-to-sRGB model can hardly recover the information in those regions. We therefore propose a method that fuses two RAW images captured at different exposure values to address this problem. In addition, a neural curve model assists the RAW-to-sRGB color mapping. Our experiments confirm that adding neural curve features improves both the color mapping and the color synthesis of the model that fuses the two RAW images. | zh_TW |
| dc.description.provenance | Made available in DSpace on 2022-11-24T03:04:25Z (GMT). No. of bitstreams: 1 U0001-1701202215093300.pdf: 24568855 bytes, checksum: 2e45c7578ef17374d8a460d87bb73eff (MD5) Previous issue date: 2022 | en |
| dc.description.tableofcontents | Verification Letter from the Oral Examination Committee i; Acknowledgements iii; 摘要 v; Abstract vii; Contents ix; List of Figures xiii; List of Tables xv; Chapter 1 Introduction 1; Chapter 2 Related Work 5; 2.1 Deep Network for RAW-to-sRGB Mapping 5; 2.2 Multiple Exposure Fusion 6; 2.2.1 sRGB Image Fusion 6; 2.2.2 RAW Image Fusion 7; 2.3 Neural Curve 7; 2.3.1 Curve in RAW-to-sRGB Mapping 7; 2.3.2 Neural Curve 7; Chapter 3 Dataset 9; 3.1 Synthetic RAW Image 10; 3.1.1 CycleGAN 10; 3.1.2 Inverse ISP Pipeline (Unprocess) 11; 3.1.3 Supervised Training sRGB-to-RAW Model 12; 3.2 Synthetic RAW SICE Dataset 12; Chapter 4 Method 15; 4.1 RAW Alignment Model 15; 4.1.1 RAW-to-sRGB Mapping 15; 4.1.2 Calculate Flow and Apply on Feature 17; 4.2 Exposure Fusion Model 17; 4.2.1 RAW-to-sRGB Mapping with U-Net-style Structure 18; 4.2.2 Neural Curve Feature 19; 4.2.3 Attention Module 20; 4.2.4 Loss 21; Chapter 5 Experiments 25; 5.1 Implementation Detail 25; 5.2 Quantitative Result 25; 5.3 Qualitative Result 26; 5.3.1 RAW Alignment 26; 5.3.2 Multi-Exposure RAW Fusion Result 27; 5.4 Ablation Study 30; 5.5 More Multi-Exposure RAW Fusion Result 32; Chapter 6 Conclusion 37; References 39 | |
| dc.language.iso | en | |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | 多重曝光融合 | zh_TW |
| dc.subject | RAW圖片 | zh_TW |
| dc.subject | 神經網路曲線模型 | zh_TW |
| dc.subject | RAW圖片到sRGB圖片 | zh_TW |
| dc.subject | RAW image | en |
| dc.subject | Neural Curve | en |
| dc.subject | Deep Learning | en |
| dc.subject | Multi-Exposure Fusion | en |
| dc.subject | RAW-to-sRGB Mapping | en |
| dc.title | 利用神經網路曲線模型進行RAW照片多重曝光融合 | zh_TW |
| dc.title | Multi-Exposure RAW Image Fusion with the Neural Curve | en |
| dc.date.schoolyear | 110-1 | |
| dc.description.degree | Master | |
| dc.contributor.oralexamcommittee | 吳賦哲 (Wei-Jiun Su), 葉正聖 (Yu-Hsun Chen), (Guan-Chen Chen) | |
| dc.subject.keyword | 多重曝光融合,RAW圖片到sRGB圖片,RAW圖片,神經網路曲線模型,深度學習, | zh_TW |
| dc.subject.keyword | Multi-Exposure Fusion,RAW-to-sRGB Mapping,RAW image,Neural Curve,Deep Learning, | en |
| dc.relation.page | 42 | |
| dc.identifier.doi | 10.6342/NTU202200081 | |
| dc.rights.note | Authorized for release (restricted to campus access only) | |
| dc.date.accepted | 2022-01-20 | |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
| dc.contributor.author-dept | Graduate Institute of Computer Science and Information Engineering | zh_TW |
| Appears in Collections: | Department of Computer Science and Information Engineering | |
Files in This Item:
| File | Size | Format |
|---|---|---|
| U0001-1701202215093300.pdf (access restricted to NTU campus IPs; off-campus users should connect via the VPN service) | 23.99 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
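The abstract mentions a neural curve model assisting the RAW-to-sRGB color mapping. As a hedged illustration only (not the thesis's actual model), neural curve layers in the style of CURL [14] typically have a small network predict per-channel knot values and then apply the resulting piecewise-linear curve globally to the image. The sketch below shows just the curve-application step; the function name `apply_neural_curve`, the knot count, and the evenly spaced knot layout are all illustrative assumptions:

```python
import numpy as np

def apply_neural_curve(image, knots):
    """Apply a per-channel piecewise-linear curve to an image (illustrative).

    image: float array in [0, 1], shape (H, W, C)
    knots: shape (C, K) curve output values at K evenly spaced inputs in [0, 1]
           (in a neural curve layer these would be predicted by a small network)
    """
    _, _, c = image.shape
    k = knots.shape[1]
    xs = np.linspace(0.0, 1.0, k)  # fixed knot positions on the input axis
    out = np.empty_like(image)
    for ch in range(c):
        # np.interp performs the piecewise-linear lookup for one channel
        out[..., ch] = np.interp(image[..., ch], xs, knots[ch])
    return out

# Sanity check: the identity curve leaves the image (numerically) unchanged.
img = np.random.rand(4, 4, 3)
identity = np.tile(np.linspace(0.0, 1.0, 8), (3, 1))
adjusted = apply_neural_curve(img, identity)
```

Because the curve is global (one lookup table per channel rather than a per-pixel operation), it is cheap to apply at full resolution, which is one reason curve layers are attractive for color adjustment in learned ISP pipelines.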
