Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90788

Full metadata record

| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 莊永裕 | zh_TW |
| dc.contributor.advisor | Yung-Yu Chuang | en |
| dc.contributor.author | 吳勝濬 | zh_TW |
| dc.contributor.author | Sheng-Chun Wu | en |
| dc.date.accessioned | 2023-10-03T17:37:26Z | - |
| dc.date.available | 2023-11-09 | - |
| dc.date.copyright | 2023-10-03 | - |
| dc.date.issued | 2023 | - |
| dc.date.submitted | 2023-07-24 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90788 | - |
| dc.description.abstract | 連拍,一種快速拍攝多張圖像的過程,帶來了獨特的挑戰,包括幀間的大幅度移動和在圖像捕獲過程中引入的噪聲。然而,良好的圖像潛在品質為克服這些障礙提供了顯著的動力。一種有效的提高連拍圖像品質的技術是同時結合去噪和去馬賽克的聯合方法。雖然這種聯合方法顯著提高了圖像品質,但往往由於幀間的大幅度移動而遇到困難。為了應對這一挑戰,我們引入了一種兩階段對齊方案。這種粗略對齊技術熟練地處理大幅度的像素位移,降低對齊的複雜性,進而幫助後續更進階的圖像修復技術。然而,在完成粗略對齊之後,需要更精細的對齊來糾正幀間的微小差異。一種有效的解決方案是結合邊緣增強特徵對齊模塊和金字塔可變形對齊模塊。這種方式可以處理大和小的像素位移,確保了對齊的準確性,從而顯著提高圖像重建的品質。另一方面,去噪和去馬賽克技術的一個副作用可能是紋理細節的丟失。為了儘量避免此情況,可以應用非銳化濾鏡方法。這種方法通過增強邊緣的對比度來提高圖像的銳利度,有效地恢復在去噪和去馬賽克過程中可能消失的紋理。此外,基於轉換器的結構可以捕捉圖像數據內部的長距離相依性。通過理解和利用圖像數據內部的複雜關係,這種創新的結構進一步增強了圖像去噪和去馬賽克過程的性能。總體而言,我們提出了一種較為全面的方法集成了聯合去噪和去馬賽克、兩階段對齊、邊緣增強特徵對齊模塊、金字塔可變形對齊模塊、非銳化濾鏡,以及基於轉換器的結構,可以為連拍圖像修復帶來的挑戰提供一種解決方案。值得注意的是,這種方法顯著提高了合成和真實世界圖像去噪和去馬賽克的品質。 | zh_TW |
| dc.description.abstract | Burst photography, a process involving the rapid capture of multiple images, presents unique challenges, including substantial shifts between frames and noise introduced during image capture. However, the potential for superior image quality provides a significant motivation for overcoming these hurdles. An effective technique for enhancing the quality of burst images combines the processes of denoising and demosaicing. While this joint approach significantly improves image quality, it often encounters difficulties due to large shifts between frames. In response to this challenge, we introduce a two-stage alignment scheme. Its coarse alignment stage adeptly manages large pixel shifts, reducing alignment complexity and preparing the ground for more advanced image restoration techniques. Once coarse alignment is complete, however, a more refined alignment is still necessary to correct minor discrepancies between frames. Combining an edge-boosting feature alignment module with a pyramid deformable alignment module provides a powerful solution for this refined alignment. This dual-module approach ensures a comprehensive alignment process that handles both large and small shifts, thereby significantly improving image reconstruction. On the other hand, one side effect of denoising and demosaicing can be the loss of texture detail. To counter this, the unsharp masking method can be applied (a minimal illustrative sketch is given after this metadata table). This method enhances the sharpness of an image by boosting the contrast of edges, effectively restoring textures that may have been lost during denoising and demosaicing. In addition, a transformer-based structure can capture long-range dependencies within the image data. By understanding and exploiting the intricate relationships within the image data, this structure further enhances the performance of the denoising and demosaicing processes. In conclusion, a comprehensive approach that integrates joint denoising and demosaicing, a two-stage alignment scheme, an edge-boosting feature alignment module, a pyramid deformable alignment module, unsharp masking, and a transformer-based structure can provide a holistic solution to the challenges presented by burst images. Notably, this approach significantly improves denoising and demosaicing quality on both synthetic and real-world images. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-10-03T17:37:26Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2023-10-03T17:37:26Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Abstract i
List of Figures v
List of Tables vii
1 Introduction 1
2 Related Work 3
2.1 Demosaicing 3
2.2 Denoising 4
2.3 Multi-frame Restoration 4
2.4 Joint Denoising and Demosaicking for Burst Images 5
2.5 Unsharp Masking 5
2.6 Low-Level Vision Transformer 6
3 Methodology 7
3.1 Coarse Alignment Module 10
3.2 Single RAW Image Denoising Module 11
3.3 Feature Extraction Module 13
3.3.1 Preprocessing 13
3.3.2 Swin Transformer Block 14
3.4 Refined Alignment Module 15
3.4.1 Deformable Convolutional Networks 16
3.4.2 Pyramid Deformable Alignment 18
3.4.3 Edge Boosting Feature Alignment 18
3.4.4 Pyramid Edge Boosting DCN Alignment 18
3.5 Feature Fusion Module 22
3.6 Swin Transformer Reconstruction Module 22
4 Experiment 26
4.1 Datasets 26
4.2 Preprocessing 26
4.3 Results 29
4.3.1 Quantitative Results 29
4.3.2 Visual Results - Synthetic Dataset 30
4.3.3 Visual Results - Real-world Dataset 31
5 Ablation Study 39
6 Conclusion 41
Reference 43 | - |
| dc.language.iso | en | - |
| dc.subject | 連拍 | zh_TW |
| dc.subject | 視覺轉換器 | zh_TW |
| dc.subject | 非銳化濾鏡 | zh_TW |
| dc.subject | 可變形卷積 | zh_TW |
| dc.subject | 多幀對齊 | zh_TW |
| dc.subject | 去馬賽克 | zh_TW |
| dc.subject | 去噪 | zh_TW |
| dc.subject | Denoising | en |
| dc.subject | Burst Photography | en |
| dc.subject | Swin Transformer | en |
| dc.subject | Unsharp Masking | en |
| dc.subject | Alignment | en |
| dc.subject | Deformable Convolution | en |
| dc.subject | Demosaicing | en |
| dc.title | 多張影像原始檔去噪及去馬賽克 | zh_TW |
| dc.title | Multiple Raw Image Denoise and Demosaic | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 111-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 葉正聖;吳賦哲 | zh_TW |
| dc.contributor.oralexamcommittee | Jeng-Sheng Yeh;Fu-Che Wu | en |
| dc.subject.keyword | 連拍,去噪,去馬賽克,多幀對齊,可變形卷積,非銳化濾鏡,視覺轉換器 | zh_TW |
| dc.subject.keyword | Burst Photography, Denoising, Demosaicing, Alignment, Deformable Convolution, Unsharp Masking, Swin Transformer | en |
| dc.relation.page | 48 | - |
| dc.identifier.doi | 10.6342/NTU202301263 | - |
| dc.rights.note | 未授權 | - |
| dc.date.accepted | 2023-07-25 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 資訊工程學系 | - |
| Appears in Collections: | 資訊工程學系 | |
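
The English abstract above describes unsharp masking as the mechanism used to recover texture lost during denoising and demosaicing: sharpen the image by boosting the contrast of edges. The sketch below is a minimal, generic illustration of that technique rather than the thesis's actual implementation; the function name, the NumPy/SciPy dependencies, and the default `sigma`/`amount` values are assumptions made purely for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image: np.ndarray, sigma: float = 1.5, amount: float = 0.7) -> np.ndarray:
    """Classic unsharp masking: add a scaled high-frequency residual back to the image.

    image  : float array in [0, 1], shape HxW or HxWxC (illustrative convention)
    sigma  : Gaussian blur radius used to estimate the low-frequency component
    amount : strength of the edge boost
    """
    if image.ndim == 3:
        # Blur each channel independently so colors do not bleed across channels.
        blurred = np.stack(
            [gaussian_filter(image[..., c], sigma) for c in range(image.shape[-1])],
            axis=-1,
        )
    else:
        blurred = gaussian_filter(image, sigma)
    # High-frequency residual = original minus blurred; scaling it up raises edge contrast.
    residual = image - blurred
    return np.clip(image + amount * residual, 0.0, 1.0)
```

In practice the blur radius and boost amount would be tuned per dataset, since too strong a boost re-amplifies residual noise along edges.
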
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-111-2.pdf (Restricted Access) | 2.89 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
