DSpace

The DSpace institutional repository preserves digital materials of all kinds (e.g., text, images, PDF) and makes them easy to access.

NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/21487

Full metadata record (DC field: value [language]):
dc.contributor.advisor: 莊永裕 (Yung-Yu Chuang)
dc.contributor.author: Jyun-Ruei Wong [en]
dc.contributor.author: 翁浚瑞 [zh_TW]
dc.date.accessioned: 2021-06-08T03:35:32Z
dc.date.copyright: 2021-02-22
dc.date.issued: 2021
dc.date.submitted: 2021-01-25
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/21487
dc.description.abstract: Owing to hardware limitations, noise is an unavoidable problem in photography. Many denoising methods have been developed to deal with it. One of them, video denoising, uses the other frames of a video to help denoise each frame. This thesis proposes a model that uses N3Net as its backbone and extends the concept to multi-image denoising. We also train a sub-model to learn a so-called detail-level map; the detail-level map is to an image what a noise-level map is to noise. The full model uses the detail-level map together with the original frames to predict the final result. With the 3D N3Net we obtain visual quality comparable to prior work, and with a close-to-ground-truth detail-level map we obtain further improvements. [zh_TW]
dc.description.abstract: Noise is an inevitable problem in photography due to hardware limitations. To tackle it, researchers have developed various kinds of denoising methods. One class of methods uses neighboring frames of a video to help denoise each frame, so-called video denoising. In this thesis, we use N3Net, which leverages neighboring patches to aid denoising, as our backbone, and extend its concept to the multi-image denoising problem. Furthermore, we train a sub-model to learn a so-called detail-level map of an image, an analogy to the noise-level map of noise in photography terms. Finally, we use both the detail-level map and the original frames to predict the denoised result. We show that the 3D N3Net achieves visual quality similar to state-of-the-art methods, and that with a close-to-ground-truth detail-level map we can further improve the result. [en]
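The abstract describes combining a predicted detail-level map with the original frames to produce the final output. The toy sketch below illustrates that idea only; it is not the thesis's actual model. The function name is hypothetical, and plain temporal averaging stands in for the 3D N3Net's neighbor-patch aggregation: the map gates, per pixel, between the original center frame (high detail) and the temporally denoised estimate (flat regions).

```python
import numpy as np

def toy_video_denoise(frames, detail_map):
    """Blend a temporally denoised frame with the original, gated by a detail-level map.

    frames: (T, H, W) stack of noisy grayscale frames; the center frame is denoised.
    detail_map: (H, W) values in [0, 1]; 1 = fine detail (trust the original pixel),
    0 = flat region (trust the temporal average).
    """
    center = frames[len(frames) // 2]
    # Simple stand-in for the model's multi-frame aggregation.
    temporal_mean = frames.mean(axis=0)
    return detail_map * center + (1.0 - detail_map) * temporal_mean
```

With an all-zero map the output is the temporal average; with an all-one map the center frame passes through unchanged, which is the limiting behavior a detail-level map is meant to interpolate between.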
dc.description.provenance: Made available in DSpace on 2021-06-08T03:35:32Z (GMT). No. of bitstreams: 1; U0001-2201202111175000.pdf: 3532849 bytes, checksum: 12ecf13042776e8ee92bdadf74100f02 (MD5). Previous issue date: 2021 [en]
dc.description.tableofcontents:
Verification Letter from the Oral Examination Committee i
Acknowledgements ii
摘要 iii
Abstract iv
Contents v
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
Chapter 2 Related Work 4
2.1 Denoising Models of Synthetic Noise 5
2.2 Denoising Models of Real Noise 5
2.3 Burst Denoising Models 6
2.4 Video Denoising Models 6
2.5 Synthetic Noise Datasets 6
2.6 Real Noise Datasets 7
Chapter 3 Method 8
3.1 Overall Architecture 8
3.2 Preliminary of N3Net 8
3.3 3D N3Net 10
3.4 Detail-Level Map 12
Chapter 4 Experiments and Discussion 14
4.1 Dataset 14
4.2 Implementation Details 14
4.3 Result 15
4.4 Ablation Study 16
Chapter 5 Conclusion 19
References 20
dc.language.iso: en
dc.title: 使用三維 N3Net 和細節水平圖作影片去噪 [zh_TW]
dc.title: Video Denoising using 3D N3Net and Detail-Level Map [en]
dc.type: Thesis
dc.date.schoolyear: 109-1
dc.description.degree: Master's (碩士)
dc.contributor.oralexamcommittee: 葉正聖 (Jeng-Sheng Yeh), 吳賦哲 (Fu-Che Wu)
dc.subject.keyword: 深度學習, 影像去雜訊, 影片去雜訊 [zh_TW]
dc.subject.keyword: Deep learning, Image Denoising, Video Denoising [en]
dc.relation.page: 25
dc.identifier.doi: 10.6342/NTU202100123
dc.rights.note: Not authorized (未授權)
dc.date.accepted: 2021-01-26
dc.contributor.author-college: College of Electrical Engineering and Computer Science (電機資訊學院) [zh_TW]
dc.contributor.author-dept: Graduate Institute of Networking and Multimedia (資訊網路與多媒體研究所) [zh_TW]
Appears in collections: Graduate Institute of Networking and Multimedia (資訊網路與多媒體研究所)

Files in this item:
File: U0001-2201202111175000.pdf (currently not authorized for public access)
Size: 3.45 MB
Format: Adobe PDF


Except where otherwise noted, items in this system are protected by copyright, with all rights reserved.

Contact information
No. 1, Sec. 4, Roosevelt Rd., Da'an Dist., Taipei 10617, Taiwan (R.O.C.)
Tel: (02) 3366-2353
Email: ntuetds@ntu.edu.tw
© NTU Library All Rights Reserved