Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83455
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 莊永裕 | zh_TW |
dc.contributor.advisor | Yung-Yu Chuang | en |
dc.contributor.author | 曾尹均 | zh_TW |
dc.contributor.author | Ying-Chun Tseng | en |
dc.date.accessioned | 2023-03-19T21:07:59Z | - |
dc.date.available | 2023-12-26 | - |
dc.date.copyright | 2022-10-06 | - |
dc.date.issued | 2022 | - |
dc.date.submitted | 2002-01-01 | - |
dc.identifier.citation | [1] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[2] F. Battisti, E. Bosc, M. Carli, P. Le Callet, and S. Perugia. Objective image quality assessment of 3D synthesized views. Signal Processing: Image Communication, 30:78–88, 2015.
[3] S. Bosse, D. Maniry, K.-R. Müller, T. Wiegand, and W. Samek. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Transactions on Image Processing, 27(1):206–219, 2017.
[4] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko. End-to-end object detection with transformers. In European Conference on Computer Vision, pages 213–229. Springer, 2020.
[5] C.-F. R. Chen, Q. Fan, and R. Panda. CrossViT: Cross-attention multi-scale vision transformer for image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 357–366, 2021.
[6] M. Cheon, S.-J. Yoon, B. Kang, and J. Lee. Perceptual image quality assessment with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 433–442, 2021.
[7] S. Chikkerur, V. Sundaram, M. Reisslein, and L. J. Karam. Objective video quality assessment methods: A classification, review, and performance comparison. IEEE Transactions on Broadcasting, 57(2):165–182, 2011.
[8] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
[9] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
[10] Y. Fang, H. Zhu, Y. Zeng, K. Ma, and Z. Wang. Perceptual quality assessment of smartphone photography. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
[11] H. Guo, Y. Bin, Y. Hou, Q. Zhang, and H. Luo. IQMA network: Image quality multi-scale assessment network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 443–452, 2021.
[12] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. Advances in Neural Information Processing Systems, 28, 2015.
[13] M. Narwaria and W. Lin. Objective image quality assessment based on support vector regression. IEEE Transactions on Neural Networks, 21(3):515–519, 2010.
[14] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019.
[15] N. Ponomarenko, L. Jin, O. Ieremeiev, V. Lukin, K. Egiazarian, J. Astola, B. Vozel, K. Chehdi, M. Carli, F. Battisti, et al. Image database TID2013: Peculiarities, results and perspectives. Signal Processing: Image Communication, 30:57–77, 2015.
[16] N. Ponomarenko, V. Lukin, A. Zelensky, K. Egiazarian, M. Carli, and F. Battisti. TID2008 - a database for evaluation of full-reference visual quality assessment metrics. Advances of Modern Radioelectronics, 10(4):30–45, 2009.
[17] H. R. Sheikh, M. F. Sabir, and A. C. Bovik. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Transactions on Image Processing, 15(11):3440–3451, 2006.
[18] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
[19] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[20] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
[21] F. Yang, H. Yang, J. Fu, H. Lu, and B. Guo. Learning texture transformer network for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5791–5800, 2020.
[22] J. You and J. Korhonen. Transformer for image quality assessment. In 2021 IEEE International Conference on Image Processing (ICIP), pages 1389–1393. IEEE, 2021.
[23] G. Zhai and X. Min. Perceptual image quality assessment: a survey. Science China Information Sciences, 63(11):1–52, 2020.
[24] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586–595, 2018.
[25] X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai. Deformable DETR: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159, 2020. | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83455 | - |
dc.description.abstract | 圖像質量評估(IQA)在理解和改善視覺體驗方面起著至關重要的作用。近年來,已經有大量的 IQA 研究在現有的資料集中有了顯著的結果。這些模型中,特徵融合的部分幾乎是基於合成資料集的圖像而設計的。因此,訓練圖像通常僅限於合成資料集,也就是完全對齊的圖像,這也導致這些模型難以適應真實世界的圖像。在本文中,我們提出了一種強化的全參考 IQA 模型,此模型可以將未完全對齊的輸入圖像,包括位置、視野,經過全局到局部的對齊處理後做特徵融合,並評估圖像質量。
我們的模型分為四個主要部分:特徵提取、交叉注意力視覺變換器、全局對齊和局部對齊。 實驗結果顯示,我們提出的模型在模擬的資料集媲美其他模型的性能。而在真實資料集上,我們提出的模型不僅體現了對齊的重要性,與其他方法相比,我們也取得了非常出色的結果。 | zh_TW |
dc.description.abstract | Image quality assessment (IQA) plays a vital role in understanding and improving visual experience. In recent years, a large body of IQA research has achieved notable results on existing datasets. However, the feature-fusion stages of these models are almost always designed around synthetic datasets, so their training images are usually restricted to fully aligned pairs, which makes the models difficult to adapt to real-world images. In this paper, we propose an enhanced full-reference IQA model with global-to-local alignment and a vision transformer that can handle input images that are not fully aligned, whether in position or field of view.
Our model has four major components: feature extraction, a cross-attention vision transformer, a spatial transformer network for global alignment, and local patch alignment. Experimental results show that the proposed model matches the performance of other models on standard *synthetic* IQA datasets. On the *real* dataset, the proposed model demonstrates the importance of alignment and, compared with other methods, achieves outstanding results. | en |
dc.description.provenance | Made available in DSpace on 2023-03-19T21:07:59Z (GMT). No. of bitstreams: 1 U0001-1209202217120100.pdf: 26474101 bytes, checksum: 410c881a29412a73d705439ac6bbf304 (MD5) Previous issue date: 2022 | en |
dc.description.tableofcontents | Verification Letter from the Oral Examination Committee i
Acknowledgements iii
摘要 v
Abstract vii
Contents ix
List of Figures xi
List of Tables xiii
Denotation xv
Chapter 1 Introduction 1
Chapter 2 Related Work 5
2.1 Image Quality Assessment 5
2.2 Vision Transformer 6
2.3 Vision Transformer based IQA 7
2.4 Spatial Transformer Network 8
Chapter 3 Method 9
3.1 Feature Extraction 11
3.2 Multi-Scale Vision Transformer 11
3.2.1 Overview of Cross-Attention Multi-Scale Vision Transformer 11
3.2.2 Image Quality Assessment with Cross-ViT 12
3.3 Global Alignment 13
3.3.1 Model 14
3.3.2 Pretrain and Finetune 15
3.4 Fusion of Features 15
3.4.1 Concatenate 16
3.4.2 Local Patch Alignment 16
Chapter 4 Experiments 19
4.1 Datasets 19
4.2 Implementation Details 22
4.3 Qualitative Results 23
Chapter 5 Conclusion 29
References 31 | - |
dc.language.iso | zh_TW | - |
dc.title | 使用全局到局部對齊和 ViT 增強全參考圖像質量評估 | zh_TW |
dc.title | Enhance Full-Reference Image Quality Assessment with Global-to-Local Alignment and Vision Transformer | en |
dc.type | Thesis | - |
dc.date.schoolyear | 110-2 | - |
dc.description.degree | Master | - |
dc.contributor.oralexamcommittee | 吳賦哲;葉正聖 | zh_TW |
dc.contributor.oralexamcommittee | Fu-Che Wu;Jeng-Sheng Yeh | en |
dc.subject.keyword | 深度學習,圖像質量評估,全局到局部的對齊處理, | zh_TW |
dc.subject.keyword | Deep Learning, Image Quality Assessment, Global-to-Local Alignment | en |
dc.relation.page | 34 | - |
dc.identifier.doi | 10.6342/NTU202203313 | - |
dc.rights.note | Not authorized | - |
dc.date.accepted | 2022-09-14 | - |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
dc.contributor.author-dept | Department of Computer Science and Information Engineering | - |
Appears in Collections: | Department of Computer Science and Information Engineering
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-110-2.pdf (currently not authorized for public access) | 25.85 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
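The abstract above describes a global-to-local alignment pipeline for full-reference IQA: a global alignment step first registers the distorted image against the reference, a local patch alignment step then refines residual misalignment per patch, and a score is computed on the fused result. The following is a toy NumPy sketch of that idea only, not the thesis implementation: the exhaustive translation search stands in for the learned spatial transformer network, PSNR stands in for the learned cross-attention quality head, and all function names are illustrative.

```python
import numpy as np

def global_align(ref, dist, max_shift=8):
    """Estimate a single global translation (stand-in for the
    spatial-transformer global alignment) by exhaustive search."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(dist, dy, axis=0), dx, axis=1)
            err = np.mean((ref - shifted) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

def local_patch_align(ref, dist, patch=16, radius=2):
    """Refine alignment per patch within a small search window,
    then fuse the best-matching patches back into one image."""
    h, w = ref.shape
    out = dist.copy()
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            rp = ref[y:y + patch, x:x + patch]
            best_err, best_patch = np.inf, None
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + patch > h or xx + patch > w:
                        continue  # candidate patch would leave the image
                    dp = dist[yy:yy + patch, xx:xx + patch]
                    err = np.mean((rp - dp) ** 2)
                    if err < best_err:
                        best_err, best_patch = err, dp
            out[y:y + patch, x:x + patch] = best_patch
    return out

def quality_score(ref, dist):
    """Toy full-reference score after global-to-local alignment
    (PSNR here, in place of the learned transformer head)."""
    dy, dx = global_align(ref, dist)
    globally_aligned = np.roll(np.roll(dist, dy, axis=0), dx, axis=1)
    fused = local_patch_align(ref, globally_aligned)
    mse = np.mean((ref - fused) ** 2)
    return 10.0 * np.log10(1.0 / max(mse, 1e-12))
```

With a distorted image that is just a shifted copy of the reference, the global step recovers the exact offset, the local step leaves the patches unchanged, and the score saturates; with real distortions, the alignment steps only remove the geometric mismatch so that the score reflects the distortion itself rather than the misregistration.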