Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83455
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 莊永裕 | zh_TW
dc.contributor.advisor | Yung-Yu Chuang | en
dc.contributor.author | 曾尹均 | zh_TW
dc.contributor.author | Ying-Chun Tseng | en
dc.date.accessioned | 2023-03-19T21:07:59Z | -
dc.date.available | 2023-12-26 | -
dc.date.copyright | 2022-10-06 | -
dc.date.issued | 2022 | -
dc.date.submitted | 2002-01-01 | -
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83455 | -
dc.description.abstract | Image quality assessment (IQA) plays a crucial role in understanding and improving the visual experience. In recent years, a large body of IQA research has achieved remarkable results on existing datasets. In these models, the feature-fusion stage is almost always designed around images from synthetic datasets. Training images are therefore usually limited to synthetic datasets, that is, fully aligned images, which makes these models hard to adapt to real-world images. In this thesis, we propose an enhanced full-reference IQA model that takes input images that are not fully aligned, in position or field of view, fuses their features after global-to-local alignment, and assesses image quality. Our model consists of four main components: feature extraction, a cross-attention vision transformer, global alignment, and local alignment. Experimental results show that our model matches the performance of other models on synthetic datasets; on the real-world dataset, it not only demonstrates the importance of alignment but also achieves excellent results compared with other methods. | zh_TW
dc.description.abstract | Image quality assessment (IQA) plays a vital role in understanding and improving the visual experience. In recent years, plenty of IQA research has achieved notable results on existing datasets, but the performance of these models is often limited by their feature-fusion stages' assumption of fully aligned data: training images are usually restricted to synthetic datasets, making the models difficult to fit to the real world. In this paper, we propose an enhanced full-reference IQA model with global-to-local alignment and a vision transformer to handle input images that are not fully aligned, whether in position or field of view. There are four major components in our model: feature extraction, a cross-attention vision transformer, a spatial transformer network, and local patch alignment. The experimental results show that our proposed model performs well on the standard synthetic IQA datasets, while on the real-world dataset it both demonstrates the importance of alignment and achieves outstanding results compared with other methods. | en
dc.description.provenance | Made available in DSpace on 2023-03-19T21:07:59Z (GMT). No. of bitstreams: 1; U0001-1209202217120100.pdf: 26474101 bytes, checksum: 410c881a29412a73d705439ac6bbf304 (MD5); Previous issue date: 2022 | en
dc.description.tableofcontents | Verification Letter from the Oral Examination Committee i
Acknowledgements iii
摘要 (Abstract in Chinese) v
Abstract vii
Contents ix
List of Figures xi
List of Tables xiii
Denotation xv
Chapter 1 Introduction 1
Chapter 2 Related Work 5
2.1 Image Quality Assessment 5
2.2 Vision Transformer 6
2.3 Vision Transformer based IQA 7
2.4 Spatial Transformer Network 8
Chapter 3 Method 9
3.1 Feature Extraction 11
3.2 Multi-Scale Vision Transformer 11
3.2.1 Overview of Cross-Attention Multi-Scale Vision Transformer 11
3.2.2 Image Quality Assessment with Cross-ViT 12
3.3 Global Alignment 13
3.3.1 Model 14
3.3.2 Pretrain and Finetune 15
3.4 Fusion of Features 15
3.4.1 Concatenate 16
3.4.2 Local Patch Alignment 16
Chapter 4 Experiments 19
4.1 Datasets 19
4.2 Implementation Details 22
4.3 Qualitative Results 23
Chapter 5 Conclusion 29
References 31 | -
dc.language.iso | zh_TW | -
dc.subject | 全局到局部的對齊處理 | zh_TW
dc.subject | 圖像質量評估 | zh_TW
dc.subject | 深度學習 | zh_TW
dc.subject | Global-to-Local Alignment | en
dc.subject | Image Quality Assessment | en
dc.subject | Deep Learning | en
dc.title | 使用全局到局部對齊和 ViT 增強全參考圖像質量評估 | zh_TW
dc.title | Enhance Full-Reference Image Quality Assessment with Global-to-Local Alignment and Vision Transformer | en
dc.type | Thesis | -
dc.date.schoolyear | 110-2 | -
dc.description.degree | 碩士 (Master's) | -
dc.contributor.oralexamcommittee | 吳賦哲; 葉正聖 | zh_TW
dc.contributor.oralexamcommittee | Fu-Che Wu; Jeng-Sheng Yeh | en
dc.subject.keyword | 深度學習, 圖像質量評估, 全局到局部的對齊處理 | zh_TW
dc.subject.keyword | Deep Learning, Image Quality Assessment, Global-to-Local Alignment | en
dc.relation.page | 34 | -
dc.identifier.doi | 10.6342/NTU202203313 | -
dc.rights.note | 未授權 (not authorized) | -
dc.date.accepted | 2022-09-14 | -
dc.contributor.author-college | 電機資訊學院 (College of Electrical Engineering and Computer Science) | -
dc.contributor.author-dept | 資訊工程學系 (Department of Computer Science and Information Engineering) | -
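The record above describes a four-component full-reference IQA pipeline: feature extraction, a cross-attention vision transformer, global alignment via a spatial transformer network, and local patch alignment. Since the thesis PDF is not publicly accessible, the following is only a minimal, hypothetical PyTorch sketch of what such a pipeline could look like; every module, layer size, and name is an assumption for illustration, not the author's implementation, and the dedicated local-patch-alignment stage is only approximated here by the cross-attention step.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalAlignmentSTN(nn.Module):
    """Hypothetical STN-style global alignment: predicts an affine warp that
    registers the distorted image to the reference before feature fusion."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 6),
        )
        # Standard STN practice: initialize the regressor to the identity transform.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, ref, dist):
        theta = self.loc(torch.cat([ref, dist], dim=1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, dist.size(), align_corners=False)
        return F.grid_sample(dist, grid, align_corners=False)


class FullRefIQA(nn.Module):
    """Feature extraction -> global alignment -> cross-attention fusion ->
    scalar quality score. Local patch alignment is not modeled explicitly."""
    def __init__(self, dim=64):
        super().__init__()
        self.stn = GlobalAlignmentSTN()
        self.backbone = nn.Sequential(          # stand-in for a pretrained CNN
            nn.Conv2d(3, dim, 3, stride=4, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=4, padding=1),
        )
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, 1)

    def forward(self, ref, dist):
        dist = self.stn(ref, dist)                              # global alignment
        f_ref = self.backbone(ref).flatten(2).transpose(1, 2)   # (B, N, C) tokens
        f_dist = self.backbone(dist).flatten(2).transpose(1, 2)
        fused, _ = self.cross_attn(f_dist, f_ref, f_ref)        # distorted attends to reference
        return self.head(fused.mean(dim=1)).squeeze(-1)         # one score per image


if __name__ == "__main__":
    model = FullRefIQA()
    ref, dist = torch.rand(2, 3, 224, 224), torch.rand(2, 3, 224, 224)
    print(model(ref, dist).shape)  # torch.Size([2])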
Appears in Collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in This Item:
File | Size | Format
ntu-110-2.pdf (not authorized for public access) | 25.85 MB | Adobe PDF


All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.
