NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/95329
Full metadata record
DC Field / Value / Language
dc.contributor.advisor: 廖世偉 (zh_TW)
dc.contributor.advisor: Shie-Wei Liao (en)
dc.contributor.author: 費俊昱 (zh_TW)
dc.contributor.author: Chun-Yu Fei (en)
dc.date.accessioned: 2024-09-05T16:11:50Z
dc.date.available: 2024-09-06
dc.date.copyright: 2024-09-05
dc.date.issued: 2024
dc.date.submitted: 2024-08-09
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/95329
dc.description.abstract (zh_TW): User-generated content video quality assessment (UGC-VQA) aims to predict the quality of user-generated videos without a reference video. Most current research focuses on general user-generated videos with unknown, authentic distortions, and prior work has achieved good predictive performance with traditional image algorithms and deep learning. However, these models cannot reliably assess the perceptual quality of enhanced videos, leaving this part of the field underexplored. We therefore propose a two-stage training strategy, applicable to any model, that trains on a large video quality assessment dataset augmented with large-scale synthetic distortions in order to predict the perceptual quality of enhanced videos. We further show empirically that the model also generalizes to general user-generated videos.

To quantify the impact of various distortion types on the perceived quality of user-generated videos, we propose a method that combines data amplification with distortion learning. Specifically, we impose on an existing video dataset a variety of distortions that can appear in user-generated videos, yielding a larger video dataset containing synthetic distortions. For the newly generated distorted data, we use a pre-trained large language model to produce pseudo-scores, then build a Siamese network and train it on pairs of distorted data. After training, we freeze the backbone's parameters to reduce computational complexity. When fine-tuning on downstream data, we train only an additional lightweight network, which strengthens the model's perception of the whole input and weights the model's output features to produce the final predicted score. Using a large video enhancement dataset, we demonstrate the model's predictive performance and its shortcomings, and we propose several ways to improve distortion-based quality assessment of enhanced videos.
dc.description.abstract (en): User-generated content video quality assessment (UGC-VQA) aims to predict the perceptual quality of user-generated videos without a reference. Currently, most work focuses on general user-generated videos with unknown, authentic distortions, and several hand-crafted and deep-learning methods have achieved high performance on them. Nevertheless, these models perform inconsistently when evaluating the perceptual quality of UGC videos with enhancement effects, leaving the UGC-VQA task only partially solved. In this work, we propose a model-agnostic two-stage training strategy: a pre-training stage that trains a dual-encoder architecture, followed by a fine-tuning stage that trains a lightweight fusion network to predict the perceptual quality of enhanced videos. We demonstrate that our solution extends to the more unconstrained setting of general UGC-VQA datasets.

To capture the synthetic effects that accompany enhanced videos, we present a learning-by-degrading approach with a data amplification method to quantify the impact of various distortion types on the perceptual quality of videos. Specifically, we impose multiple UGC-related degradations to enlarge an existing video dataset and leverage a well-trained MLLM to produce pseudo-scores for the newly generated distorted data used in pre-training. We then build a Siamese network that learns the degradations from pairwise inputs of the same distortion type. During fine-tuning on downstream data, the backbone weights are frozen to reduce computational complexity, and only a lightweight global weighted fusion network is trained to capture the additional information. We demonstrate the proposed framework's effectiveness and weaknesses by evaluating it on the largest video enhancement dataset, which spans several categories of enhancement approaches. Finally, we suggest future work to improve the proposed method.
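The data-amplification step described in both abstracts can be made concrete. The Python sketch below is a minimal, hypothetical rendering, not the thesis's implementation: the distortion set (blur, JPEG compression, noise), the severity scale, and the `pseudo_scorer` callable standing in for the MLLM scorer are all assumptions, since this record does not disclose those details.

```python
# Minimal sketch of the data-amplification idea (illustrative assumptions only).
import io
import numpy as np
from PIL import Image, ImageFilter

def degrade(frame: Image.Image, kind: str, level: int) -> Image.Image:
    """Apply one synthetic distortion at severity `level` (1 = mildest)."""
    if kind == "blur":                                  # Gaussian blur
        return frame.filter(ImageFilter.GaussianBlur(radius=level))
    if kind == "jpeg":                                  # compression artifacts
        buf = io.BytesIO()
        frame.save(buf, format="JPEG", quality=max(5, 90 - 20 * level))
        buf.seek(0)
        return Image.open(buf).convert("RGB")
    if kind == "noise":                                 # additive Gaussian noise
        arr = np.asarray(frame, dtype=np.float32)
        arr += np.random.normal(0.0, 5.0 * level, arr.shape)
        return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    raise ValueError(f"unknown distortion kind: {kind}")

def amplify(frames, pseudo_scorer, kinds=("blur", "jpeg", "noise"), levels=3):
    """Expand a frame set with every (kind, level) combination, attaching a
    pseudo quality score to each distorted frame for later pre-training.
    `pseudo_scorer` is a hypothetical stand-in for the MLLM scorer."""
    samples = []
    for frame in frames:
        for kind in kinds:
            for level in range(1, levels + 1):
                distorted = degrade(frame, kind, level)
                samples.append((distorted, kind, level, pseudo_scorer(distorted)))
    return samples
```

A companion sketch of the two-stage training loop itself follows the metadata record below.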
dc.description.provenance (en): Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-09-05T16:11:50Z. No. of bitstreams: 0
dc.description.provenance (en): Made available in DSpace on 2024-09-05T16:11:50Z (GMT). No. of bitstreams: 0
dc.description.tableofcontents:
Acknowledgements i
Abstract (Chinese) ii
Abstract iii
Contents v
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
1.1 Introduction 1
Chapter 2 Related Work 5
2.1 General UGC-VQA models 5
2.2 Video Enhancement UGC-VQA models 6
2.3 Synthetic and Authentic distortion datasets 7
2.4 VLM and LLM for VQA 9
Chapter 3 Methodology 10
3.1 Data generation technique for UGC videos 10
3.1.1 Frame extraction 10
3.1.2 Data amplification 11
3.2 Two-stage training strategy 13
3.2.1 Pre-training stage of the Aesthetic branch 14
3.2.2 Pre-training stage of the Degradation branch 14
3.2.3 Fine-tuning stage 16
Chapter 4 Evaluation 20
4.1 Experimental Setup 20
4.1.1 Pre-training 20
4.1.2 Fine-tuning 21
4.2 Databases and evaluation metrics 22
4.2.1 Evaluation metrics 22
4.2.2 Evaluation databases 23
4.2.3 Evaluation criteria 24
4.3 Performance Comparison 25
4.4 Ablation Studies 31
Chapter 5 Conclusion 33
References 35
dc.language.iso: en
dc.title: 利用大量人工失真效果評估增強影像之視覺品質 (zh_TW)
dc.title: DEGRAVE: Learning from Synthetic Degradation for Assessing Perceptual Quality of Video Enhancement (en)
dc.type: Thesis
dc.date.schoolyear: 112-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 盧瑞山;傅楸善;傅昭穎 (zh_TW)
dc.contributor.oralexamcommittee: Ruei-Shan Lu; Chiou-Shann Fuh; Zhao-Ying Fu (en)
dc.subject.keyword: 影像品質評估, 人工失真, 孿生網路, 影像增強 (zh_TW)
dc.subject.keyword: Video quality assessment, Synthetic distortion, Siamese network, Video enhancement (en)
dc.relation.page: 45
dc.identifier.doi: 10.6342/NTU202403836
dc.rights.note: Authorized (open access worldwide)
dc.date.accepted: 2024-08-12
dc.contributor.author-college: College of Electrical Engineering and Computer Science
dc.contributor.author-dept: Department of Computer Science and Information Engineering
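
Following up on the abstract above, here is a hedged PyTorch sketch of the two-stage strategy: Siamese pre-training on same-distortion pairs, then fine-tuning a lightweight fusion head over a frozen backbone. The backbone, feature dimension, margin-ranking loss, and the gating design of `FusionHead` are illustrative assumptions; the thesis's actual architecture and loss are not specified in this record.

```python
# Hedged sketch of the two-stage strategy (not the thesis's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class QualityNet(nn.Module):
    """Shared-weight encoder plus a scalar scoring head."""
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone
        self.score_head = nn.Linear(feat_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score_head(self.backbone(x))

def siamese_step(model, opt, x_a, x_b, pseudo_a, pseudo_b, margin=0.1):
    """Stage 1: one pre-training step. Both inputs carry the same distortion
    type at different severities and share weights through `model`; the
    margin-ranking loss on pseudo-scores is one plausible choice."""
    s_a = model(x_a).squeeze(1)
    s_b = model(x_b).squeeze(1)
    target = (pseudo_a > pseudo_b).float() * 2 - 1   # +1 if a should outrank b
    loss = F.margin_ranking_loss(s_a, s_b, target, margin=margin)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

class FusionHead(nn.Module):
    """Lightweight head trained during fine-tuning: it re-weights the frozen
    backbone features globally before regressing the final score."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        self.regressor = nn.Linear(feat_dim, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.gate(feats) * feats)

def finetune(model, head, loader, lr=1e-4, epochs=5):
    """Stage 2: backbone frozen, only the fusion head receives gradients."""
    for p in model.backbone.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    for _ in range(epochs):
        for frames, mos in loader:                   # mos: subjective scores
            with torch.no_grad():
                feats = model.backbone(frames)       # frozen feature extraction
            loss = F.mse_loss(head(feats).squeeze(1), mos)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Freezing the backbone in stage two means only the small head receives gradients, which matches the abstract's stated goal of reducing computational complexity during fine-tuning.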
Appears in collections: Department of Computer Science and Information Engineering

Files in this item:
File | Size | Format
ntu-112-2.pdf | 11.31 MB | Adobe PDF