Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/81827
Full metadata record
dc.contributor.advisor: 傅立成 (Li-Chen Fu)
dc.contributor.author: Jie Ting [en]
dc.contributor.author: 丁杰 [zh_TW]
dc.date.accessioned: 2022-11-25T03:04:30Z
dc.date.available: 2024-09-01
dc.date.copyright: 2021-11-06
dc.date.issued: 2021
dc.date.submitted: 2021-08-18
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/81827
dc.description.abstract: In hospital emergency departments, the triage system helps medical staff decide how critical each patient is. Taiwan uses the five-level Taiwan Triage and Acuity Scale, yet statistics show that more than half (54%) of emergency patients are classified as level three. This implies two problems: the system can overestimate or underestimate a patient's urgency. Underestimation may let a patient's condition deteriorate rapidly, while overestimation wastes limited medical resources, so a new triage system is needed to optimize the use of those resources. Pain intensity is one indicator of a patient's state, so this thesis focuses on predicting each patient's pain intensity. In current clinical practice, pain intensity is defined by the Visual Analogue Scale; because the score is self-reported, it is highly subjective and leads to large discrepancies in pain assessment across patients. To address this, we design an automatic pain intensity estimation system built on an end-to-end trained deep model that takes patient video as input and adds a feature distance-ordering loss during training. The model first extracts features from the video; these features are used not only to predict pain intensity but also to compute the distance-ordering loss. We design this loss around the ordinal nature of pain intensity: it orders features by their distance to each intensity level. Experiments show that our method outperforms previous methods on several metrics, including mean squared error, mean absolute error, the intraclass correlation coefficient, and the Pearson product-moment correlation coefficient. The ablation study further shows how our method improves pain intensity prediction. [zh_TW]
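The abstract describes the distance-ordering loss only at a high level. As a rough illustration (not the thesis's actual implementation), the PyTorch sketch below shows one way such a loss could look: each clip embedding is compared against one learnable prototype per pain level, and a level that is ordinally farther from the clip's true label must also be farther in feature space. All names (distance_ordering_loss, level_prototypes, margin) are hypothetical, and PyTorch is an assumed framework choice.

```python
# A minimal sketch of an ordinal distance-ordering loss, reconstructed from
# the abstract's description; not the thesis code. Names are illustrative.
import torch
import torch.nn.functional as F

def distance_ordering_loss(features, labels, level_prototypes, margin=0.1):
    """features: (B, D) clip embeddings from the backbone.
    labels: (B,) long tensor of pain levels.
    level_prototypes: (L, D) one learnable prototype per pain level."""
    dists = torch.cdist(features, level_prototypes)            # (B, L)
    levels = torch.arange(level_prototypes.size(0), device=features.device)
    # Ordinal gap between each sample's true level and every level: (B, L)
    gaps = (labels.unsqueeze(1) - levels.unsqueeze(0)).abs().float()
    loss = features.new_zeros(())
    for j in range(len(levels)):
        for k in range(len(levels)):
            closer = gaps[:, j] < gaps[:, k]   # level j ordinally closer than k
            if closer.any():
                # Require d(feature, proto_k) >= d(feature, proto_j) plus a
                # margin that grows with the difference in ordinal gap.
                violation = F.relu(
                    dists[closer, j] - dists[closer, k]
                    + margin * (gaps[closer, k] - gaps[closer, j])
                )
                loss = loss + violation.mean()
    return loss / (len(levels) ** 2)
```

The O(L^2) loop over level pairs is cheap because the number of pain levels is small; the thesis's weighted pushing loss and distance-based triplet loss (Sections 3.3.3 and 3.3.4 of the outline below) presumably refine this basic ordering constraint.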
dc.description.provenance: Made available in DSpace on 2022-11-25T03:04:30Z (GMT). No. of bitstreams: 1. U0001-2607202116503300.pdf: 21265931 bytes, checksum: 422ba93d9fbe932bc33017fe865dd01f (MD5). Previous issue date: 2021. [en]
dc.description.tableofcontents:
Verification Letter from the Oral Examination Committee
致謝 (Acknowledgements)
中文摘要 (Chinese Abstract)
ABSTRACT
CONTENTS
LIST OF FIGURES
LIST OF TABLES
Chapter 1 Introduction
  1.1 Motivation
  1.2 Related Work
    1.2.1 Spatiotemporal Neural Network
    1.2.2 Deep Metric Learning
    1.2.3 Pain Intensity Estimation
  1.3 Contribution
  1.4 Thesis Organization
Chapter 2 Preliminaries
  2.1 Deep Neural Network
    2.1.1 Neural Network
    2.1.2 Activation Function
    2.1.3 Optimizer
  2.2 Convolutional Neural Network
    2.2.1 Convolutional Layers
    2.2.2 Pooling Layers
    2.2.3 AlexNet
    2.2.4 Residual Net
    2.2.5 3D Residual Net
  2.3 Recurrent Neural Network
  2.4 Pain Intensity
    2.4.1 Prkachin and Solomon Pain Intensity Scale
    2.4.2 Visual Analog Scale
Chapter 3 Method
  3.1 System Overview
  3.2 Network Architecture Design
    3.2.1 Preprocessing
    3.2.2 Backbone Network
  3.3 Distance Ordering
    3.3.1 Feature Pool
    3.3.2 Feature Gathering
    3.3.3 Weighted Pushing Loss
    3.3.4 Distance-based Triplet Loss
  3.4 Training and Inference
Chapter 4 Experiments
  4.1 Experimental Setup
    4.1.1 Datasets
    4.1.2 Evaluation Metrics
    4.1.3 Implementation Details
  4.2 Ablation Study
    4.2.1 Face Alignment
    4.2.2 Action Units
    4.2.3 Feature Gathering
    4.2.4 Analysis of Different Sequence Lengths
    4.2.5 Analysis of Weighted Pushing Loss
    4.2.6 Analysis of Distance-based Triplet Loss
    4.2.7 The Effect of Different Components
    4.2.8 The t-SNE Visualization of Different Components
  4.3 Experimental Results on the UNBC-McMaster Dataset
  4.4 Experimental Results on Age Estimation
    4.4.1 Comparison with Other Methods on AFAD
    4.4.2 Comparison with Other Methods on FGNet
  4.5 Experimental Results on the NTUH-ED Dataset
    4.5.1 NTUH-ED Dataset
    4.5.2 Analysis of the Triage Process
    4.5.3 Distance Ordering on the NTUH-ED Dataset
    4.5.4 Analysis of Sampling Strategies
Chapter 5 Conclusion
REFERENCE
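The abstract evaluates with mean squared error, mean absolute error, the intraclass correlation coefficient, and the Pearson product-moment correlation coefficient (Section 4.1.2 above). As a reference point only, the NumPy sketch below uses standard textbook formulations; the ICC shown is the common two-way consistency variant ICC(3,1), which may differ from the exact variant used in the thesis.

```python
# Standard formulations of the four metrics named in the abstract; a sketch,
# not the thesis's evaluation code. ICC(3,1) is an assumed variant.
import numpy as np

def regression_metrics(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)          # mean squared error
    mae = np.mean(np.abs(y_true - y_pred))         # mean absolute error
    pcc = np.corrcoef(y_true, y_pred)[0, 1]        # Pearson correlation
    # ICC(3,1): treat truth and prediction as two "raters" over n subjects
    # and compute the two-way consistency intraclass correlation.
    data = np.stack([y_true, y_pred], axis=1)      # shape (n, k) with k = 2
    n, k = data.shape
    subject_means = data.mean(axis=1)
    rater_means = data.mean(axis=0)
    grand = data.mean()
    ms_subjects = k * np.sum((subject_means - grand) ** 2) / (n - 1)
    ss_resid = np.sum(
        (data - subject_means[:, None] - rater_means[None, :] + grand) ** 2
    )
    ms_resid = ss_resid / ((n - 1) * (k - 1))
    icc = (ms_subjects - ms_resid) / (ms_subjects + (k - 1) * ms_resid)
    return {"MSE": mse, "MAE": mae, "PCC": pcc, "ICC": icc}
```

For example, regression_metrics([0, 2, 5, 8], [1, 2, 4, 9]) returns all four scores for a toy set of VAS predictions.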
dc.language.iso: en
dc.subject: 深度學習 (Deep Learning) [zh_TW]
dc.subject: 疼痛指數 (Pain Intensity) [zh_TW]
dc.subject: 臉部表情 (Facial Expression) [zh_TW]
dc.subject: 影像識別 (Video Recognition) [zh_TW]
dc.subject: 視覺類比量表 (Visual Analogue Scale) [zh_TW]
dc.subject: Facial Expression [en]
dc.subject: Pain [en]
dc.subject: Visual Analogue Scale [en]
dc.subject: Deep learning [en]
dc.subject: Video Recognition [en]
dc.title: 具有距離排序機制的深度監督式度量學習之基於時空間序列的疼痛強度估測系統 (A spatiotemporal-sequence-based pain intensity estimation system using deep supervised metric learning with a distance-ordering mechanism) [zh_TW]
dc.title: Distance Ordering: A Deep Supervised Metric Learning for Pain Intensity Estimation via SpatioTemporal Sequence [en]
dc.date.schoolyear: 109-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 陳祝嵩 (Hsin-Tsai Liu), 張智星 (Chih-Yang Tseng), 黃建華, 蔡居霖
dc.subject.keyword: 疼痛指數, 視覺類比量表, 深度學習, 影像識別, 臉部表情 (Pain Intensity, Visual Analogue Scale, Deep Learning, Video Recognition, Facial Expression) [zh_TW]
dc.subject.keyword: Pain, Visual Analogue Scale, Deep learning, Video Recognition, Facial Expression [en]
dc.relation.page: 70
dc.identifier.doi: 10.6342/NTU202101765
dc.rights.note: 同意授權(全球公開) (authorized for worldwide open access)
dc.date.accepted: 2021-08-19
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
dc.date.embargo-lift: 2024-09-01
Appears in Collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in This Item:
U0001-2607202116503300.pdf (20.77 MB, Adobe PDF)
