NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71438
Full metadata record (shown below as "DC field: value [language]")
dc.contributor.advisor: 傅立成 (Li-Chen Fu)
dc.contributor.author: Lingfeng Zhou [en]
dc.contributor.author: 周靈風 [zh_TW]
dc.date.accessioned: 2021-06-17T06:00:44Z
dc.date.available: 2024-02-18
dc.date.copyright: 2021-02-26
dc.date.issued: 2021
dc.date.submitted: 2021-02-19
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71438
dc.description.abstract: Pain expression is an important indicator of an emergency patient's current condition. Accurate and effective recognition of pain intensity allows medical staff to treat patients of differing urgency appropriately and precisely, and an automatic pain recognition system can reduce costs and ease staffing shortages. With the development of deep neural networks, methods based on deep learning and computer vision offer a potential solution for automated pain assessment. Traditional recognition methods tend to extract features from individual frames and ignore the correlations between frames across a video, which can lead to inaccurate predictions. Moreover, unlike other pain intensity recognition methods that use conventional recurrent neural networks, this thesis argues that training should not rely solely on features extracted from the face: because the differences between pain levels are subtle, the network must be designed to focus on facial details in order to distinguish pain of varying intensity. Our study first extracts key landmarks of the patient's face located by the Active Appearance Model algorithm, then feeds the preprocessed faces in sequence into a customized residual convolutional neural network (CNN) with an attention mechanism, while each face's Action Units are fed into a separate densely connected convolutional network to learn the corresponding features. The features extracted by the CNNs are concatenated with the facial landmarks, and these per-frame combinations for an entire video sequence are passed to a Transformer network to predict the pain intensity level. Our method achieves 86.5% accuracy on the UNBC-McMaster Shoulder Pain dataset, outperforming other related methods in the field. This thesis is the first to propose combining landmarks and Action Units with raw-image features as the input sequence and using two different customized convolutional neural networks plus one sequence network to estimate different pain intensities. (translated from zh_TW)
dc.description.abstract: Pain expression is an indicator of a patient's current condition. Accurate and effective recognition of pain intensity is essential for medical personnel to treat and care for patients properly. The cost of continuous manual observation and the shortage of medical staff call for automatic pain recognition, and the development of deep neural networks offers great potential for it. Unlike other state-of-the-art pain intensity recognition methods that use conventional recurrent neural networks, in this thesis we suggest not only that key landmarks of the face be used in training but also that the network focus more on facial details to enhance performance. This study first uses a Point Distribution Model to extract key landmarks of the patient's face, then feeds the sequence of preprocessed facial images into a customized Convolutional Neural Network (CNN) with an attention mechanism to extract features, and feeds the related Action Units into another densely connected network to learn complementary features. The concatenation of the network features and the landmarks is passed to a Transformer network to predict the pain intensity level. Our proposed method is trained and tested on the UNBC-McMaster Shoulder Pain Expression Archive Database and reaches promising performance. This thesis is the first to propose combining landmarks and Action Units with raw images and using a Transformer network for pain intensity prediction on facial image sequences. [en]
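To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of the same idea: a residual-style face CNN with channel attention, a small dense branch for Action Unit activations, concatenation with the 2-D landmark coordinates, and a Transformer encoder over the frame sequence. Everything here is an illustrative assumption, not the thesis's actual implementation; the module names (FaceBranch, PainModel), the layer sizes, and the counts of landmarks (66), Action Units (10), and intensity levels (4) are placeholders.

# Minimal sketch of a hybrid multimodal pain-intensity model (assumptions noted above).
import torch
import torch.nn as nn

class FaceBranch(nn.Module):
    """Small CNN with squeeze-and-excitation-style channel attention;
    a stand-in for the customized attention CNN over aligned face crops."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = nn.Sequential(  # channel-attention weights in [0, 1]
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 16), nn.ReLU(),
            nn.Linear(16, 64), nn.Sigmoid(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, out_dim))

    def forward(self, x):                       # x: (B*T, 3, H, W)
        f = self.conv(x)
        w = self.attn(f).unsqueeze(-1).unsqueeze(-1)
        return self.head(f * w)                 # (B*T, out_dim)

class PainModel(nn.Module):
    def __init__(self, n_landmarks=66, n_aus=10, n_levels=4, d_model=256):
        super().__init__()
        self.face = FaceBranch(out_dim=128)
        self.au = nn.Sequential(nn.Linear(n_aus, 64), nn.ReLU(),
                                nn.Linear(64, 64))   # dense AU branch
        self.proj = nn.Linear(128 + 64 + 2 * n_landmarks, d_model)
        enc = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(enc, num_layers=2)
        self.cls = nn.Linear(d_model, n_levels)

    def forward(self, faces, aus, landmarks):
        # faces: (B, T, 3, H, W); aus: (B, T, n_aus); landmarks: (B, T, 2*n_landmarks)
        B, T = faces.shape[:2]
        face_feat = self.face(faces.flatten(0, 1)).view(B, T, -1)
        au_feat = self.au(aus)
        tokens = self.proj(torch.cat([face_feat, au_feat, landmarks], dim=-1))
        seq = self.temporal(tokens)             # temporal context across frames
        return self.cls(seq)                    # (B, T, n_levels) per-frame logits

# Smoke test with random data: 2 clips of 8 frames each.
model = PainModel()
logits = model(torch.randn(2, 8, 3, 64, 64), torch.randn(2, 8, 10),
               torch.randn(2, 8, 132))
print(logits.shape)  # torch.Size([2, 8, 4])

The sketch emits per-frame logits, matching the frame-level intensity labeling used by the UNBC-McMaster data; pooling the Transformer outputs over time would instead yield a single sequence-level estimate.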
dc.description.provenance: Made available in DSpace on 2021-06-17T06:00:44Z (GMT). No. of bitstreams: 1. U0001-1802202116515400.pdf: 3780853 bytes, checksum: 2a99e9f7d645a5999a40ff76e286d5c6 (MD5). Previous issue date: 2021 [en]
dc.description.tableofcontents:
Thesis Committee Certification i
Acknowledgements ii
Chinese Abstract (摘要) iii
Abstract iv
List of Figures viii
List of Tables x
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Related Works 4
1.2.1 Binary Prediction Solution 5
1.2.2 Multi-Level Prediction Solution 5
1.3 Objective and Contribution 6
1.4 Thesis Organization 9
Chapter 2 Preliminaries 10
2.1 Pain Intensity Scales 10
2.1.1 Facial Action Coding System (FACS) 10
2.1.2 Prkachin and Solomon Pain Intensity (PSPI) 13
2.2 Facial Landmark Detection 14
2.2.1 Active Appearance Model (AAM) 14
2.2.2 HOG with Cascade Classifier 15
2.3 Neural Networks 16
2.3.1 Convolutional Neural Networks 16
2.3.2 Recurrent Neural Network 23
2.3.3 Transformer 26
Chapter 3 Methodology 30
3.1 Landmark Detection 30
3.2 Data Preprocessing 32
3.3 Feature Extraction 33
3.3.1 Convolution with Residuals 33
3.3.2 Attention Mechanism 36
3.3.3 Fine-Tuning 39
3.3.4 Densely Connected Network 42
3.4 Prediction 45
3.4.1 Long Short-Term Memory (LSTM) 46
3.4.2 Transformer Versus Recurrent 49
Chapter 4 Experiments 51
4.1 Configuration 51
4.2 Implementation Details 51
4.3 Pain Dataset 53
4.4 Ablation Studies 57
4.5 Comparison with State-of-the-Art Works 59
Chapter 5 Conclusion and Future Work 61
References 62
dc.language.iso: en
dc.subject: 深度學習 (Deep Learning) [zh_TW]
dc.subject: 注意力機制 (Attention Mechanism) [zh_TW]
dc.subject: 疼痛辨識 (Pain Recognition) [zh_TW]
dc.subject: 多模態 (Multimodality) [zh_TW]
dc.subject: Pain intensity prediction [en]
dc.subject: Deep Learning [en]
dc.subject: Attention Mechanism [en]
dc.subject: Multimodalities [en]
dc.title: 基於時序多模態資料用於疼痛强度辨識的混合深度神經網路 [zh_TW]
dc.title: Hybrid Deep Neural Networks for Pain Intensity Estimation via Temporal Multimodalities [en]
dc.type: Thesis
dc.date.schoolyear: 109-1
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 傅楸善 (Chiou-Shann Fuh), 洪一平 (YP Hung), 蔡居霖 (Chu-Lin Tsai), 蘇木春 (Mu-Chun Su)
dc.subject.keyword: 疼痛辨識, 深度學習, 注意力機制, 多模態 [zh_TW]
dc.subject.keyword: Pain intensity prediction, Deep Learning, Attention Mechanism, Multimodalities [en]
dc.relation.page: 69
dc.identifier.doi: 10.6342/NTU202100747
dc.rights.note: 有償授權 (authorized for a fee)
dc.date.accepted: 2021-02-19
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 電機工程學研究所 (Graduate Institute of Electrical Engineering) [zh_TW]
Appears in collections: 電機工程學系 (Department of Electrical Engineering)

Files in this item:
File | Size | Format
U0001-1802202116515400.pdf (not authorized for public access) | 3.69 MB | Adobe PDF

