Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84552
Full metadata record
DC field: value (language)
dc.contributor.advisor: 傅立成 (Li-Chen Fu)
dc.contributor.author: Yi-Cheng Yang (en)
dc.contributor.author: 楊逸崢 (zh_TW)
dc.date.accessioned: 2023-03-19T22:15:22Z
dc.date.copyright: 2022-10-19
dc.date.issued: 2022
dc.date.submitted: 2022-09-23
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84552
dc.description.abstract: 在經常人滿為患的醫院急診中,時間是最重要的資源之一,而為了有效運用黃金時間,判別病患緊急程度的檢傷流程則成為關鍵的一環。要如何在加速檢傷流程的同時維持判斷的準確、客觀一直是困難的課題,因此在本研究中,我們專注在與電子化檢傷相關的任務,期望一個完善的自動化系統最終能提高醫療資源使用的效率。在台灣急診檢傷流程中,疼痛指數為其中一項重要指標,而人們在大多數情況下會將疼痛反映在臉部表情上,故我們著眼於電腦視覺方法,建立基於臉部影像之深度學習模型。在現行醫療體系下,疼痛指數為病患根據視覺類比表自行評估,然而其數值具有較強的主觀性,且該疼痛指數為序列級別的標籤。在影像分析任務上,相較於逐幀標註的指標,存在資料標註不精確的問題。更具體地說,患者檢傷過程中表情並非維持不變,例如疼痛類型為間歇性疼痛之患者,如何從檢傷影片中找出含有疼痛表現的片段十分關鍵。在我們的實驗中,我們提出了具有事例評價機制的深度學習模型。我們將較短的影片片段稱為事例,並輸入模型,藉由多事例學習訓練模型產生評價分數,以此尋找可能的關鍵畫面,最終改善模型表現與可解釋性。總結來說,我們將建構一個可用於實際場域的AI輔助系統作為目標,設計了一個支援線上預測的疼痛指數預測系統。 (zh_TW)
dc.description.abstract: In hospital emergency departments (EDs), which are often overcrowded, time is one of the most valuable resources. To make effective use of the golden time, the triage process that estimates the urgency of each patient is extremely crucial. Accelerating triage while keeping its judgments accurate and objective has always been a dilemma. In this research, we therefore focus on tasks related to an automatic triage system, in the hope that such a system can improve the efficiency of medical resource utilization. Since pain level is one of the major indicators in the Taiwanese triage process, and people usually reflect pain in their facial expressions, we build a deep learning model over facial videos using computer vision methods. In the current medical system, the commonly used pain metric is the Visual Analog Scale (VAS), which is typically obtained through patient self-report. However, VAS is a subjective, sequence-level metric; compared with frame-level labels, such sequence- or video-level annotations are inexact. More specifically, a patient's facial expression may change dramatically during triage, so for patients suffering from intermittent pain, recognizing when and for how long painful expressions occur is essential for pain intensity estimation. In this thesis, short video clips are treated as instances and fed to the model. Through our proposed multiple-instance learning approaches, the model learns to appraise the value of each instance, and the resulting instance scores improve both the performance and the interpretability of our pain level assessment model. Finally, because we aim to deploy the system in real clinical settings, we also provide an online inference framework for pain level estimation. (en)
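The abstract describes the instance-appraisal idea only at a high level. For illustration, the following minimal PyTorch sketch shows one generic way clip-level instance scores can be learned and pooled into a single sequence-level prediction; the MLP encoder, feature dimension, softmax pooling, and all names here are assumptions of this sketch, not the architecture proposed in the thesis.

```python
# Minimal multiple-instance learning sketch (illustrative only, not the
# thesis's model): score each clip-level "instance" of a triage video and
# pool the instances into one sequence-level pain estimate.
import torch
import torch.nn as nn


class MILPainRegressor(nn.Module):
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        # Placeholder per-instance encoder; the thesis uses a 3D-CNN backbone
        # over raw clips, replaced here by an MLP over precomputed clip
        # features to keep the sketch self-contained.
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        # Instance-appraisal head: one scalar score per instance.
        self.score = nn.Linear(128, 1)
        # Sequence-level regression head (e.g., a VAS-like value).
        self.regressor = nn.Linear(128, 1)

    def forward(self, bag: torch.Tensor):
        # bag: (num_instances, feat_dim), all clips from one triage video.
        h = self.encoder(bag)                   # (N, 128)
        scores = self.score(h)                  # (N, 1) raw instance scores
        weights = torch.softmax(scores, dim=0)  # normalize across instances
        pooled = (weights * h).sum(dim=0)       # attention-style pooling
        vas_pred = self.regressor(pooled).squeeze(-1)
        return vas_pred, scores.squeeze(-1)     # sequence estimate + appraisals


if __name__ == "__main__":
    model = MILPainRegressor()
    clips = torch.randn(12, 512)                # 12 hypothetical clip features
    vas, instance_scores = model(clips)
    print(float(vas), instance_scores.shape)    # one VAS value, 12 scores
```

The returned per-instance scores are what makes this kind of model inspectable: clips with the highest scores indicate where painful expressions were likely found in the sequence.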
dc.description.provenance: Made available in DSpace on 2023-03-19T22:15:22Z (GMT). No. of bitstreams: 1
U0001-0109202216125900.pdf: 9802050 bytes, checksum: a5697ff265ade13b403ec4371cd9bafc (MD5)
Previous issue date: 2022 (en)
dc.description.tableofcontents:
誌謝 i
中文摘要 ii
ABSTRACT iii
CONTENTS v
LIST OF FIGURES viii
LIST OF TABLES x
Chapter 1 Introduction 1
  1.1 Task Description 1
  1.2 Motivation 3
  1.3 Related Work 6
    1.3.1 Pain Intensity Estimation 7
    1.3.2 Multiple-instance Learning 8
  1.4 Contribution 9
  1.5 Thesis Organization 10
Chapter 2 Preliminaries 12
  2.1 Convolutional Neural Network 12
    2.1.1 Convolutional Layers 13
    2.1.2 Pooling Layers 15
    2.1.3 Activation Function 16
    2.1.4 Residual Net 17
    2.1.5 3D Residual Net 17
    2.1.6 Siamese Network 18
  2.2 Inexact Supervision 20
    2.2.1 Weakly Supervised Learning 20
    2.2.2 Multiple Instance Learning 20
  2.3 Pain Intensity Metrics 21
    2.3.1 Prkachin and Solomon Pain Intensity Scale 22
    2.3.2 Visual Analog Scale 23
Chapter 3 MIL Models 25
  3.1 System Overview 25
  3.2 Network Architecture Design 27
    3.2.1 Preprocessing 27
    3.2.2 Backbone Network 27
  3.3 Multiple Instance Learning 29
    3.3.1 Problem Definition 30
    3.3.2 Instance Score 30
    3.3.3 Label-based Multiple Instance Learning 31
    3.3.4 Uncertainty-based Multiple Instance Learning 33
    3.3.5 Data sampling via Instance Score Table 35
    3.3.6 Combination of LB-MIL and UB-MIL 37
  3.4 Local Attention Embedding 38
  3.5 Training and Inference 39
Chapter 4 Experiments 41
  4.1 Datasets 41
    4.1.1 UNBC-McMaster shoulder pain expression archive database 41
    4.1.2 NTUH-ED Dataset 44
  4.2 Implementation Details 45
  4.3 Evaluation Metrics 47
  4.4 Post Process via Instance Score 49
  4.5 Experimental Result on UNBC-McMaster Dataset 50
    4.5.1 Quantitative Results 50
    4.5.2 Analysis of Loss Weights 52
    4.5.3 Analysis of Multiple Instance Learning Approaches 53
    4.5.4 Analysis of different length of instance 54
    4.5.5 Analysis of data sampling via Instance Score Table 56
    4.5.6 Pain Localization via Instance-level Prediction 58
  4.6 Experimental Result on NTUH-ED Dataset 59
    4.6.1 The Regression and Classification Loss Approach 59
    4.6.2 Ablation Study 60
    4.6.3 Pain Localization via Instance-level Prediction 61
    4.6.4 Automatic Pain Intensity Estimation System in NTUH 62
  4.7 Discussion and Future Work 64
Chapter 5 Conclusion 66
REFERENCE 68
dc.language.iso: en
dc.title: 可評價事例之深度學習模型應用於臉部連續影像的疼痛指數預估 (zh_TW)
dc.title: Instance Appraisable Deep Learning Model for Sequence-level Pain Intensity Estimation via Facial Videos (en)
dc.type: Thesis
dc.date.schoolyear: 110-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 黃建華 (Chien-Hua Huang), 蔡居霖 (Chu-Lin Tsai), 張智星 (Jyh-Shing Jang), 王鈺強 (Yu-Chiang Wang)
dc.subject.keyword: 疼痛指數, 深度學習, 電腦視覺, 多事例學習, 臉部表情 (zh_TW)
dc.subject.keyword: Pain Level, Deep Learning, Computer Vision, Multiple Instance Learning, Facial Expression (en)
dc.relation.page: 78
dc.identifier.doi: 10.6342/NTU202203069
dc.rights.note: 同意授權(限校園內公開) (authorized; campus access only)
dc.date.accepted: 2022-09-23
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) (zh_TW)
dc.contributor.author-dept: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia) (zh_TW)
dc.date.embargo-lift: 2025-09-19
Appears in Collections: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)

Files in This Item:
File: U0001-0109202216125900.pdf (not currently authorized for public access)
Size: 9.57 MB
Format: Adobe PDF