NTU Theses and Dissertations Repository

Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/78579
Full metadata record:

dc.contributor.advisor: 黃漢邦 [zh_TW]
dc.contributor.advisor: Han-Pang Huang [en]
dc.contributor.author: 王榆昇 [zh_TW]
dc.contributor.author: Yu-Sheng Wang [en]
dc.date.accessioned: 2021-07-11T15:05:13Z
dc.date.available: 2024-08-16
dc.date.copyright: 2019-08-23
dc.date.issued: 2019
dc.date.submitted: 2002-01-01
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/78579
dc.description.abstract: Human eyes and emotions are powerful non-verbal communication tools: besides expressing a person's interest, attention, and intentions, they are also used to manage face-to-face interaction. In the field of human-robot interaction, however, few applications actually integrate gaze and emotion. This thesis builds an intelligent system that automatically detects, senses, and understands human gaze and emotions and responds with corresponding actions, in order to improve the smoothness of human-robot interaction and make robot behavior more human-like. Moreover, by recognizing human emotions, the robot gains greater autonomy to offer timely help to people.
The system, mounted on a wheeled robot developed in our laboratory, first captures the target's facial image, super-resolves the incoming image with a generative adversarial network, recognizes gaze and emotion with convolutional neural networks, and finally estimates the target's intention with an incremental hidden Markov model and takes the corresponding action. [zh_TW]
dc.description.abstract: Human eyes and emotions represent strong non-verbal communication tools. They not only give cues about people's interest, attention, and intentions, but also manage several kinds of social face-to-face interaction. However, only a few applications in the field of human-robot interaction (HRI) take human gaze and emotions into account. This thesis aims to create a comprehensive intelligent system that automatically senses, understands, and reacts to human eye gaze and emotions, in order both to improve HRI smoothness and to make robots behave in a more human-like way. Moreover, by identifying emotional information, robots might be able to automatically offer help to people in need.
The online system, mounted on a mobile robot, detects human faces in 2D images, improves the facial input with a super-resolution algorithm based on a generative adversarial network (GAN), recognizes gaze and emotional behavior with convolutional neural networks (CNN), and uses a novel incremental coupled hidden Markov model (iCHMM) to estimate the intention of the people interacting with the robot. Finally, with the proposed interaction model, the robot can also act according to its own intentions. [en]
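The abstracts above outline a four-stage pipeline: face detection on 2D images, GAN-based super-resolution of the face crop, CNN emotion and gaze recognition, and intention estimation with an incremental coupled hidden Markov model. The Python sketch below illustrates only that data flow, with stand-ins throughout: OpenCV's stock Haar cascade replaces the thesis's detector, bicubic upscaling replaces the SRWGAN, a dummy softmax replaces the emotion CNN, and a plain two-state HMM forward update replaces the iCHMM. The label set and all names (process_frame, update_intention) are illustrative, not taken from the thesis.

```python
# Hypothetical sketch of the sensing pipeline described in the abstract:
# face detection -> face super-resolution -> emotion recognition -> intention
# estimate. Every stage here is a placeholder, not the thesis's implementation.
import cv2
import numpy as np

# Illustrative label set, loosely echoing the thesis's FER-2013-plus-interest classes.
EMOTIONS = ["neutral", "happy", "sad", "surprise", "interest"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def super_resolve(face, scale=4):
    # Placeholder for the GAN-based super-resolution stage (SRWGAN in the
    # thesis): plain bicubic upscaling keeps the sketch self-contained.
    h, w = face.shape[:2]
    return cv2.resize(face, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)

def classify_emotion(face):
    # Placeholder for the CNN emotion classifier: returns a dummy softmax
    # over EMOTIONS instead of real network outputs.
    logits = np.random.randn(len(EMOTIONS))
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy two-state intention model ("engaged" vs. "not engaged") with a standard
# HMM forward update; the thesis instead couples gaze and emotion chains in an
# incremental CHMM.
A = np.array([[0.9, 0.1], [0.2, 0.8]])  # state transition probabilities
belief = np.array([0.5, 0.5])           # prior over intention states

def update_intention(belief, p_obs_given_state):
    # One forward step: predict through the transitions, weight by the
    # observation likelihoods, renormalize.
    belief = (belief @ A) * p_obs_given_state
    return belief / belief.sum()

def process_frame(frame):
    global belief
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        face = super_resolve(frame[y:y + h, x:x + w])
        p_emotion = classify_emotion(face)
        # Assume "happy"/"interest" evidence favors the "engaged" state.
        p_engaged = p_emotion[1] + p_emotion[4]
        belief = update_intention(belief, np.array([p_engaged, 1 - p_engaged]))
    return belief

if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
    print(process_frame(frame))
```

Note that the thesis's model couples separate gaze and emotion observation chains rather than collapsing them into the single likelihood used here.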
dc.description.provenance: Made available in DSpace on 2021-07-11T15:05:13Z (GMT). No. of bitstreams: 1. ntu-108-R06522828-1.pdf: 3624131 bytes, checksum: 544594722dbbeee6084d54b97c8fe79f (MD5). Previous issue date: 2019 [en]
dc.description.tableofcontents:
Acknowledgements i
Abstract (in Chinese) iii
Abstract v
List of Tables xi
List of Figures xiii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Contributions 3
1.3 Organization of Thesis 5
Chapter 2 Social Gaze and Emotion in Interaction 9
2.1 Basic Emotions, Interest and Novelty 12
2.2 Emotions in Interactions 14
2.3 Social Gaze in Interaction 18
2.4 Intention Expressed by Gaze and Emotions 22
Chapter 3 Sensory Systems 23
3.1 3D Reference System 26
3.2 People Detection 27
3.3 Eye Trackers 29
3.4 Face Image Super-Resolution 32
3.4.1 Generative Adversarial Network (GAN) 32
3.4.2 Wasserstein GAN (WGAN) 37
3.4.3 Super-Resolution GAN (SRGAN) 38
3.4.4 SRWGAN 42
3.5 Emotion Classification 44
Chapter 4 Interaction Model 47
4.1 Hidden Markov Model 48
4.1.1 Basic Notations and Problems 49
4.1.2 Solution to Problem 1 51
4.1.3 Solution to Problem 2 55
4.1.4 Solution to Problem 3 58
4.1.5 Scaling 60
4.1.6 Continuous Observation Densities 63
4.2 Coupled Hidden Markov Model (CHMM) 64
4.3 Incremental CHMM (iCHMM) 65
4.4 Overall System Architecture 67
Chapter 5 Deployment and Experiments 69
5.1 Hardware Platform 69
5.2 Implementation 71
5.2.1 FER-2013 and Interest 71
5.2.2 SRWGAN Model and Emotion Classifier Training 72
5.2.3 Design of Human Interaction Model 74
5.2.4 Additional Softwares and GUI 76
5.3 Results 79
5.3.1 Super-Resolution and Emotion Classification 79
5.3.2 Human Interaction Model 85
Chapter 6 Conclusions and Future Works 91
6.1 Conclusions 91
6.2 Future Works 92
References 93
dc.language.iso: en
dc.subject: 影像超解析 (image super-resolution) [zh_TW]
dc.subject: 人類意圖 (human intention) [zh_TW]
dc.subject: 機器視覺 (robotic vision) [zh_TW]
dc.subject: 情緒辨識 (emotion recognition) [zh_TW]
dc.subject: 人機互動 (human-robot interaction) [zh_TW]
dc.subject: Human Intention [en]
dc.subject: Robotic Vision [en]
dc.subject: Human Behavior Understanding [en]
dc.subject: HRI [en]
dc.title: 運用情緒和眼神建立人機互動 [zh_TW]
dc.title: Human-Robot Interaction through Emotions and Eyes Gaze [en]
dc.type: Thesis
dc.date.schoolyear: 107-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 傅楸善;周瑞仁;劉益宏 [zh_TW]
dc.contributor.oralexamcommittee: Chiou-Shann Fuh;Jui-Jen Chou;Yi-Hung Liu [en]
dc.subject.keyword: 人機互動 (HRI), 人類意圖 (human intention), 機器視覺 (robotic vision), 情緒辨識 (emotion recognition), 影像超解析 (image super-resolution) [zh_TW]
dc.subject.keyword: HRI, Human Behavior Understanding, Human Intention, Robotic Vision [en]
dc.relation.page: 99
dc.identifier.doi: 10.6342/NTU201903718
dc.rights.note: 未授權 (not authorized for public access)
dc.date.accepted: 2019-08-15
dc.contributor.author-college: 工學院 (College of Engineering)
dc.contributor.author-dept: 機械工程學系 (Department of Mechanical Engineering)
dc.date.embargo-lift: 2024-08-23
Appears in Collections: 機械工程學系 (Department of Mechanical Engineering)

Files in this item:

File: ntu-107-2.pdf (not authorized for public access)
Size: 3.54 MB
Format: Adobe PDF

