Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/18844
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 黃漢邦(Han-Pang Huang) | |
dc.contributor.author | Wei-Zhi Lin | en |
dc.contributor.author | 林威志 | zh_TW |
dc.date.accessioned | 2021-06-08T01:37:43Z | - |
dc.date.copyright | 2020-09-22 | |
dc.date.issued | 2020 | |
dc.date.submitted | 2020-08-19 | |
dc.identifier.citation | References [1] 'Fujitsu Begins Limited Sales of Service Robot 'Enon' for Task Support in Offices and Commercial Establishments,' Fujitsu, 2005. <https://www.fujitsu.com/global/about/resources/news/press-releases/2005/0913-01.html>. Retrieved 22 July 2020. [2] 'Sam, Your Robotic Concierge,' Luvozo, 2020. <https://luvozo.com/>. Retrieved 22 July 2020. [3] 'UCLA Dataset,' Northwestern, 2014. <http://users.eecs.northwestern.edu/~jwa368/my_data.html>. Retrieved 24 October 2018. [4] A. Agrawal and N. K. Mishra, 'Fusion Based Emotion Recognition System,' International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, pp. 727-732, 2016. [5] H. S. Ahn, M. H. Lee, and B. A. MacDonald, 'Healthcare Robot Systems for a Hospital Environment: Carebot and Receptionbot,' Proceeding of 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Kobe, Japan, pp. 571-576, 2015. [6] D. Akimov and R. D. Atkinson, 'Robot Assisted Sensing, Control and Manufacture in Automobile Industry,' The Journal of IoT in Social, Mobile, Analytics, and Cloud, Vol. 1, No. 3, pp. 180-187, 2019. [7] A. Aristidou, P. Charalambous, and Y. Chrysanthou, 'Emotion Analysis and Classification: Understanding the Performers’ Emotions Using the LMA Entities,' Computer Graphics Forum, Vol. 34, No. 6, pp. 262-276, 2015. [8] C. Bartneck, M. J. Lyons, and M. Saerbeck, 'The Relationship between Emotion Models and Artificial Intelligence,' Proceeding of the SAB2008 Workshop on The Role of Emotion in Adaptive Behavior and Cognitive Robotics, Osaka, Japan, pp. 1-12, 2008. [9] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri, 'Actions as Space-Time Shapes,' Proceeding of the 10th IEEE International Conference on Computer Vision, Beijing, China, Vol. 2, pp. 1395-1402, 2005. [10] J.-D. Boucher, U. Pattacini, A. Lelong, G. Bailly, F. Elisei, S. Fagel, P. F. Dominey, and J. 
Ventre-Dominey, 'I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation,' Frontiers in Neurorobotics, Vol. 6, No. 3, pp. 1-11, 2012. [11] C. Breazeal, 'Social Interactions in HRI: The Robot View,' IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), Vol. 32, No. 2, pp. 181-186, 2004. [12] C. Breazeal, C. D. Kidd, A. L. Thomaz, G. Hoffman, and M. Berlin, 'Effects of Nonverbal Communication on Efficiency and Robustness in Human-Robot Teamwork,' Proceeding of IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, Alta., Canada, 2005. [13] Z. Cao, T. Simon, S. Wei, and Y. Sheikh, 'Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields,' IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, pp. 1302-1310, 2017. [14] Z. Cheng, L. Qin, Y. Ye, Q. Huang, and Q. Tian, 'Human Daily Action Analysis with Multi-View and Color-Depth Data,' Proceeding of European Conference on Computer Vision Workshops and Demonstrations, Florence, Italy, pp. 52-61, 2012. [15] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, 'Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling,' Proceeding of Neural Information Processing Systems: Deep Learning and Representation Learning Workshop, Montréal, QC, Canada, pp. 1-9, 2014. [16] S. Y. Chung, 'Spatial Understanding and Motion Planning for a Mobile Robot,' Department of Mechanical Engineering, National Taiwan University, 2010. [17] S. Y. Chung and H. P. Huang, 'Robot Motion Planning in Dynamic Uncertain Environments,' Advanced Robotics, Vol. 25, pp. 849-870, 2011. [18] H. H. Clark and S. E. Brennan, 'Grounding in Communication,' in Perspectives on Socially Shared Cognition, 1991, pp. 127-149. [19] M. Daoudi, S. Berretti, P. Pala, Y. Delevoye, and A. D. 
Bimbo, 'Emotion Recognition by Body Movement Representation on the Manifold of Symmetric Positive Definite Matrices,' Proceeding of International Conference on Image Analysis and Processing, Springer, Cham, pp. 550-560, 2017. [20] S. Deshmukh, M. Patwardhan, and A. Mahajan, 'Survey on Real-Time Facial Expression Recognition Techniques,' IET Biometrics, Vol. 5, No. 3, pp. 155 - 163, 2016. [21] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel, 'Benchmarking Deep Reinforcement Learning for Continuous Control,' Proceeding of International Conference Machine Learning, New York City, NY, USA, pp. 1329-1338, 2016. [22] D. Dukes, F. Clément, and C. Audrin, 'Looking Beyond the Static Face in Emotion Recognition, the Informative Case of Interest,' Visual Cognition, Vol. 25, No. 4, pp. 575-588, July 2017. [23] P. Ekman and D. Cordaro, 'What Is Meant by Calling Emotions Basic,' Emotion Review, Vol. 3, No. 4, pp. 364-370, 2011. [24] P. Ekman and W. V. Friesen, 'Constants across Cultures in the Face and Emotion,' Journal of Personality and Social Psychology, Vol. 17, No. 2, pp. 124-129, 1971. [25] P. Ekman and E. L. Rosenberg, What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (Facs), USA: Oxford University Press, 1997. [26] P. Gaussier, S. Moga, J. P. Banquet, and M. Quoy, 'From Perception-Action Loop to Imitation Processes: A Bottom-up Approach of Learning by Imitation,' Applied Artificial Intelligence, Vol. 12, No. 7, pp. 701-727, 1998. [27] G. Gordon, 'Social Behaviour as an Emergent Property of Embodied Curiosity: A Robotics Perspective,' Philosophical Transactions of the Royal Society B, Vol. 374, No. 1771, pp. 1-7, 2019. [28] Z. M. Griffin and K. Bock, 'What the Eyes Say About Speaking,' Psychological Science, Vol. 11, No. 4, pp. 274-279, August 2000. [29] C. Gu, C. Sun, D. A. Ross, C. Vondrick, C. Pantofaru, Y. Li, S. Vijayanarasimhan, G. Toderici, S. Ricco, R. Sukthankar, C. Schmid, and J. 
Malik, 'Ava: A Video Dataset of Spatio-Temporally Localized Atomic Visual Actions,' Proceeding of IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Utah, USA, pp. 1-15, 2018. [30] H. Gunes, O. Celiktutan, and E. Sariyanidi, 'Live Human–Robot Interactive Public Demonstrations with Automatic Emotion and Personality Prediction,' Philosophical Transactions of the Royal Society B, Vol. 374, No. 1771, pp. 1-8, 2019. [31] H. Gunes and M. Piccardi, 'Affect Recognition from Face and Body: Early Fusion Vs. Late Fusion,' IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, HI, USA, pp. 3437-3443, 2005. [32] H. Gunes, B. Schuller, M. Pantic, and R. Cowie, 'Emotion Representation, Analysis and Synthesis in Continuous Space: A Survey,' Proceeding of IEEE International Conference on Automatic Face Gesture Recognition and Workshops, Santa Barbara, CA, USA, pp. 827-837, 2011. [33] S. Gupta, V. Tolani, J. Davidson, S. Levine, R. Sukthankar, and J. Malik, 'Cognitive Mapping and Planning for Visual Navigation,' International Journal of Computer Vision, Vol. 128, pp. 1311-1330, 2019. [34] E. T. Hall, The Hidden Dimension, Garden City, N.Y.: Doubleday, 1966. [35] E. T. Hall, 'A System for the Notation of Proxemic Behavior,' American Anthropologist, Vol. 65, No. 5, pp. 1003-1026, 1963. [36] H. v. Hasselt, A. Guez, and D. Silver, 'Deep Reinforcement Learning with Double Q-Learning,' Proceeding of Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, Arizona, USA, pp. 2094-2100, 2016. [37] X. Huang, Q. Cao, and X. Zhu, 'Mixed Path Planning for Multi-Robots in Structured Hospital Environment,' The Journal of Engineering, Vol. 14, No. 2, pp. 512 - 516, 2019. [38] N. Justesen, P. Bontrager, J. Togelius, and S. Risi, 'Deep Learning for Video Game Playing,' IEEE Transactions on Games, Vol. 12, No. 1, pp. 1-20, 2019. [39] T. Kanda, D. F. Glas, M. Shiomi, and N. 
Hagita, 'Abstracting People's Trajectories for Social Robots to Proactively Approach Customers,' IEEE Transactions on Robotics, Vol. 25, No. 6, pp. 1382-1396, 2009. [40] H. Kato and S. Kato, 'Emotion Estimation for Humanoid Movements Using Laban’s Features and Random Forests,' IEEE 6th Global Conference on Consumer Electronics (GCCE), Nagoya, Japan, pp. 729-730, 2017. [41] Y. Kato, T. Kanda, and H. Ishiguro, 'May I Help You? - Design of Human-Like Polite Approaching Behavior,' 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Portland, OR, USA, pp. 35-42, 2015. [42] A. Kendon, Conducting Interaction: Patterns of Behavior in Focused Encounters, vol. 7, CUP Archive, 1990. [43] R. Kitchin, 'Cognitive Maps,' in International Encyclopedia of the Social & Behavioral Sciences, N. J. Smelser and P. B. Baltes, Eds., Pergamon, Oxford, UK, 2001, pp. 2120-2124. [44] B. C. Ko, 'A Brief Review of Facial Emotion Recognition Based on Visual Information,' Sensors, Vol. 18, No. 2, pp. 1-20, 2018. [45] S. Koenig and M. Likhachev, 'Fast Replanning for Navigation in Unknown Terrain,' IEEE Transactions on Robotics, Vol. 21, No. 3, pp. 354-363, 2005. [46] R. v. Laban and L. Ullmann, The Mastery of Movement, 3rd Ed., London, UK: Macdonald & Evans, 1971. [47] M. Lagarde, P. Andry, P. Gaussier, S. Boucenna, and L. Hafemeister, 'Proprioception and Imitation: On the Road to Agent Individuation,' Studies in Computational Intelligence, Vol. 264, No. 1, pp. 43-63, 2010. [48] P. Laroque, N. Gaussier, N. Cuperlier, M. Quoy, and P. Gaussier, 'Cognitive Map Plasticity and Imitation Strategies to Improve Individual and Social Behaviors of Autonomous Agents,' Paladyn, Journal of Behavioral Robotics, Vol. 1, No. 1, pp. 25-36, 2010. [49] H. Li, Q. Zhang, and D. Zhao, 'Deep Reinforcement Learning-Based Automatic Exploration for Navigation in Unknown Environment,' IEEE Transactions on Neural Networks and Learning Systems, Vol. 31, No. 6, pp. 2064-2076, 2020. [50] Z. Liu and W. 
Wang, 'A Coherent Semantic Mapping System Based on Parametric Environment Abstraction and 3d Object Localization,' European Conference on Mobile Robots, Barcelona, Spain, pp. 234-239, 2013. [51] B. Luo, D. Liu, and H.-N. Wu, 'Adaptive Constrained Optimal Control Design for Data-Based Nonlinear Discrete-Time Systems with Critic-Only Structure,' IEEE Transactions on Neural Networks and Learning Systems, Vol. 29, No. 6, pp. 2099 - 2111, 2018. [52] M. Marszalek, I. Laptev, and C. Schmid, 'Actions in Context,' Proceeding of IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, pp. 2929–2936, 2010. [53] D. Martinelli, A. L. Sousa, M. E. Augusto, V. C. Kalempa, A. S. d. Oliveira, R. F. Rohrich, and M. A. Teixeira, 'Remote Control for Mobile Robots Using Gestures Captured by the Rgb Camera and Recognized by Deep Learning Techniques,' Proceeding of Latin American Robotics Symposium (LARS), Brazilian Symposium on Robotics (SBR) and Workshop on Robotics in Education (WRE), Rio Grande, Brazil, pp. 98-103, 2019. [54] G. Metta, L. Natale, F. Nori, G. Sandini, D. Vernon, L. Fadiga, C. Hofsten, K. Rosander, M. Lopes, J. Santos-Victor, A. Bernardino, and L. Montesano, 'The Icub Humanoid Robot: An Open-Systems Platform for Research in Cognitive Development,' Neural Networks, Vol. 23, No. 8-9, pp. 1125-1134, 2010. [55] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, 'Human-Level Control through Deep Reinforcement Learning,' Nature, Vol. 518, No. 7540, pp. 529-533, 2015. [56] B. Mutlu, J. Forlizzi, and J. Hodgins, 'A Storytelling Robot: Modeling and Evaluation of Human-Like Gaze Behavior,' Proceeding of 6th IEEE-RAS International Conference on Humanoid Robots, Genova, Italy, pp. 518-523, 2006. [57] A. Nüchter and J. 
Hertzberg, 'Towards Semantic Maps for Mobile Robots,' Robotics and Autonomous Systems, Vol. 56, No. 11, pp. 915-926, 2008. [58] T. Nakata, T. Sato, and T. Mori, 'Expression of Emotion and Intention by Robot Body Movement,' Proceeding of the 5th International Conference on Autonomous Systems, pp. 1-8, 1998. [59] D. Nguyen, K. Nguyen, S. Sridharan, A. Ghasemi, D. Dean, and C. Fookes, 'Deep Spatio-Temporal Features for Multimodal Emotion Recognition,' IEEE Winter Conference on Applications of Computer Vision (WACV), Santa Rosa, CA, USA, pp. 1215-1223, 2017. [60] T. T. Nguyen, N. D. Nguyen, and S. Nahavandi, 'Deep Reinforcement Learning for Multiagent Systems: A Review of Challenges, Solutions, and Applications,' IEEE Transactions on Cybernetics, Vol. Early Access, pp. 1-14, 2020. [61] B. Ni, G. Wang, and P. Moulin, 'RGBD-HuDaAct: A Color-Depth Video Database for Human Daily Activity Recognition,' Proceeding of IEEE International Conference on Computer Vision Workshops, Barcelona, Spain, pp. 1147-1153, 2011. [62] M. A. Nicolaou, H. Gunes, and M. Pantic, 'Continuous Prediction of Spontaneous Affect from Multiple Cues and Modalities in Valence-Arousal Space,' IEEE Transactions on Affective Computing, Vol. 2, No. 2, pp. 92-105, 2011. [63] F. Niroui, K. Zhang, Z. Kashino, and G. Nejat, 'Deep Reinforcement Learning Robot for Search and Rescue Applications: Exploration in Unknown Cluttered Environments,' IEEE Robotics and Automation Letters, Vol. 4, No. 2, pp. 610-617, April 2019. [64] O. Nocentini, L. Fiorini, G. Acerbi, A. Sorrentino, G. Mancioppi, and F. Cavallo, 'A Survey of Behavioral Models for Social Robots,' Robotics, Vol. 8, No. 54, pp. 1-35, July 2019. [65] Y. Noguchi and T. Maki, 'Path Planning Method Based on Artificial Potential Field and Reinforcement Learning for Intervention AUVs,' Proceeding of IEEE Underwater Technology (UT), Kaohsiung, Taiwan, pp. 1-6, 2019. [66] J. O’Keefe and D. H. 
Conway, 'Hippocampal Place Units in the Freely Moving Rat: Why They Fire Where They Fire,' Experimental Brain Research, Vol. 31, No. 4, pp. 573-590, 1978. [67] J. O’Keefe and L. Nadel, The Hippocampus as a Cognitive Map, Oxford, UK: Oxford University Press, 1978. [68] A. K. Pandey and R. Gelin, 'A Mass-Produced Sociable Humanoid Robot: Pepper: The First Machine of Its Kind,' IEEE Robotics Automation Magazine, Vol. 25, No. 3, pp. 40-48, 2018. [69] A. I. Panov, K. S. Yakovlev, and R. Suvorov, 'Grid Path Planning with Deep Reinforcement Learning: Preliminary Results,' Procedia Computer Science, Vol. 123, pp. 347-353, 2018. [70] A. Parmiggiani, M. Maggiali, L. Natale, F. Nori, A. Schmitz, N. G. Tsagarakis, J. Santos-Victor, F. Becchi, G. Sandini, and G. Metta, 'The Design of the iCub,' International Journal of Humanoid Robotics, Vol. 9, No. 4, pp. 1-23, 2012. [71] R. Pieters, M. Racca, A. Veronese, and V. Kyrki, 'Human-Aware Interaction: A Memory-Inspired Artificial Cognitive Architecture,' Cognitive Robot Architectures, Vol. 1855, pp. 38-39, 2017. [72] S. Poria, E. Cambria, A. Hussain, and G.-B. Huang, 'Towards an Intelligent Framework for Multimodal Affective Data Analysis,' Neural Networks, Vol. 63, pp. 104-116, March 2015. [73] T. J. Prescott, D. Camilleri, U. Martinez-Hernandez, A. Damianou, and N. D. Lawrence, 'Memory and Mental Time Travel in Humans and Social Robots,' Philosophical Transactions of the Royal Society B, Vol. 374, No. 1771, pp. 1-11, 2019. [74] M. Qbadou, M. H. Zaggaf, I. Salhi, and K. Mansouri, 'Multilingual Verbal Interaction between Humans and Robots - Modeling and Implementation,' Proceeding of International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET), Meknes, Morocco, 2020. [75] H. Qie, D. Shi, T. Shen, X. Xu, Y. Li, and L. Wang, 'Joint Optimization of Multi-UAV Target Assignment and Path Planning Based on Multi-Agent Reinforcement Learning,' IEEE Access, Vol. 7, pp. 146264-146272, 2019. [76] P. Rane, V. 
Mhatre, and L. Kurup, 'Study of a Home Robot: Jibo,' International Journal of Engineering Research Technology, Vol. 3, No. 10, pp. 490-493, 2014. [77] H. Ranganathan, S. Chakraborty, and S. Panchanathan, 'Multimodal Emotion Recognition Using Deep Learning Architectures,' Proceeding of IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, pp. 1-9, 2016. [78] G. Rizzolatti and L. Craighero, 'The Mirror-Neuron System,' Annual Review of Neuroscience, Vol. 27, No. 1, pp. 169-192, 2004. [79] S. J. Russell, Artificial Intelligence: A Modern Approach, 4th Ed., Boston: Pearson, 2018. [80] R. B. Rusu, Z. C. Marton, N. Blodow, A. Holzbach, and M. Beetz, 'Model-Based and Learned Semantic Object Labeling in 3D Point Cloud Maps of Kitchen Environments,' IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, pp. 3601-3608, 2009. [81] E. Saad, M. A. Neerincx, and K. V. Hindriks, 'Welcoming Robot Behaviors for Drawing Attention,' Proceeding of 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Korea (South), pp. 636-637, 2019. [82] F. Sargolini, 'Conjunctive Representation of Position, Direction, and Velocity in Entorhinal Cortex,' Science, Vol. 312, No. 5774, pp. 758-762, 2006. [83] S. Satake, K. Hayashi, K. Nakatani, and T. Kanda, 'Field Trial of an Information-Providing Robot in a Shopping Mall,' Proceeding of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, pp. 1832-1839, 2015. [84] C. Schuldt, I. Laptev, and B. Caputo, 'Recognizing Human Actions: A Local SVM Approach,' Proceeding of the 17th International Conference on Pattern Recognition, Cambridge, UK, pp. 32-36, 2004. [85] L. Lei, Y. Tan, K. Zheng, S. Liu, K. Zhang, and X. S. Shen, 'Deep Reinforcement Learning for Autonomous Internet of Things: Model, Applications and Challenges,' IEEE Communications Surveys Tutorials, Vol. Early Access, pp. 1-40, 2020. [86] C. Sidner, C. D. 
Kidd, C. Lee, and N. Lesh, 'Where to Look: A Study of Human-Robot Engagement,' Proceeding of the 2004 International Conference on Intelligent User Interfaces, Funchal, Madeira, Portugal, pp. 78-84, 2004. [87] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. v. d. Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, 'Mastering the Game of Go with Deep Neural Networks and Tree Search,' Nature, Vol. 529, pp. 484-489, 2016. [88] K. Simonyan and A. Zisserman, 'Very Deep Convolutional Networks for Large-Scale Image Recognition,' Proceeding of International Conference on Learning Representations, San Diego, CA, USA, pp. 1-8, 2015. [89] S. Singh, S. A. Velastin, and H. Ragheb, 'Muhavi: A Multicamera Human Action Video Dataset for the Evaluation of Action Recognition Methods,' Proceeding of IEEE International Conference on Advanced Video and Signal Based Surveillance, Boston, MA, USA, pp. 48-55, 2010. [90] B. Smith, Q. Yin, S. K. Feiner, and S. K. Nayar, 'Gaze Locking: Passive Eye Contact Detection for Human–Object Interaction,' Proceeding of 26th annual ACM symposium on User interface software and technology, St. Andrews Scotland, United Kingdom, pp. 271-280, 2013. [91] R. K. Srivastava, K. Greff, and J. Schmidhuber, 'Training Very Deep Networks,' Proceeding of the 28th International Conference on Neural Information Processing Systems, Montréal, QC, Canada, pp. 2377-2385, 2015. [92] R. Sugawara, T. Wada, J. Liu, and Z. Wang, 'Walking Characteristics Extraction and Behavior Patterns Estimation by Using Similarity with Human Motion Map,' IEEE International Conference on Robotics and Biomimetics (ROBIO), Zhuhai, China, pp. 2047-2052, 2015. [93] M. Sugeno, 'Theory of Fuzzy Integrals and Its Applications,' Ph.D. Thesis, Tokyo Institute of Technology, Tokyo Institute of Technology, 1974. [94] H. Tang, R. Yan, and K. 
C. Tan, 'Cognitive Navigation by Neuro-Inspired Localization, Mapping and Episodic Memory,' IEEE Transactions on Cognitive and Developmental Systems, Vol. 10, No. 3, pp. 751-761, 2017. [95] K. Tang, Y. Tie, T. Yang, and L. Guan, 'Multimodal Emotion Recognition (Mer) System,' IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), Toronto, ON, Canada, pp. 1-6, 2014. [96] M. Tomasello, B. Hare, H. Lehmann, and J. Call, 'Reliance on Head Versus Eyes in the Gaze Following of Great Apes and Human Infants: The Cooperative Eye Hypothesis,' Journal of human evolution, Vol. 52, No. 3, pp. 314-320, 2007. [97] E. Tsardoulias, K. Iliakopoulou, A. Kargakos, and L. Petrou, 'A Review of Global Path Planning Methods for Occupancy Grid Maps Regardless of Obstacle Density,' Journal of Intelligent and Robotic System, Vol. 84, pp. 829-858, 2016. [98] P. Tzirakis, G. Trigeorgis, M. A. Nicolaou, B. Schuller, and S. Zafeiriou, 'End-to-End Multimodal Emotion Recognition Using Deep Neural Networks,' IEEE Journal of Selected Topics in Signal Processing, Vol. 11, No. 8, pp. 1301-1309, December 2017. [99] J. Wagner, E. Andre, F. Lingenfelser, and J. Kim, 'Exploring Fusion Methods for Multimodal Emotion Recognition with Missing Data,' IEEE Transactions on Affective Computing, Vol. 2, No. 4, pp. 206 - 218, June 2011. [100] S. Wang and Q. Ji, 'Video Affective Content Analysis: A Survey of State-of-the-Art Methods,' IEEE Transactions on Affective Computing, Vol. 6, No. 4, pp. 410-430, 2015. [101] Z. Wang, P. Jensfelt, and J. Folkesson, 'Building a Human Behavior Map from Local Observations,' 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, pp. 64-70, 2016. [102] D. Weinland, R. Ronfard, and E. Boyer, 'Free Viewpoint Action Recognition Using Motion History Volumes,' Computer Vision and Image Understanding, Vol. 104, No. 2, pp. 249-257, 2006. [103] P. W. 
Wu, 'Generalized Spatial Behavior Cognition Model and Its Applications for Intelligent Robot,' Department of Mechanical Engineering, National Taiwan University, 2011. [104] W. M. Wundt, Grundzüge Der Physiologischen Psychologie, Leipzig, Germany: Engelman, 1905. [105] S. Xiao, Z. Wang, and J. Folkesson, 'Unsupervised Robot Learning to Predict Person Motion,' IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, pp. 691-696, 2015. [106] C. Xu, P. Du, Z. Feng, Z. Meng, T. Cao, and C. Dong, 'Multi-Modal Emotion Recognition Fusing Video and Audio,' Applied Mathematics Information Sciences, Vol. 7, No. 2, pp. 455-462, March 2013. [107] A. Yamazaki, K. Yamazaki, Y. Kuno, M. Burdelski, M. Kawashima, and H. Kuzuoka, 'Precision Timing in Human-Robot Interaction: Coordination of Head Movement and Utterance,' Proceeding of the SIGCHI Conference on Human Factors in Computing Systems, Florence, Italy, pp. 131-140, 2008. [108] Y. Yamazaki, M. Yamamoto, and N. Nagata, 'Estimation of Emotional State in Personal Fabrication: Analysis of Emotional Motion Based on Laban Movement Analysis,' International Conference on Culture and Computing (Culture and Computing), Kyoto, Japan, pp. 71-74, 2017. [109] T. Yan, Y. Zhang, and B. Wang, 'Path Planning for Mobile Robot's Continuous Action Space Based on Deep Reinforcement Learning,' Proceeding of International Conference on Big Data and Artificial Intelligence (BDAI), Beijing, China, pp. 42-46, 2018. [110] W. Yan, C. Weber, and S. Wermter, 'A Neural Approach for Robot Navigation Based on Cognitive Map Learning,' Proceeding of International Joint Conference on Neural Networks, Brisbane, QLD, Australia, pp. 1-8, 2012. [111] G. Yang, S. Wang, and J. Yang, 'Desire-Driven Reasoning for Personal Care Robots,' IEEE Access, Vol. 7, pp. 75203 - 75212, 2019. [112] K. Yokoyama and K. 
Morioka, 'Autonomous Mobile Robot with Simple Navigation System Based on Deep Reinforcement Learning and a Monocular Camera,' Proceeding of IEEE/SICE International Symposium on System Integration, Honolulu, Hawaii, USA, pp. 525-530, 2020. [113] H. Zacharatos, C. Gatzoulis, and Y. L. Chrysanthou, 'Emotion Recognition Based on Body Movement Analysis: A Survey,' IEEE Computer Graphics and Applications, Vol. 34, No. 6, pp. 35 - 45, 2014. [114] J. Zeng, R. Ju, L. Qin, Y. Hu, Q. Yin, and C. Hu, 'Navigation in Unknown Dynamic Environments Based on Deep Reinforcement Learning,' Sensors, Vol. 19, pp. 1-18, 2019. [115] S. Zhalehpour, Z. Akhtar, and C. E. Erdem, 'Multimodal Emotion Recognition with Automatic Peak Frame Selection,' Proceeding of IEEE International Symposium on Innovations in Intelligent Systems and Applications (INISTA), Alberobello, Italy, pp. 1-6, 2014. [116] Y. Zhang and L. Zhang, 'Semi-Feature Level Fusion for Bimodal Affect Regression Based on Facial and Bodily Expressions,' International Conference on Autonomous Agents and Multiagent Systems, Richland, SC, pp. 1557-1565, 2015. [117] S. Zhao, H. Yao, X. Sun, P. Xu, X. Liu, and R. Ji, 'Video Indexing and Recommendation Based on Affective Analysis of Viewers,' Proceeding of 19th ACM International Conference on Multimedia, New York, United States, pp. 1473-1476, 2011. [118] M. Zheng, A. J. Moon, E. A. Croft, and M. Q. H. Meng, 'Impacts of Robot Head Gaze on Robot-to-Human Handovers,' International Journal of Social Robotics, Vol. 7, No. 5, pp. 783-798, 2015. [119] H. Zhou and M. Jiang, 'Building a Grid-Point Cloud-Semantic Map Based on Graph for the Navigation of Intelligent Wheelchair,' 21st International Conference on Automation and Computing (ICAC), Glasgow, UK, pp. 267-274, 2015. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/18844 | - |
dc.description.abstract | 隨著科技的進步,社交機器人開始逐步進入人們的生活。目前許多研究團隊致力於讓機器人適應環境、了解人類做事方式並且與人們合作做事。為了要讓機器人能夠更進一步讓人們接受,機器人需要具備理解人類行為的能力。 本論文目的旨在發展一套機器人具備人類行為認識與意圖認知的認知架構,並使用上述資訊讓機器人自主行動。在人類行為中,我們提出了多模態情緒辨識系統、人類行為認知地圖。情緒辨識系統中,我們結合臉部與身體資訊並將其結合給予更加穩定的辨識結果,以此理解人類的情緒。在人類行為認知地圖中,記錄且辨識人類在特定地點常有的行為,紀錄空間與人類行為的關係。藉著於意圖偵測中觀察人類注視方向與距離、速度等觀測值來建立對話時候的意圖模型。最後在機器人的行動規劃中,使用深度增強學習演算法學習空間影響與人類行為認知地圖的資訊,並給予機器人自主行走的能力。 最後本論文展示結合上述各功能的認知架構之演算法。使得機器人能夠展現出預測行人意圖並主動給予協助的能力,同時機器人也能表現出符合社會規範的行為。 | zh_TW |
dc.description.abstract | In recent years, social robots have been developed and have entered our lives, and they are acting more and more like humans. Many researchers focus on developing behaviors that let robots adapt to various environments, better understand human decisions, and collaborate with people. To reach this goal and make robots more easily accepted as part of human society, robots must be able to understand human behavior. This dissertation develops a cognitive architecture that combines the understanding of human behaviors and intentions. In the human behavior understanding module, a multimodal emotion recognition system and a human behavior cognitive map are proposed. The multimodal emotion recognition system fuses facial and bodily information to achieve a more stable recognition result. In the human behavior cognitive map, human actions are recognized, and the relationships between locations and actions in a specific environment are recorded. Furthermore, an intention model for conversation is established by observing the human gaze direction, the distance between the human and the robot, and the walking velocity toward the robot. The motion planning method is built on deep reinforcement learning, which learns from the spatial influence and the human behavior cognitive map, giving the robot the ability to move autonomously. Finally, the experiments demonstrate the cognitive architecture that combines the above functions. As a result, the robot can infer human intentions and actively offer help with people's needs, while behaving in accordance with social norms. | en |
dc.description.provenance | Made available in DSpace on 2021-06-08T01:37:43Z (GMT). No. of bitstreams: 1 U0001-1808202014115000.pdf: 5355435 bytes, checksum: 815e924c4e01f0fd142a0aeaaa2f43fb (MD5) Previous issue date: 2020 | en |
dc.description.tableofcontents | 致謝 i 摘要 iii Abstract v List of Tables xi List of Figures xiii Chapter 1 Introduction 1 1.1 Social Robots 1 1.2 Cognitive Architecture 4 1.2.1 Multimodal Emotion Recognition 5 1.2.2 Cognitive Map 6 1.2.3 Intention Model 8 1.2.4 Motion Planning following Social Norms 9 1.3 The Framework of the Dissertation 10 1.4 Dissertation Statement and Contribution 12 Chapter 2 Multimodal Emotion Recognition System 13 2.1 Emotion Recognition System 13 2.2 Facial and Bodily Modalities 16 2.2.1 Facial expression features 17 2.2.2 Bodily expression features 18 2.2.3 Feature Normalization 23 2.3 Recognition and Fusion Strategies 24 2.3.1 Emotion recognition 24 2.3.2 Multimodal fusion 25 2.4 Experimental Results 27 2.4.1 Data collection and Training 27 2.4.2 Experimental results 30 2.5 Conclusion 38 Chapter 3 Construction of Human Behavior Cognitive Map for Robots 41 3.1 Introduction 41 3.2 Human Behavior Recognition 43 3.2.1 2D Pose Estimation 44 3.2.2 Body Behavior Recognition Model 47 3.2.3 Hand Behavior Identification Model 52 3.3 Behavior Cognitive Map 54 3.3.1 Recording Behavior 55 3.3.2 Structure of Behavior Cognitive Map 59 3.3.3 Behavior Identification 61 3.4 Experimental Results 63 3.4.1 Human Behavior Recognition Experiments 63 3.4.2 Experiments Involving Abnormal Behaviors 67 3.4.3 Summary 71 3.5 Conclusion 72 Chapter 4 Human Robot Interaction with Intention Model 75 4.1 Introduction 75 4.2 Gaze Model 79 4.3 Human Attention Model 84 4.4 HMM-based Human Model 87 4.4.1 Coupled Hidden Markov Models 88 4.4.2 Update Rules and Online Learning 91 4.4.3 State Estimation 93 4.4.4 Overall Structure 94 4.5 Experiments 95 4.5.1 Human Attention Model 95 4.5.2 iCHMM Model 97 4.6 Conclusion 103 Chapter 5 Motion Planning based on Social Norm Map 105 5.1 Introduction 105 5.2 Social Norm Map 107 5.2.1 Generalized Spatial Behavior Cognitive Model 107 5.2.2 Human behavior cognitive map 112 5.2.3 Establishing the SNM 115 5.3 Deep Reinforcement Learning 116 5.4 
Experiments 119 5.5 Conclusion 126 Chapter 6 Conclusion and Future Works 127 6.1 Summary 127 6.1.1 Human Behavior Understanding 127 6.1.2 Intention Model 128 6.1.3 SNM-based Path Planning 128 6.2 Future Works 129 6.2.1 Life-long learning 129 6.2.2 Extended HBCM 129 6.3 Conclusion 130 References 131 Biography 143 | |
dc.language.iso | en | |
dc.title | 移動機器人之認知結構:人類行為與意圖的研究 | zh_TW |
dc.title | Cognitive Architecture for Mobile Robots: Studies of Human Behavior and Intention | en |
dc.type | Thesis | |
dc.date.schoolyear | 108-2 | |
dc.description.degree | 博士 | |
dc.contributor.oralexamcommittee | 傅楸善(Chiou-Shann Fuh),李蔡彥(Tsai-Yen Li),王鈺強(Yu-Chiang Frank Wang),劉益宏(Yi-Hung Liu) | |
dc.subject.keyword | 認知架構,人類行為認識,多模態情緒模型,意圖模型,運動規劃,深度增強學習, | zh_TW |
dc.subject.keyword | Cognitive Architecture,Human Behavior Understanding,Multimodal Emotion Model,Intention Model,Motion Planning,Deep Reinforcement Learning, | en |
dc.relation.page | 145 | |
dc.identifier.doi | 10.6342/NTU202003967 | |
dc.rights.note | Not authorized for public access | |
dc.date.accepted | 2020-08-19 | |
dc.contributor.author-college | 工學院 | zh_TW |
dc.contributor.author-dept | 機械工程學研究所 | zh_TW |
Appears in Collections: | 機械工程學系 |
Files in This Item:
File | Size | Format | |
---|---|---|---|
U0001-1808202014115000.pdf (currently not authorized for public access) | 5.23 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.