Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/56193
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 傅立成(Li-Chen Fu) | |
dc.contributor.author | Ting-Sheng Chu | en |
dc.contributor.author | 朱庭升 | zh_TW |
dc.date.accessioned | 2021-06-16T05:18:27Z | - |
dc.date.available | 2017-08-21 | |
dc.date.copyright | 2014-08-21 | |
dc.date.issued | 2014 | |
dc.date.submitted | 2014-08-16 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/56193 | - |
dc.description.abstract | 社交互動是維持人們社交關係的一種重要方式,而人們的社交關係對於人與人間的人際關係也有著直接的影響,並且對於人們的心理狀態以及生理狀態都扮演著一個重要的影響因素,尤其是對於年紀較大的年長者們。由於近年來機器人領域的快速發展,透過機器人的輔助來加強人們的社交互動已經是可以被社會大眾所期待的。因此我們希望藉由推薦合適的社交活動並且提供相對應的輔助來賦與機器人增進人們社交互動的能力。
以此方向為目標,在本碩士論文之中我們開發一創新的活動推薦系統能夠適用於此種社交輔助機器人,且透過自我社交網路的分析可推論出適當的社交活動並應用於多人的環境之下。 在此系統之中我們首先提出一新穎的概念,即為結合機器人的視角及第一人稱視角來建立自我社交網路。根據我們所進行的案例研究,本論文提出四類社交互動的特徵用來感知人與人之間的互動親密程度以做為社交網路中的資訊。接著我們以先前所建立的自我社交網路做為基礎,創建一社交活動推薦模型用以推薦合適之社交活動。最後,透過在本論文中針對於各個部分進行詳細的實驗,本系統被證明為有能力可以推論以及推薦適當的社交活動並且提供合適的服務來輔助人們的社交互動。 | zh_TW |
dc.description.abstract | Social interaction is an important means of maintaining our social relationships. It directly affects interpersonal relationships and is an important factor influencing people's mental and physiological condition, especially for elders. Owing to the rapid development of robotics in recent years, robotic assistance that enhances social interaction among humans has become a realistic expectation. For this reason, we aim to endow robots with the ability to help people promote social interaction by recommending appropriate social activities and providing corresponding assistance.
With this aim, in this thesis we develop an innovative activity recommendation system for such a socially assistive robot, based on ego social network analysis in a multi-human environment. First, we introduce the novel idea of combining a first-person camera with the robot's camera to construct an ego social network. Based on a user study we conducted, we then propose four types of social interaction features for perceiving the intimacy level between people, which serve as the information in the network. Next, building on the constructed ego social network, we present a social activity recommendation model that recommends appropriate activities. Finally, through several experiments evaluating each component, we demonstrate that our system is able to infer and recommend appropriate social activities and to offer pertinent assistance for people's social interactions. | en
dc.description.provenance | Made available in DSpace on 2021-06-16T05:18:27Z (GMT). No. of bitstreams: 1 ntu-103-R01922048-1.pdf: 24846395 bytes, checksum: 55e53437f0a74f19b622afd820743f9b (MD5) Previous issue date: 2014 | en |
dc.description.tableofcontents | Contents
Thesis Committee Certification i
Acknowledgements ii
Abstract (Chinese) iv
Abstract v
Contents vi
List of Figures ix
List of Tables xi
1 Introduction 1
1.1 Motivation 1
1.2 Objective and Contributions 3
1.3 Related Work 4
1.3.1 Perceiving Intimacy 5
1.3.2 Ego Social Network 6
1.3.3 Multi-human Social Activity Recommendation 7
1.4 System Overview 7
1.5 Thesis Organization 9
2 Preliminaries 10
2.1 Robot Perception System 10
2.1.1 Human Detection and Tracking 11
2.1.2 Face Direction Estimation 13
2.2 Support Vector Machine 15
2.2.1 Linear Support Vector Machine 15
2.2.2 Kernel Support Vector Machine 18
2.3 Conditional Random Field 20
2.3.1 Chain Structure Conditional Random Field 21
2.3.2 Network Structure Conditional Random Field 22
3 Ego Social Network Construction 24
3.1 User Study of Social Interaction 24
3.1.1 Proxemics Social Signal 25
3.1.2 Non-verbal Social Signal 25
3.1.3 Verbal Social Signal 26
3.1.4 Temporal Social Signal 27
3.2 First-Person Camera 27
3.3 Robot's View and First-Person View Cooperation 28
3.3.1 Global Information 28
3.3.2 Local Information 29
3.3.3 Information Fusion 30
3.4 Social Interaction Feature 32
3.4.1 Proxemics Interaction Feature 32
3.4.2 Non-verbal Interaction Feature 33
3.4.3 Verbal Interaction Feature 36
3.4.4 Temporal Interaction Feature 37
3.5 Intimacy Perceiving 37
3.6 Ego Social Network Construction 38
4 Activity Recommendation Model 41
4.1 Multi-human Social Activity 41
4.2 Ego Social Network Analysis 42
4.3 Hidden Conditional Random Field 43
4.3.1 General Hidden Conditional Random Field 43
4.3.2 Activity Recommendation Model using Hidden Conditional Random Field 45
4.4 Learning and Inference Method 48
4.5 Activity Recommendation List 49
5 Evaluation 51
5.1 Reliability of Sensing Component 51
5.1.1 Body Direction 52
5.1.2 Face Direction 53
5.1.3 Social Flags 54
5.2 Perceiving Intimacy for Network Construction 55
5.2.1 Subjects and Environment 56
5.2.2 Comparison of Different Types of Social Interaction Features 58
5.2.3 Result of Selected Features 59
5.3 Evaluation of Activity Recommendation Model 60
5.3.1 Social Interaction Data Collection 60
5.3.2 Accuracy of Inference for Individual Task 62
5.3.3 Mean Average Precision of Recommendation List 63
5.4 Overall Scenario Testing 66
5.4.1 First Scenario 66
5.4.2 Second Scenario 67
5.4.3 Third Scenario 69
6 Conclusion and Future Works 71
6.1 Conclusion 71
6.2 Future Works 73
References 74 | |
dc.language.iso | en | |
dc.title | 應用於多人環境下以自我社交網路分析達成社交輔助機器人之活動推薦系統 | zh_TW |
dc.title | Activity Recommendation System for Social Assistive Robot Based on Ego Social Network Analysis in Multi-Human Environment | en |
dc.type | Thesis | |
dc.date.schoolyear | 102-2 | |
dc.description.degree | 碩士 (Master's) | |
dc.contributor.oralexamcommittee | 胡竹生(Jwu-Sheng Hu),宋開泰(Kai-Tai Song),王傑智(Chieh-Chih Wang),羅仁權(Ren C. Luo) | |
dc.subject.keyword | 社交機器人, 社交輔助, 自我社交網路, 活動推薦, 第一人稱視角攝影機 | zh_TW |
dc.subject.keyword | Social Robot, Social Assistance, Ego Social Network, Activity Recommendation, First-Person Camera | en |
dc.relation.page | 80 | |
dc.rights.note | 有償授權 (paid authorization) | |
dc.date.accepted | 2014-08-17 | |
dc.contributor.author-college | 電機資訊學院 (College of Electrical Engineering and Computer Science) | zh_TW |
dc.contributor.author-dept | 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) | zh_TW |
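To make the pipeline summarized in the abstract more concrete (interaction features, intimacy estimation, ego social network, activity ranking), the following is a minimal Python sketch of that data flow only. All identifiers (`InteractionFeatures`, `estimate_intimacy`, `build_ego_network`, `recommend_activities`), the feature averaging, and the intimacy-matching ranking are assumptions invented for illustration; the thesis itself uses an SVM for intimacy perception and a hidden conditional random field for activity recommendation.

```python
# Minimal, hypothetical sketch of the data flow summarized in the abstract:
# social interaction features -> intimacy estimate -> ego social network ->
# ranked activity list.  The averaging score and intimacy-matching ranking
# below are placeholders; the thesis trains an SVM for intimacy perception
# and a hidden conditional random field for recommendation.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class InteractionFeatures:
    """One person's interaction cues relative to the ego (illustrative only)."""
    proxemics: float    # e.g. normalized interpersonal distance
    non_verbal: float   # e.g. mutual face direction / gesture cues
    verbal: float       # e.g. share of speaking turns together
    temporal: float     # e.g. accumulated interaction time


def estimate_intimacy(f: InteractionFeatures) -> float:
    """Placeholder intimacy score in [0, 1] (the thesis uses a trained SVM)."""
    return (f.proxemics + f.non_verbal + f.verbal + f.temporal) / 4.0


def build_ego_network(observations: Dict[str, InteractionFeatures]) -> Dict[str, float]:
    """Edges from the ego (the first-person camera wearer) to each observed
    person, weighted by estimated intimacy."""
    return {person: estimate_intimacy(feat) for person, feat in observations.items()}


def recommend_activities(network: Dict[str, float],
                         activities: Dict[str, float]) -> List[str]:
    """Rank candidate activities by how closely their nominal intimacy
    requirement matches the ego network (placeholder for the thesis's HCRF)."""
    mean_intimacy = sum(network.values()) / max(len(network), 1)
    ranked = sorted(activities.items(), key=lambda kv: abs(kv[1] - mean_intimacy))
    return [name for name, _ in ranked]


if __name__ == "__main__":
    observations = {
        "Alice": InteractionFeatures(0.8, 0.7, 0.6, 0.9),
        "Bob": InteractionFeatures(0.3, 0.2, 0.4, 0.1),
    }
    ego_network = build_ego_network(observations)
    print(recommend_activities(ego_network, {"board game": 0.7, "group exercise": 0.3}))
```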
Appears in Collections: | Department of Computer Science and Information Engineering
Files in This Item:
File | Size | Format |
---|---|---|
ntu-103-1.pdf (currently not authorized for public access) | 24.26 MB | Adobe PDF |