NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/43599
Full metadata record
dc.contributor.advisor: 李明穗
dc.contributor.author: Chia-Han Chang (en)
dc.contributor.author: 張家翰 (zh_TW)
dc.date.accessioned: 2021-06-15T02:24:07Z
dc.date.available: 2009-08-19
dc.date.copyright: 2009-08-19
dc.date.issued: 2009
dc.date.submitted: 2009-08-18
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/43599
dc.description.abstract: While observing users interacting with the 3D Magic Crystal Ball (a display that presents a virtual 3D artifact inside a transparent crystal ball and lets users manipulate the virtual artifact directly with their hands), we noticed many interesting user reactions, and we wanted to collect each user's reactions and compile them into a personal memento. We therefore propose an "Interest Meter," built on several human cognitive models, that lets the computer better understand user reactions: whenever the user interacts with the computer, the Interest Meter measures the user's interest level in real time from those reactions, which include eye movement, blinking, head motion, and facial expression. We also present an experiment verifying that the Interest Meter can indeed measure a user's interest level. Finally, we give two integration examples: first, in the 3D Magic Crystal Ball interactive system, interesting frames are assembled into the user's personal memento according to the user's reactions; second, the user's reactions while watching self-recorded videos are fed into a video-editing system that automatically produces an MV-style highlight video. Our applications and experiments confirm that the proposed Interest Meter can measure user interest and, within these applications, enable more appropriate interaction. (zh_TW)
dc.description.abstract: During users' interaction with the 3D Magic Crystal Ball, an interactive visual display system that lets users see a 3D virtual artifact inside a transparent glass ball and manipulate it with bare hands, we observed many interesting user reactions. We want to collect each user's interesting reactions and combine them into a personal memory. We therefore propose the Interest Meter, a system based on multimodal interfaces that lets the computer understand a user's reactions and measure the user's interest in real time. The Interest Meter takes account of users' spontaneous reactions while they interact with computers. In this work, we analyze variations in the user's eye movement, blinking, head motion, and facial expression during interaction. Furthermore, we propose a method for combining these signals into an interest level and verify experimentally that it works. The thesis presents two integrated applications: the first produces a personal memory for each user according to the user's reactions to the Magic Crystal Ball; the second automatically edits an MV-style home video based on the user's reactions while watching home videos. Our experiments and applications show that the Interest Meter can measure user interest and substantially improve the interaction. (en)
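The abstracts above describe fusing eye movement, blinking, head motion, and facial expression into a single real-time interest level, but this record does not spell out the fusion rule (the thesis covers it in Sections 4.3.1-4.3.2 of the table of contents below). As a rough illustration only, the Python sketch that follows assumes each signal is pre-normalized to [0, 1] and that fusion is a simple weighted linear combination; every name in it (Reactions, attention_score, emotion_score, w_attention) is hypothetical and not taken from the thesis.

# Minimal runnable sketch of the kind of score fusion the abstract describes.
# Assumptions, not from the thesis: signals are normalized to [0, 1],
# attention favors a steady head and steady gaze, and fusion is linear.

from dataclasses import dataclass

@dataclass
class Reactions:
    head_motion: float            # normalized head-motion magnitude (0 = still)
    blink_rate: float             # normalized blink frequency
    saccade_rate: float           # normalized saccade frequency
    expression_positivity: float  # facial-expression score (0 = neutral)

def clamp01(x: float) -> float:
    return max(0.0, min(1.0, x))

def attention_score(r: Reactions) -> float:
    """Attention is high when the head is steady and the eyes fixate."""
    return clamp01(1.0 - (r.head_motion + r.blink_rate + r.saccade_rate) / 3.0)

def emotion_score(r: Reactions) -> float:
    """Emotion score taken directly from facial-expression positivity."""
    return clamp01(r.expression_positivity)

def interest_score(r: Reactions, w_attention: float = 0.5) -> float:
    """Fuse the two model outputs; w_attention is the adjustable weight."""
    return w_attention * attention_score(r) + (1.0 - w_attention) * emotion_score(r)

if __name__ == "__main__":
    sample = Reactions(head_motion=0.1, blink_rate=0.2,
                       saccade_rate=0.3, expression_positivity=0.8)
    print(f"interest = {interest_score(sample):.2f}")  # prints: interest = 0.80

In the thesis the weights are adjusted over time (Section 4.3.2, "Weighting Adjustment"); a fixed w_attention is used here only to keep the sketch self-contained.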
dc.description.provenance: Made available in DSpace on 2021-06-15T02:24:07Z (GMT). No. of bitstreams: 1. ntu-98-R96944023-1.pdf: 3262087 bytes, checksum: 034fb1f831d35709bfec91eaf430efe7 (MD5). Previous issue date: 2009. (en)
dc.description.tableofcontents:
1 Introduction 1
2 Related Work 5
2.1 Unimodal Interface vs. Multimodal Interface 5
2.2 Multimodality in Emotion Recognition 6
2.3 Real-Time Affective Multimodal Interactive Applications 7
3 System Framework 12
4 Interest Meter 15
4.1 Attention Model 15
4.1.1 Head Motion Detection 16
4.1.2 Blinking Detection 17
4.1.3 Saccade Detection 18
4.1.4 Attention Score Computing 19
4.2 Emotion Model 20
4.2.1 Facial Expression Recognition 21
4.2.2 Emotion Score Computing 26
4.3 Information Fusion 27
4.3.1 Interest Score Computing 29
4.3.2 Weighting Adjustment 29
4.4 Experiments 32
5 Applications 37
5.1 Magic Crystal Ball 37
5.2 MV-Style Home Video Automatic Editing System 41
6 Conclusion and Future Work 43
6.1 Conclusion 43
6.2 Future Work 43
Bibliography 45
dc.language.iso: en
dc.title: 基於人類認知模型的即時使用者興趣計量系統及其應用 (zh_TW)
dc.title: A Real-Time User Interest Meter Based on Human Cognitive Model and Its Applications (en)
dc.type: Thesis
dc.date.schoolyear: 97-2
dc.description.degree: 碩士 (Master's)
dc.contributor.coadvisor: 洪一平
dc.contributor.oralexamcommittee: 歐陽明, 林國平, 石勝文
dc.subject.keyword: 人機互動, 情意運算, 人臉表情辨識, 人眼偵測, 人類認知模型 (zh_TW)
dc.subject.keyword: Human Computer Interaction, Affective Computing, Eyes Detection, Human Facial Expression Recognition, Human Cognitive Model (en)
dc.relation.page: 47
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2009-08-18
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) (zh_TW)
dc.contributor.author-dept: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia) (zh_TW)
Appears in Collections: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)

Files in This Item:
File: ntu-98-1.pdf | Size: 3.19 MB | Format: Adobe PDF | Access: currently not authorized for public access

