NTU Theses and Dissertations Repository

Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/25776

Full metadata record (DC field, language, value):
dc.contributor.advisor: 王勝德 (Sheng-De Wang)
dc.contributor.author (en): Yi-Chih Liu
dc.contributor.author (zh_TW): 劉奕志
dc.date.accessioned: 2021-06-08T06:29:29Z
dc.date.copyright: 2006-07-28
dc.date.issued: 2006
dc.date.submitted: 2006-07-26
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/25776
dc.description.abstract (zh_TW): Face models that can change expression are essential to 3D games, movies, online chat systems, networked virtual presence, and video conferencing systems. Most current commercial approaches build a face model by scanning a real person with a 3D scanner, but the critical drawback is that expensive 3D scanners are out of reach for ordinary users.
In this thesis we propose a method for building a face model with only an inexpensive digital camera and a computer. Realistic facial expression animations are generated efficiently from just two photographs. The model simulates expressions on a muscle-model basis, and the facial muscle parameters are obtained from the captured image sequences.
We also propose a face detection algorithm. First, the YCbCr color space is used to detect the approximate face region. The symmetry of the face and the gray-level characteristics of the eyes and mouth are then used to locate the required feature points. When the expression changes, the amount of deformation is computed from the feature-point positions, and realistic facial animation is generated from these deformations.
We verified the feasibility of the system on Windows XP, and tuned the number of polygons in the face model and the elastic coefficients of the facial muscles to produce high-quality animation. Reasonable results were obtained, and we hope this technique can be applied to video conferencing systems on mobile phones in the future.
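The abstract's first detection step is YCbCr skin-color segmentation of an approximate face region. The following is a minimal sketch of that step, assuming OpenCV and commonly cited Cr/Cb skin ranges; the exact thresholds, preprocessing, and implementation used in the thesis are not given in this record.

    # Minimal sketch of YCbCr skin-color segmentation for a rough face region.
    # The Cr/Cb ranges below are commonly cited values, not the thesis' own thresholds.
    import cv2
    import numpy as np

    def detect_face_region(bgr_image):
        """Return the bounding box (x, y, w, h) of the largest skin-colored blob."""
        ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)  # OpenCV channel order: Y, Cr, Cb
        # Threshold the chrominance channels only, so the mask is less sensitive to brightness (Y).
        lower = np.array([0, 133, 77], dtype=np.uint8)
        upper = np.array([255, 173, 127], dtype=np.uint8)
        mask = cv2.inRange(ycrcb, lower, upper)
        # Close small holes before taking the largest connected component as the face candidate.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
        if not contours:
            return None
        return cv2.boundingRect(max(contours, key=cv2.contourArea))

Thresholding only Cr and Cb keeps the skin mask relatively insensitive to overall brightness, which is the usual reason for preferring YCbCr over RGB for this step.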
dc.description.abstract (en): Animated face models are essential to 3D games, movies, online chat, virtual presence, and video conferencing. Most commercially available tools acquire facial geometry with 3D laser scanners, but such scanners are expensive and not widely accessible. In this thesis, we present a method that produces face models directly from images captured by an inexpensive video camera and an ordinary computer. It synthesizes realistic facial expressions efficiently from only two facial images using a 3D facial muscle model, which simulates facial dynamics through muscle-based computation; the facial muscle parameters are estimated from captured image sequences.
Moreover, a face detection algorithm is proposed. First, a YCbCr skin-color model is used to detect the candidate face region in the image. Second, the facial feature points are located using the symmetry of the face and the gray-level characteristics of the eyes and mouth. From the positions of these feature points, the deformation of the face is measured when an expression appears, and realistic facial animations are synthesized from the measured deformation.
To demonstrate feasibility, the system was implemented on a Windows XP PC. Conditions for high-quality animation were identified by tuning the number of polygons forming the 3D face model and the stiffness values of the spring models embedded in it. Reasonable facial expression animations were obtained, and we hope the method can be applied to video conferencing systems on mobile phones in the future.
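The abstract attributes animation quality to the polygon count of the face model and to the stiffness of the spring models embedded in it, with measured feature-point displacements driving the deformation. The sketch below only illustrates that general idea: it propagates given feature-point displacements through a toy mass-spring mesh by Hooke's-law relaxation. The mesh, stiffness value, step size, and iteration scheme are assumptions for illustration, not the muscle model used in the thesis.

    # Illustrative sketch: spread measured feature-point displacements over a face mesh
    # with a naive mass-spring relaxation. Not the thesis' formulation.
    import numpy as np

    def relax_mesh(vertices, edges, anchor_ids, anchor_disp, stiffness=0.5, iters=100):
        """vertices: (N, 3) rest positions; edges: list of (i, j) vertex index pairs;
        anchor_ids / anchor_disp: tracked feature points and their measured displacements."""
        rest = vertices.copy()
        pos = vertices.copy()
        pos[anchor_ids] = rest[anchor_ids] + anchor_disp      # pin feature points at new positions
        rest_len = {(i, j): np.linalg.norm(rest[i] - rest[j]) for i, j in edges}
        for _ in range(iters):
            force = np.zeros_like(pos)
            for i, j in edges:
                d = pos[j] - pos[i]
                length = np.linalg.norm(d) + 1e-9
                # Hooke's law: a stretched spring pulls its endpoints together.
                f = stiffness * (length - rest_len[(i, j)]) * (d / length)
                force[i] += f
                force[j] -= f
            pos += 0.1 * force                                # small explicit step toward equilibrium
            pos[anchor_ids] = rest[anchor_ids] + anchor_disp  # keep tracked points fixed
        return pos

Higher stiffness or more iterations lets the anchor displacements reach farther into the mesh, consistent with the abstract's point that spring stiffness is one of the parameters controlling animation quality.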
dc.description.provenance (en): Made available in DSpace on 2021-06-08T06:29:29Z (GMT). No. of bitstreams: 1. ntu-95-P93921004-1.pdf: 1670900 bytes, checksum: 99253d554ac2569ce14cb10d7c837891 (MD5). Previous issue date: 2006.
dc.description.tableofcontents:
  Abstract (Chinese)
  Abstract
  Acknowledgements
  Contents
  List of Figures
  List of Tables
  Chapter 1 Introduction
    1.1 Motivation
    1.2 Objectives
    1.3 Related Works
    1.4 System Overview
    1.5 Thesis Organization
  Chapter 2 Face Modeling
    2.1 Facial Muscle Model
    2.2 Regions of Face Model
    2.3 Automatic Modeling
  Chapter 3 System Implementation
    3.1 The Proposed System
    3.2 System Implementation
    3.3 Comparison of 3D Face Modeling Tools
    3.4 Face Tracking
    3.5 Displacement of Feature Points and Non-feature Points
    3.6 Facial Expressions
    3.7 Facial Animations
  Chapter 4 Face Detection and Facial Feature Extraction
    4.1 Color-based Approach
    4.2 Sobel Filter and Wavelet Transformation
    4.3 Symmetry-based Approach
    4.4 Facial Feature Extraction
    4.5 Estimation of the Movement of the Feature Points
  Chapter 5 Experimental Results and Discussion
    5.1 Relationship Between Quality and Number of Polygons
    5.2 Relationship Between Quality and Spring Constant
    5.3 Results of Facial Expression Animations
    5.4 Comparisons with Candide
    5.5 Discussion
  Chapter 6 Conclusion and Future Work
    6.1 Conclusion
    6.2 Future Work
  References
dc.language.iso: en
dc.subject (zh_TW): 臉部表情; 人臉偵測; 肌肉模型; 3D人臉模型
dc.subject (en): muscle model; 3D face model; facial expression; face detection
dc.title (zh_TW): 以偵測特徵變形來建構逼真3D臉部動畫
dc.title (en): Realistic 3D Facial Animation Using Parameter-based Deformation
dc.type: Thesis
dc.date.schoolyear: 94-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 鄭士康, 傅楸善, 李嘉晃
dc.subject.keyword (zh_TW): 3D人臉模型, 臉部表情, 人臉偵測, 肌肉模型
dc.subject.keyword (en): 3D face model, facial expression, face detection, muscle model
dc.relation.page: 71
dc.rights.note: 未授權 (not authorized for public access)
dc.date.accepted: 2006-07-26
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 電機工程學研究所 (Graduate Institute of Electrical Engineering)
Appears in collections: 電機工程學系 (Department of Electrical Engineering)

Files in this item:
File: ntu-95-1.pdf, Size: 1.63 MB, Format: Adobe PDF, Access: restricted (not authorized for public access)

