NTU Theses and Dissertations Repository
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/46594
Title: Animating Lip-Sync Characters with Dominated Animeme Models (利用動作元素主導模式之角色對嘴動畫)
Authors: Yu-Mei Chen (陳裕美)
Advisor: Bing-Yu Chen (陳炳宇)
Keyword: speech animation, lip-sync speech animation, facial animation
Publication Year: 2010
Degree: Master
Abstract: Speech animation is traditionally considered important but tedious work for most applications, especially when lip synchronization (lip-sync) is taken into account, because the muscles on the face are complex and interact dynamically. Although several methods have been proposed to ease the burden of creating facial and speech animation, almost none are fast and efficient. In this thesis, we introduce a framework for synthesizing lip-sync character speech animation from a given speech sequence and its corresponding text. We first cluster the training data and train a dominated animeme model for every group within each kind of phoneme by learning the character's animation control signals through an EM-style optimization, and we decompose each dominated animeme model into a polynomial-fitted animeme model and a corresponding Gaussian dominance function that accounts for coarticulation. Finally, given a novel speech sequence and its corresponding text, a lip-sync character animation can be synthesized in a very short time with the dominated animeme models. The synthesized lip-sync animation preserves even exaggerated characteristics of the character's facial geometry. Moreover, since our method synthesizes an acceptable and robust lip-sync animation in almost real time, it can be used for many applications, such as lip-sync animation prototyping, multilingual animation reproduction, avatar speech, and mass animation production.
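
To make the synthesis stage described in the abstract concrete, the following is a minimal Python sketch, not the thesis's implementation: each phoneme contributes a polynomial-fitted animeme weighted by a Gaussian dominance function, and overlapping dominance blends neighboring phonemes to approximate coarticulation. All class names, coefficients, and the phoneme track below are hypothetical illustrations; in the thesis these parameters are learned from animation control signals via an EM-style optimization.

import numpy as np

class DominatedAnimeme:
    """Hypothetical per-phoneme model: a polynomial animeme (control-signal
    shape) plus a Gaussian dominance function modelling coarticulation."""
    def __init__(self, poly_coeffs, center, width):
        self.poly = np.polynomial.Polynomial(poly_coeffs)  # animeme shape
        self.center = center  # dominance peak, as a fraction of the interval
        self.width = width    # dominance spread, relative to interval length

    def animeme(self, t_local):
        # Polynomial-fitted animeme on local phoneme time in [0, 1].
        return self.poly(t_local)

    def dominance(self, t, start, end):
        # Gaussian dominance centered inside the phoneme interval.
        mu = start + self.center * (end - start)
        sigma = self.width * (end - start)
        return np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def synthesize_control_signal(phoneme_track, models, t):
    """Dominance-weighted average of per-phoneme animemes, so that
    overlapping phonemes blend into one another (coarticulation)."""
    num = np.zeros_like(t)
    den = np.full_like(t, 1e-9)  # avoid division by zero outside all phonemes
    for phoneme, start, end in phoneme_track:
        m = models[phoneme]
        t_local = np.clip((t - start) / (end - start), 0.0, 1.0)
        d = m.dominance(t, start, end)
        num += d * m.animeme(t_local)
        den += d
    return num / den

# Toy usage with made-up phonemes, coefficients, and timings.
models = {
    "m": DominatedAnimeme([0.0, 1.2, -1.2], center=0.5, width=0.6),
    "a": DominatedAnimeme([0.8, 0.4, -0.9], center=0.4, width=0.8),
}
track = [("m", 0.00, 0.12), ("a", 0.10, 0.30)]  # (phoneme, start, end) in sec
t = np.linspace(0.0, 0.3, 120)                  # frame times
signal = synthesize_control_signal(track, models, t)

Because evaluation reduces to a dominance-weighted sum of low-order polynomials, generating new control signals is cheap, which is consistent with the near-real-time synthesis claim in the abstract.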
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/46594
Fulltext Rights: Paid access (有償授權)
Appears in Collections: Department of Computer Science and Information Engineering (資訊工程學系)

Files in This Item:
File: ntu-99-1.pdf (Restricted Access)
Size: 21.14 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
