  1. NTU Theses and Dissertations Repository
  2. College of Electrical Engineering and Computer Science
  3. Graduate Institute of Networking and Multimedia
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/102273
Title: 基於高斯潑濺之即時虛擬臉部動畫生成與控制
Real-time Facial Animation Generation and Control via Gaussian Splatting
Authors: 陳乙馨
I-Hsin Chen
Advisor: 洪一平
Yi-Ping Hung
Keyword: 高斯潑濺, 臉部動畫, 臉部參數模型, 三維臉部, 即時虛擬人臉控制
Gaussian Splatting, Facial Animation, Facial Blendshapes, 3D Head Avatar, Real-time Avatar Control
Publication Year: 2026
Degree: Master's
Abstract: We present a real-time system for animating photorealistic 3D Gaussian head avatars, driven by motion data from consumer facial motion-capture (mocap) software. At the core of our framework is an efficient regression model that maps mocap-derived blendshape parameters to the expression space of a parametric face model, enabling direct control of the Gaussian avatar without time-consuming iterative optimization. Our design also incorporates a lightweight expression-regularization mechanism that improves animation stability and expressiveness by encouraging semantically disentangled, identity-preserving deformations.

Through extensive evaluations, implemented with ARKit as the mocap source and FLAME as the target parametric model, we show that our method outperforms both fitting-based and regression-based baselines in animation quality and latency. The system supports both live streaming and offline reenactment, enabling efficient real-time avatar control for virtual meetings and social telepresence.
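The core mapping the abstract describes, from ARKit-style blendshape weights to FLAME expression coefficients, can be sketched as a ridge regression. This is a minimal illustration only: the thesis's actual regressor architecture, training data, and regularizer are not specified in this record, so the dimensions (52 ARKit blendshapes, a 50-dimensional FLAME expression vector), the synthetic training pairs, and the regularization weight below are all assumptions.

```python
import numpy as np

# Hypothetical dimensions: ARKit exposes 52 blendshape coefficients;
# a 50-dim FLAME expression vector is assumed here for illustration.
N, D_IN, D_OUT = 200, 52, 50

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (N, D_IN))   # ARKit blendshape weights in [0, 1]
Y = rng.normal(0.0, 0.1, (N, D_OUT))   # paired FLAME expression targets (synthetic)

# Ridge weight: a simple stand-in for the expression regularization
# described in the abstract, not the thesis's actual mechanism.
lam = 1e-2

# Closed-form ridge solution: W = (X^T X + lam I)^{-1} X^T Y
W = np.linalg.solve(X.T @ X + lam * np.eye(D_IN), X.T @ Y)

def blendshapes_to_flame(arkit_weights: np.ndarray) -> np.ndarray:
    """Map one frame of ARKit blendshape weights to FLAME expression params."""
    return arkit_weights @ W

frame = rng.uniform(0.0, 1.0, D_IN)    # one captured frame
expr = blendshapes_to_flame(frame)
print(expr.shape)                       # (50,)
```

A single matrix multiply per frame is what makes this style of mapping suitable for real-time driving, in contrast to per-frame iterative fitting.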
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/102273
DOI: 10.6342/NTU202600916
Fulltext Rights: Authorized (access limited to campus)
Embargo lift date: 2026-05-01
Appears in Collections: Graduate Institute of Networking and Multimedia

Files in This Item:
File: ntu-114-2.pdf (24.1 MB, Adobe PDF) — access limited to NTU IP range


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
