Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97708
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 洪一平 | zh_TW
dc.contributor.advisor | Yi-Ping Hung | en
dc.contributor.author | 鞠睿陽 | zh_TW
dc.contributor.author | Rui-Yang Ju | en
dc.date.accessioned | 2025-07-11T16:16:55Z | -
dc.date.available | 2025-07-12 | -
dc.date.copyright | 2025-07-11 | -
dc.date.issued | 2025 | -
dc.date.submitted | 2025-07-07 | -
dc.identifier.citationR. Abdal, H.¬Y. Lee, P. Zhu, M. Chai, A. Siarohin, P. Wonka, and S. Tulyakov. 3davatargan: Bridging domains for personalized editable avatars. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4552–4562, 2023.
R. Abdal, Y. Qin, and P. Wonka. Image2stylegan: How to embed images into the stylegan latent space? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4432–4441, 2019.
R. Abdal, W. Yifan, Z. Shi, Y. Xu, R. Po, Z. Kuang, Q. Chen, D.¬Y. Yeung, and G. Wetzstein. Gaussian shell maps for efficient 3d human generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9441–9451, 2024.
Y. Alaluf, O. Patashnik, and D. Cohen¬Or. Restyle: A residual¬based stylegan en¬coder via iterative refinement. In Proceedings of the IEEE/CVF International Con¬ference on Computer Vision, pages 6711–6720, 2021.
Y. Alaluf, O. Patashnik, Z. Wu, A. Zamir, E. Shechtman, D. Lischinski, and D. Cohen¬Or. Third time's the charm? image and video editing with stylegan3. In European Conference on Computer Vision, pages 204–220, 2022.
Y. Alaluf, O. Tov, R. Mokady, R. Gal, and A. Bermano. Hyperstyle: Stylegan inversion with hypernetworks for real image editing. In Proceedings of the IEEE/CVF conference on computer Vision and pattern recognition, pages 18511–18521, 2022.
S. Bharadwaj, Y. Zheng, O. Hilliges, M. J. Black, and V. Fernandez­Abrevaya. Flare: Fast learning of animatable and relightable mesh avatars. ACM Trans. Graph., 42(6):15, 2023.
E. R. Chan, C. Z. Lin, M. A. Chan, K. Nagano, B. Pan, S. De Mello, O. Gallo, L. J. Guibas, J. Tremblay, S. Khamis, et al. Efficient geometry­aware 3d generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16123–16133, 2022.
K. C. Chan, X. Wang, X. Xu, J. Gu, and C. C. Loy. Glean: Generative latent bank for large­factor image super­resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14245–14254, 2021.
Y. Chen, L. Wang, Q. Li, H. Xiao, S. Zhang, H. Yao, and Y. Liu. Monogaussianavatar: Monocular gaussian point­based head avatar. In ACM SIGGRAPH 2024 Conference Papers, pages 1–9, 2024.
T. M. Dinh, A. T. Tran, R. Nguyen, and B.­S. Hua. Hyperinverter: Improving stylegan inversion via hypernetwork. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11389–11398, 2022.
G. Fox, A. Tewari, M. Elgharib, and C. Theobalt. Stylevideogan: A temporal generative model using a pretrained stylegan. In British Machine Vision Conference, 2021.
R. Gal, O. Patashnik, H. Maron, A. H. Bermano, G. Chechik, and D. Cohen­Or. Stylegan­nada: Clip­guided domain adaptation of image generators. ACM Transactions on Graphics (TOG), 41(4):1–13, 2022.
X. Gao, C. Zhong, J. Xiang, Y. Hong, Y. Guo, and J. Zhang. Reconstructing personalized semantic facial nerf models from monocular video. ACM Transactions on Graphics (TOG), 41(6):1–12, 2022.
W. Guilluy, A. Beghdadi, and L. Oudre. A performance evaluation framework for video stabilization methods. In 2018 7th European Workshop on Visual Information Processing (EUVIP), pages 1–6, 2018.
E. Härkönen, A. Hertzmann, J. Lehtinen, and S. Paris. Ganspace: Discovering interpretable gan controls. Advances in neural information processing systems, 33:9841– 9850, 2020.
Y. Hong, B. Peng, H. Xiao, L. Liu, and J. Zhang. Headnerf: A real­time nerf­based parametric head model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20374–20384, 2022.
S. Hyun and J.­P. Heo. Gsgan: Adversarial learning for hierarchical generation of 3d gaussian splats. Advances in Neural Information Processing Systems, 37:67987– 68012, 2024.
J. G. James, D. Jain, and A. Rajwade. Globalflownet: Video stabilization using deep distilled global motion estimates. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 5078–5087, 2023.
W. Jang, G. Ju, Y. Jung, J. Yang, X. Tong, and S. Lee. Stylecarigan: caricature generation via stylegan feature map modulation. ACM Transactions On Graphics (TOG), 40(4):1–16, 2021.
W. Jang, Y. Jung, H. Kim, G. Ju, C. Son, J. Son, and S. Lee. Toonify3d: Stylegan­based 3d stylized face generator. In ACM SIGGRAPH 2024 Conference Papers, pages 1–11, 2024.
Y. Jiang, Z. Huang, X. Pan, C. C. Loy, and Z. Liu. Talk­to­edit: Fine­grained facial editing via dialog. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13799–13808, 2021.
W. Jin, N. Ryu, G. Kim, S.­H. Baek, and S. Cho. Dr. 3d: Adapting 3d gans to artistic drawings. In SIGGRAPH Asia 2022 Conference Papers, pages 1–8, 2022.
T. Karras, M. Aittala, S. Laine, E. Härkönen, J. Hellsten, J. Lehtinen, and T. Aila. Alias­free generative adversarial networks. Advances in neural information processing systems, 34:852–863, 2021.
T. Karras, S. Laine, and T. Aila. A style­based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401–4410, 2019.
T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8110–8119, 2020.
V. Kazemi and J. Sullivan. One millisecond face alignment with an ensemble of regression trees. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1867–1874, 2014.
B. Kerbl, G. Kopanas, T. Leimkühler, and G. Drettakis. 3d gaussian splatting for real­time radiance field rendering. ACM Trans. Graph., 42(4):139–1, 2023.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
T. Kirschstein, S. Giebenhain, J. Tang, M. Georgopoulos, and M. Nießner. Gghead: Fast and generalizable 3d gaussian heads. In SIGGRAPH Asia 2024 Conference Papers, pages 1–11, 2024.
Y. Lan, X. Meng, S. Yang, C. C. Loy, and B. Dai. Self­supervised geometry­aware encoder for style­based 3d gan inversion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20940–20949, 2023.
L. Li, Y. Li, Y. Weng, Y. Zheng, and K. Zhou. Rgbavatar: Reduced gaussian blendshapes for online modeling of head avatars. arXiv preprint arXiv:2503.12886, 2025.
T.Li,T.Bolkart,M.J.Black,H.Li,andJ.Romero.Learningamodeloffacialshape and expression from 4d scans. ACM Trans. Graph., 36(6):194–1, 2017.
F.­L. Liu, S.­Y. Chen, Y.­K. Lai, C. Li, Y.­R. Jiang, H. Fu, and L. Gao. Deepfacevideoediting: Sketch­based deep editing of face videos. ACM Transactions on Graphics (TOG), 41(4):1–16, 2022.
S. Ma, Y. Weng, T. Shao, and K. Zhou. 3d gaussian blendshapes for head avatar animation. In ACM SIGGRAPH 2024 Conference Papers, pages 1–10, 2024.
L. Marcenaro, G. Vernazza, and C. S. Regazzoni. Image stabilization algorithms for video­surveillance applications. In Proceedings 2001 International Conference on Image Processing, volume 1, pages 349–352, 2001.
B.Mildenhall,P.P.Srinivasan,M.Tancik,J.T.Barron,R.Ramamoorthi,andR.Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021.
C. Morimoto and R. Chellappa. Evaluation of image stabilization algorithms. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, volume 5, pages 2789–2792, 1998.
U. Ojha, Y. Li, J. Lu, A. A. Efros, Y. J. Lee, E. Shechtman, and R. Zhang. Few­shot image generation via cross­domain correspondence. In Proceedings of the IEEE/ CVF Conference on Computer Vision and Pattern Recognition, pages 10743–10752, 2021.
R. Or­El, X. Luo, M. Shan, E. Shechtman, J. J. Park, and I. Kemelmacher­Shlizerman. Stylesdf: High­resolution 3d­consistent image and geometry generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13503–13513, 2022.
G. Parmar, Y. Li, J. Lu, R. Zhang, J.­Y. Zhu, and K. K. Singh. Spatially­adaptive multilayer selection for gan inversion and editing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11399–11409, 2022.
O. Patashnik, Z. Wu, E. Shechtman, D. Cohen­Or, and D. Lischinski. Styleclip: Text­driven manipulation of stylegan imagery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2085–2094, 2021.
J. N. Pinkney and D. Adler. Resolution dependent gan interpolation for controllable image synthesis between domains. In NeurIPS Workshop on Machine Learning for Creativity and Design, 2020.
S. Qian, T. Kirschstein, L. Schoneveld, D. Davoli, S. Giebenhain, and M. Nießner. Gaussianavatars: Photorealistic head avatars with rigged 3d gaussians. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20299–20309, 2024.
E. Richardson, Y. Alaluf, O. Patashnik, Y. Nitzan, Y. Azar, S. Shapiro, and D. Cohen­Or. Encoding in style: a stylegan encoder for image­to­image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2287–2296, 2021.
D. Roich, R. Mokady, A. H. Bermano, and D. Cohen­Or. Pivotal tuning for latent­based editing of real images. ACM Transactions on graphics (TOG), 42(1):1–13, 2022.
Z. Shao, Z. Wang, Z. Li, D. Wang, X. Lin, Y. Zhang, M. Fan, and Z. Wang. Splattingavatar: Realistic real­time human avatars with mesh­embedded gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1606–1616, 2024.
Y. Shen, J. Gu, X. Tang, and B. Zhou. Interpreting the latent space of gans for semantic face editing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9243–9252, 2020.
Y. Shen and B. Zhou. Closed­form factorization of latent semantics in gans. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1532–1540, 2021.
A. Tewari, M. Elgharib, F. Bernard, H.­P. Seidel, P. Pérez, M. Zollhöfer, and C. Theobalt. Pie: Portrait image embedding for semantic control. ACM Transactions on Graphics (TOG), 39(6):1–14, 2020.
O. Tov, Y. Alaluf, Y. Nitzan, O. Patashnik, and D. Cohen­Or. Designing an encoder for stylegan image manipulation. ACM Transactions on Graphics (TOG), 40(4):1– 14, 2021.
R.Tzaban,R.Mokady,R.Gal,A.Bermano,andD.Cohen­Or.Stitchitintime:Gan­based facial editing of real videos. In SIGGRAPH Asia 2022 Conference Papers, pages 1–9, 2022.
Y. Viazovetskyi, V. Ivashkin, and E. Kashin. Stylegan2 distillation for feed­forward image manipulation. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXII 16, pages 170–186. Springer, 2020.
T. Wang, Y. Zhang, Y. Fan, J. Wang, and Q. Chen. High­fidelity gan inversion for image attribute editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11379–11388, 2022.
J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative­adversarial modeling. Advances in neural information processing systems, 29, 2016.
J. Xiang, X. Gao, Y. Guo, and J. Zhang. Flashavatar: High­fidelity head avatar with efficient gaussian embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1802–1812, 2024.
J. Xu, H.­w. Chang, S. Yang, and M. Wang. Fast feature­based video stabilization without accumulative global motion estimation. IEEE Transactions on Consumer Electronics, 58(3):993–999, 2012.
Y. Xu, B. Chen, Z. Li, H. Zhang, L. Wang, Z. Zheng, and Y. Liu. Gaussian head avatar: Ultra high­fidelity head avatar via dynamic gaussians. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1931– 1941, 2024.
Y. Xu, L. Wang, X. Zhao, H. Zhang, and Y. Liu. Avatarmav: Fast 3d head avatar reconstruction using motion­aware neural voxels. In ACM SIGGRAPH 2023 Conference Papers, pages 1–10, 2023.
S. Yang, L. Jiang, Z. Liu, and C. C. Loy. Pastiche master: Exemplar­based high­resolution portrait style transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7693–7702, 2022.
S. Yang, L. Jiang, Z. Liu, and C. C. Loy. Vtoonify: Controllable high­resolution portrait video style transfer. ACM Transactions on Graphics (TOG), 41(6):1–15, 2022.
S. Yang, L. Jiang, Z. Liu, and C. C. Loy. Styleganex: Stylegan­based manipulation beyond cropped aligned faces. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 21000–21010, 2023.
X. Yao, A. Newson, Y. Gousseau, and P. Hellier. A latent transformer for disentangled face editing in images and videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13789–13798, 2021.
T. Yenamandra, A. Tewari, F. Bernard, H.­P. Seidel, M. Elgharib, D. Cremers, and C. Theobalt. i3dmm: Deep implicit 3d morphable model of human heads. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12803–12813, 2021.
J. Zhang, Y. Lan, S. Yang, F. Hong, Q. Wang, C. K. Yeo, Z. Liu, and C. C. Loy. Deformtoon3d: Deformable 3d toonification from neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9110– 9120, 2023.
R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586–595, 2018.
Y. Zheng, V. F. Abrevaya, M. C. Bühler, X. Chen, M. J. Black, and O. Hilliges. Im avatar: Implicit morphable head avatars from videos. In Proceedings of the IEEE/ CVF Conference on Computer Vision and Pattern Recognition, pages 13545–13555, 2022.
Y. Zheng, W. Yifan, G. Wetzstein, M. J. Black, and O. Hilliges. Pointavatar: Deformable point­based head avatars from videos. In Proceedings of the IEEE/ CVF Conference on Computer Vision and Pattern Recognition, pages 21057–21067, 2023.
P. Zhou, L. Xie, B. Ni, and Q. Tian. Cips­3d: A 3d­aware generator of gans based on conditionally­independent pixel synthesis. arXiv preprint arXiv:2110.09788, 2021.
J. Zhu, Y. Shen, D. Zhao, and B. Zhou. In­domain gan inversion for real image editing. In European conference on computer vision, pages 592–608. Springer, 2020.
W. Zielonka, T. Bolkart, and J. Thies. Towards metrical reconstruction of human faces. In European Conference on Computer Vision, pages 250–269, 2022.
W. Zielonka, T. Bolkart, and J. Thies. Instant volumetric head avatars. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4574–4584, 2023.
-
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97708 | -
dc.description.abstract | The advent of 3D Gaussian Blendshapes has made it possible to reconstruct animatable head avatars from monocular video in real time. Meanwhile, Toonify, a method built on the StyleGAN framework, has shown broad potential for facial image stylization. To model stylized 3D Gaussian blendshape head avatars, we propose ToonifyGB, an efficient two-stage framework. In the first stage (stylized video generation), we apply an improved StyleGAN model to stylize the input video, removing the requirement to crop and align faces at a fixed resolution and thus producing more stable and consistent stylized video. This stable output helps the subsequent stage capture facial expression details more accurately, further improving the efficiency of blendshape modeling and the quality of the resulting animation. In the second stage (Gaussian-blendshape-based 3D stylized avatar synthesis), we learn a neutral model of the stylized avatar and a set of expression blendshapes from the stylized video. These blendshapes preserve realistic facial expression characteristics and, combined with the neutral model, enable diverse and dynamic stylized avatar generation. Experiments on benchmark datasets with two styles, Arcane and Pixar, demonstrate that the proposed method generalizes well and performs strongly in both visual consistency and expression-driven animation. | zh_TW
dc.description.abstract | The introduction of 3D Gaussian blendshapes has enabled real-time reconstruction of animatable head avatars from monocular video. Toonify, a StyleGAN-based method, is widely adopted for facial image stylization. To extend Toonify to the synthesis of diverse stylized 3D head avatars using Gaussian blendshapes, we propose ToonifyGB, an efficient two-stage framework. In Stage 1 (stylized video generation), we adopt an improved StyleGAN to generate a stylized video from the input frames, removing the fixed-resolution, cropped-and-aligned face preprocessing required by standard StyleGAN. The resulting video is more stable, which allows the Gaussian blendshapes to capture high-frequency facial details more accurately and facilitates the synthesis of high-quality animations in the next stage. In Stage 2 (Gaussian blendshapes synthesis), we learn a stylized neutral head model and a set of expression blendshapes from the generated stylized video. By combining the neutral head model with the expression blendshapes, ToonifyGB can efficiently render stylized avatars with arbitrary expressions. We validate the effectiveness of ToonifyGB on benchmark datasets using two representative styles: Arcane and Pixar. | en
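The Stage 2 combination of a neutral head model with expression blendshapes, as described in the abstract, suggests the standard linear blendshape formulation: the avatar is the neutral model plus a weighted sum of per-blendshape offsets. Below is a minimal sketch in Python assuming per-Gaussian attributes stored as NumPy arrays; the names combine_blendshapes, neutral, blendshapes, and weights are illustrative and not taken from the thesis.

import numpy as np

def combine_blendshapes(neutral, blendshapes, weights):
    # Linear blendshape model (an assumed sketch, not the thesis's exact code):
    #   avatar = neutral + sum_i w_i * (B_i - neutral)
    # neutral:     (N, D) per-Gaussian attributes (e.g., 3D positions).
    # blendshapes: (K, N, D) one expression basis per blendshape.
    # weights:     (K,) expression coefficients, e.g., tracked from a driving video.
    deltas = blendshapes - neutral[None, :, :]           # offsets from the neutral model
    return neutral + np.tensordot(weights, deltas, axes=1)

# Usage: 3 expression blendshapes over 10,000 Gaussians with 3D positions.
rng = np.random.default_rng(0)
neutral = rng.normal(size=(10_000, 3))
blendshapes = neutral + 0.01 * rng.normal(size=(3, 10_000, 3))
weights = np.array([0.5, 0.2, 0.0])
avatar = combine_blendshapes(neutral, blendshapes, weights)  # shape (10000, 3)

Because the model is linear in the weights, an avatar with an arbitrary expression can be rendered by re-evaluating this weighted sum per frame, which is what makes the per-frame cost low once the neutral model and blendshapes have been learned.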
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-07-11T16:16:55Z. No. of bitstreams: 0 | en
dc.description.provenance | Made available in DSpace on 2025-07-11T16:16:55Z (GMT). No. of bitstreams: 0 | en
dc.description.tableofcontents:
Oral Examination Committee Certification
Acknowledgements
Abstract (Chinese)
Abstract
Contents
List of Figures
List of Tables
Chapter 1  Introduction
Chapter 2  Related Work
  2.1  StyleGAN-based Face Generation
  2.2  Toonification and Video Generation
  2.3  3D Face Generation
  2.4  3D Head Avatar Reconstruction
Chapter 3  Method
  3.1  ToonifyGB Framework
  3.2  Stylized Video Generation
  3.3  Gaussian Blendshapes Synthesis
  3.4  Loss Function
Chapter 4  Experiments
  4.1  Baseline and Dataset
  4.2  Evaluation Metrics
    4.2.1  Generated Video Metrics
    4.2.2  Synthesized Avatar Metrics
  4.3  Implementation Details
  4.4  Quantitative Comparison
    4.4.1  Video Stabilization
    4.4.2  3D Head Avatar
  4.5  Qualitative Comparison
  4.6  Visualization
    4.6.1  Generated Video Details
    4.6.2  Gaussian Blendshapes
  4.7  Ablation Study
    4.7.1  Facial Alignment and Cropping
    4.7.2  Source Videos for Driving Animation
  4.8  Limitation and Discussion
Chapter 5  Conclusion
References
dc.language.iso | en | -
dc.subject | 生成對抗網路 (Generative Adversarial Network) | zh_TW
dc.subject | 高斯潑濺 (Gaussian Splatting) | zh_TW
dc.subject | 高斯混合形變 (Gaussian Blendshape) | zh_TW
dc.subject | 三維頭像重建 (3D Head Reconstruction) | zh_TW
dc.subject | 三維風格化頭像 (3D Stylized Head Avatar) | zh_TW
dc.subject | 臉部動畫 (Facial Animation) | zh_TW
dc.subject | 生成式人工智慧 (Generative AI) | zh_TW
dc.subject | Facial Animation | en
dc.subject | Generative AI | en
dc.subject | Generative Adversarial Network | en
dc.subject | Gaussian Splatting | en
dc.subject | Gaussian Blendshape | en
dc.subject | 3D Head Reconstruction | en
dc.subject | 3D Stylized Head Avatar | en
dc.title | 基於StyleGAN的高斯混合形變之三維風格化頭像建模 | zh_TW
dc.title | StyleGAN-based Gaussian Blendshapes for 3D Stylized Head Avatars | en
dc.type | Thesis | -
dc.date.schoolyear | 113-2 | -
dc.description.degree | 碩士 (Master) | -
dc.contributor.oralexamcommittee | 朱宏國;葛如鈞;歐陽明;王碩仁 | zh_TW
dc.contributor.oralexamcommittee | Hung-Kuo Chu;Ju-Chun Ko;Ouhyoung Ming;Shoue-Jen Wang | en
dc.subject.keyword | 生成式人工智慧,生成對抗網路,高斯潑濺,高斯混合形變,三維頭像重建,三維風格化頭像,臉部動畫 | zh_TW
dc.subject.keyword | Generative AI, Generative Adversarial Network, Gaussian Splatting, Gaussian Blendshape, 3D Head Reconstruction, 3D Stylized Head Avatar, Facial Animation | en
dc.relation.page | 43 | -
dc.identifier.doi | 10.6342/NTU202501155 | -
dc.rights.note | 同意授權(全球公開) (Authorized for release; worldwide open access) | -
dc.date.accepted | 2025-07-07 | -
dc.contributor.author-college | 電機資訊學院 (College of Electrical Engineering and Computer Science) | -
dc.contributor.author-dept | 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia) | -
dc.date.embargo-lift | 2025-07-12 | -
Appears in Collections: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)

Files in this item:
File | Size | Format
ntu-113-2.pdf | 122.76 MB | Adobe PDF


All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
