Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91128
Full metadata record (DC field: value [language])
dc.contributor.advisor: 陳炳宇 [zh_TW]
dc.contributor.advisor: Bing-Yu Chen [en]
dc.contributor.author: 張凱華 [zh_TW]
dc.contributor.author: Kai-Hua Chang [en]
dc.date.accessioned: 2023-11-13T16:08:20Z
dc.date.available: 2023-11-14
dc.date.copyright: 2023-11-13
dc.date.issued: 2023
dc.date.submitted: 2023-10-04
dc.identifier.citation:
[1] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
[2] C.-H. Cheng, C.-C. Chang, Y.-H. Chen, Y.-L. Lin, J.-Y. Huang, P.-H. Han, J.-C. Ko, and L.-C. Lee. Gravitycup: A liquid-based haptics for simulating dynamic weight in virtual reality. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, pages 1–2, 2018.
[3] V. Chheang, R. Marquez-Hernandez, M. Patel, D. Rajasekaran, S. Sharmin, G. Caulfield, B. Kiafar, J. Li, and R. L. Barmaki. Towards anatomy education with generative AI-based virtual assistants in immersive virtual reality environments. arXiv preprint arXiv:2306.17278, 2023.
[4] I. Endo, K. Takashima, M. Inoue, K. Fujita, K. Kiyokawa, and Y. Kitamura. Modularhmd: A reconfigurable mobile head-mounted display enabling ad-hoc peripheral interactions with the real world. In The 34th Annual ACM Symposium on User Interface Software and Technology, pages 100–117, 2021.
[5] C. Fang, Y. Zhang, M. Dworman, and C. Harrison. Wireality: Enabling complex tangible geometries in virtual reality with worn multi-string haptics. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1–10, 2020.
[6] J. Gao, T. Shen, Z. Wang, W. Chen, K. Yin, D. Li, O. Litany, Z. Gojcic, and S. Fidler. Get3d: A generative model of high quality 3D textured shapes learned from images. Advances in Neural Information Processing Systems, 35:31841–31854, 2022.
[7] S. Günther, F. Müller, D. Schön, O. Elmoghazy, M. Mühlhäuser, and M. Schmitz. Therminator: Understanding the interdependency of visual and on-body thermal feedback in virtual reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1–14, 2020.
[8] HTC Corporation. Setting Up for the First Time, 2021. Accessed: 2021-09-01.
[9] Y.-Y. Hu, Y.-F. Jan, K.-W. Tseng, Y.-S. Tsai, H.-M. Sung, J.-Y. Lin, and Y.-P. Hung. aBio: Active bi-olfactory display using subwoofers for virtual reality. In Proceedings of the 29th ACM International Conference on Multimedia, pages 2065–2073, 2021.
[10] F. Kreuk, G. Synnaeve, A. Polyak, U. Singer, A. Défossez, J. Copet, D. Parikh, Y. Taigman, and Y. Adi. Audiogen: Textually guided audio generation. arXiv preprint arXiv:2209.15352, 2022.
[11] H. Liu, Z. Chen, Y. Yuan, X. Mei, X. Liu, D. Mandic, W. Wang, and M. D. Plumbley. Audioldm: Text-to-audio generation with latent diffusion models. arXiv preprint arXiv:2301.12503, 2023.
[12] E. Maggioni, R. Cobden, D. Dmitrenko, K. Hornbæk, and M. Obrist. Smell space: Mapping out the olfactory design space for novel interactions. ACM Transactions on Computer-Human Interaction (TOCHI), 27(5):1–26, 2020.
[13] Meta Platforms, Inc. You Got a Quest 2: Here’s How to Set It Up, 2021. Accessed: 2021-09-01.
[14] Midjourney. Midjourney - AI Powered 2D Image Generation Service, 2021. Accessed: 2021-09-01.
[15] J. Roberts, A. Banburski-Fahey, and J. Lanier. Surreal VR pong: LLM approach to game design.
[16] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
[17] A. L. Simeone, E. Velloso, and H. Gellersen. Substitutional reality: Using the physical environment to design virtual reality experiences. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pages 3307–3316, 2015.
[18] M. Slater and A. Steed. A virtual presence counter. Presence, 9(5):413–434, 2000.
[19] Y. Tao and P. Lopes. Integrating real-world distractions into virtual reality. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, pages 1–16, 2022.
[20] S.-Y. Teng, C.-L. Lin, C.-H. Chiang, T.-S. Kuo, L. Chan, D.-Y. Huang, and B.-Y. Chen. Tilepop: Tile-type pop-up prop for virtual reality. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, pages 639–649, 2019.
[21] J. Von Willich, M. Funk, F. Müller, K. Marky, J. Riemann, and M. Mühlhäuser. You invaded my tracking space! Using augmented virtuality for spotting passersby in room-scale virtual reality. In Proceedings of the 2019 on Designing Interactive Systems Conference, pages 487–496, 2019.
[22] C.-H. Wang, B.-Y. Chen, and L. Chan. Realitylens: A user interface for blending customized physical world view into virtual reality. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, pages 1–11, 2022.
[23] E. L. Waterworth and J. A. Waterworth. Focus, locus, and sensus: The three dimensions of virtual experience. CyberPsychology & Behavior, 4(2):203–213, 2001.
[24] K.-T. Yang, C.-H. Wang, and L. Chan. Sharespace: Facilitating shared use of the physical space by both VR head-mounted display and external users. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, pages 499–509, 2018.
[25] Y. Yixian, K. Takashima, A. Tang, T. Tanno, K. Fujita, and Y. Kitamura. Zoomwalls: Dynamic walls that simulate haptic infrastructure for room-scale VR world. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, pages 223–235, 2020.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91128
dc.description.abstract: This study addresses the "break in presence" problem in virtual reality (VR) experiences. Break in presence refers to interference from the real world, such as background noise, wind, or smells, that can disrupt a user's immersion and shift attention from the virtual world to real-world sensory input. To address this problem, we introduce generative artificial intelligence (GenAI) as an adaptive storyteller that weaves real-world distractions into the VR experience.

We feed real-world distraction data from sensors into GenAI to produce the scenario explanations needed to maintain immersion in the VR experience. Having learned from large amounts of data with advanced neural networks, GenAI can grasp complex relationships, contextual nuances, and subtleties of style, allowing it to generate relevant content effectively and improving the coherence and efficiency of the story.

We propose a general system architecture and design a prototype for experimental validation. The results show that GenAI has significant potential for resolving the break-in-presence problem. We also recognize several technical limitations and discuss possible ways to improve the system.

Ultimately, we believe that by seamlessly integrating the physical and virtual worlds we can create a unified reality that offers users a more immersive experience, and that the use of GenAI opens new possibilities for the future development of virtual reality technology. [zh_TW]
dc.description.abstract: Break in presence (BiP) inevitably occurs during virtual reality (VR) experiences, as real-world distractions intrude and can disrupt user immersion. However, converting diverse and unpredictable real-world distractions into VR events is challenging. In this paper, we tackle the BiP issue by introducing generative artificial intelligence (GenAI) as an adaptive storyteller.

In our work, we feed real-world distractions into GenAI in the form of measured sensor data, so that GenAI's adaptive creativity can produce credible scenario explanations that maintain immersion in the VR experience. We demonstrate how GenAI can interpret distractions and seamlessly integrate them into the virtual experience.

This paper highlights GenAI's potential for addressing BiP in VR experiences. We present a general system architecture, a prototype, and user study results. We also recognize technical limitations and discuss avenues for system improvement. We believe that seamlessly integrating the physical and virtual realms can create united realities for a more immersive experience. [en]
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-11-13T16:08:20Z. No. of bitstreams: 0 [en]
dc.description.provenance: Made available in DSpace on 2023-11-13T16:08:20Z (GMT). No. of bitstreams: 0 [en]
dc.description.tableofcontents:
Verification Letter from the Oral Examination Committee i
Acknowledgements iii
摘要 (Abstract in Chinese) v
Abstract vii
Contents ix
List of Figures xi
List of Tables xv
Chapter 1 Introduction 1
Chapter 2 Related Work 5
2.1 Presence and Break-in-Presence in VR Experience 5
2.2 GenAI in Virtual Generation and Interaction 7
Chapter 3 Background: General Integrating System 9
3.1 Interpreter 10
3.2 Integrator 11
3.3 Deployer 12
Chapter 4 Our Approach: Adaptive Storyteller by GenAI 15
4.1 Storyteller as Integrator 16
4.2 Multiple Levels of Interpreter 17
4.3 Multiple Levels of Deployer 18
Chapter 5 User Study: Different Levels of Interpreter and Deployer 21
5.1 Interpretation Levels 23
5.2 Deployment Restriction 26
5.3 Study Procedure 26
5.4 Participants 30
5.5 Analysis 30
5.6 Qualitative Feedback 35
Chapter 6 Discussion 39
Chapter 7 Conclusion 41
References 43
dc.language.iso: en
dc.subject: 虛擬實境 (virtual reality) [zh_TW]
dc.subject: 生成式人工智慧 (generative AI) [zh_TW]
dc.subject: Generative AI [en]
dc.subject: Virtual reality [en]
dc.title: 透過基於生成式人工智慧的適應性敘述者整合現實干擾於沉浸式虛擬事件 [zh_TW]
dc.title: Integrating Real World Distraction into Immersive Virtual Event through Generative AI-based Adaptive Storyteller [en]
dc.type: Thesis
dc.date.schoolyear: 112-1
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 詹力韋; 鄭龍磻; 蔡欣叡; 韓秉軒 [zh_TW]
dc.contributor.oralexamcommittee: Li-Wei Chan; Lung-Pan Cheng; Hsin-Ruey Tsai; Ping-Hsuan Han [en]
dc.subject.keyword: 虛擬實境, 生成式人工智慧 [zh_TW]
dc.subject.keyword: Virtual reality, Generative AI [en]
dc.relation.page: 46
dc.identifier.doi: 10.6342/NTU202304244
dc.rights.note: 同意授權(限校園內公開) (authorized for release; campus access only)
dc.date.accepted: 2023-10-05
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 資訊工程學系 (Department of Computer Science and Information Engineering)
dc.date.embargo-lift: 2024-03-01
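
The abstracts above describe a pipeline in which a real-world distraction, measured by a sensor, is handed to a GenAI model that narrates it as an in-fiction event (the Interpreter, Integrator, and Deployer roles listed in the table of contents). The short Python sketch below illustrates that flow under stated assumptions: the DistractionEvent fields, the prompt wording, and the generate_scenario placeholder standing in for a GenAI text model are illustrative only and are not taken from the thesis.

from dataclasses import dataclass

@dataclass
class DistractionEvent:
    kind: str          # e.g. "background_noise", "wind", "smell"
    intensity: float   # normalized sensor reading in [0, 1]
    duration_s: float  # how long the distraction lasted, in seconds

def build_prompt(event: DistractionEvent, scene: str) -> str:
    # Interpreter step: turn measured distraction data into a storytelling request.
    return (
        f"You are the narrator of a VR experience set in {scene}. "
        f"A real-world distraction just occurred: {event.kind} "
        f"(intensity {event.intensity:.2f}, lasting about {event.duration_s:.0f} s). "
        "In one or two sentences, explain this event inside the story so the "
        "player stays immersed and the fiction is never broken."
    )

def generate_scenario(prompt: str) -> str:
    # Integrator step: placeholder for a call to whichever GenAI text model
    # the system uses; a real backend would be plugged in here.
    raise NotImplementedError("plug in a GenAI text-generation backend")

def handle_distraction(event: DistractionEvent, scene: str) -> str:
    # Interpreter -> Integrator; the returned narration is what a Deployer
    # would present inside the VR scene (e.g. as a voice-over or subtitle).
    return generate_scenario(build_prompt(event, scene))

if __name__ == "__main__":
    evt = DistractionEvent(kind="background_noise", intensity=0.7, duration_s=12.0)
    print(build_prompt(evt, "an abandoned space station"))

In a running system, generate_scenario would call the chosen text-generation service, and the returned sentence would be deployed inside the VR scene as narration timed to the distraction.
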
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File: ntu-112-1.pdf
Size: 6.15 MB
Format: Adobe PDF
Access: restricted to NTU campus IP addresses (use the library's VPN service to connect from off campus)