Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98329

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 洪一平 | zh_TW |
| dc.contributor.advisor | Yi-Ping Hung | en |
| dc.contributor.author | 吳欣翰 | zh_TW |
| dc.contributor.author | Xin-Han Wu | en |
| dc.date.accessioned | 2025-08-01T16:14:33Z | - |
| dc.date.available | 2025-08-02 | - |
| dc.date.copyright | 2025-08-01 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-07-30 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98329 | - |
| dc.description.abstract | 近年來,隨著網路技術的迅速發展,以及AR、VR與MR等沉浸式科技的日益成熟,元宇宙快速興起,並廣泛應用於各個領域。其中,虛擬實境為關鍵技術之一,透過虛擬實境技術,可建構沉浸式的虛擬展廳,使觀眾能夠自由移動於三維空間,並透過控制器與虛擬物件互動。然而在這樣的環境中,使用者常因操作不熟悉或體驗時間有限而影響觀展品質。為提升使用者體驗,虛擬展廳設計需納入有效的使用者引導系統,協助觀眾理解展覽流程與互動方式,引導觀眾探索虛擬空間,進而增強使用者體驗。本篇研究以使用者為中心針對元宇宙虛擬展廳的互動觀展方式,提出一套設計考量框架,並分析使用者於虛擬展廳中的互動行為。我們針對展覽中的互動式引導,探討並比較視覺引導與AI語音引導外觀兩種方式的使用效果。此外,我們亦提出一套導入風格轉換技術的展品編輯系統,協助策展者創造出多元且具視覺變化的展覽形式,提供使用者在虛擬空間中進行個人化風格的表達與體驗,提升藝術品的呈現深度與觀眾的理解程度。最後,我們比較了桌面版元宇宙平台中的互動設計,並實作一套具備聊天功能、虛擬替身與多視角支援的原型系統。本文總結了元宇宙虛擬展廳應用開發過程中所需的跨領域設計考量,並針對虛擬環境下的互動體驗提出建議。 | zh_TW |
| dc.description.abstract | In recent years, with the rapid development of internet technology and the increasing maturity of immersive technologies such as AR, VR, and MR, the metaverse has quickly emerged and been widely applied across various fields. Among these technologies, Virtual Reality (VR) is a key component: it enables the construction of immersive virtual galleries in which viewers can freely navigate three-dimensional space and interact with virtual objects using controllers. In such environments, however, the viewing experience often suffers because users are unfamiliar with the controls or have limited time to explore. To enhance user experience, the design of virtual galleries must therefore incorporate effective user guidance systems that help visitors understand the exhibition flow and the available interactions, guiding their exploration of the virtual space. We adopt a user-centered approach to propose a design consideration framework for interaction and exhibition viewing in metaverse-based virtual galleries, and we analyze users' interactive behaviors in such environments. First, we examine and compare two interactive guidance methods for exhibitions: visual guidance and conversational agents. Second, we propose an exhibition editing system with style transfer capabilities, supporting curators in creating visually diverse exhibitions and allowing users to express personalized styles in virtual environments. Finally, we compare interaction designs across desktop-based metaverse platforms and implement a prototype system featuring chat functions, virtual avatars, and multi-perspective support. This thesis concludes by summarizing the cross-disciplinary design considerations involved in developing metaverse virtual gallery applications and offers concrete recommendations for enhancing interactive experiences in virtual environments. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-08-01T16:14:33Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-08-01T16:14:33Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Verification Letter from the Oral Examination Committee i
Acknowledgments iii
摘要 v
ABSTRACT vii
TABLE OF CONTENTS ix
TABLE OF FIGURES xiii
TABLE OF TABLES xvii
CHAPTER 1 INTRODUCTION 1
1.1 Background and Motivation 1
1.2 Outline of this Research 3
CHAPTER 2 RELATED WORK 4
2.1 Interaction Methods in Metaverse 4
2.2 Guidance in Metaverse 5
2.2.1 Guidance in Video Games 5
2.2.2 Guidance in AR, VR, and MR 6
2.3 Virtual Agents (VAs) 7
2.3.1 The Appearance of Virtual Agents 8
2.3.2 The Interactivity of Virtual Agents 9
2.3.3 Using Virtual Agents in Guiding Experience 10
2.4 Using Large Language Models in VR 11
2.5 VR in Exhibitions 12
2.6 Summary 12
CHAPTER 3 User-Centered Guidance in Metaverse 14
3.1 Visual Guidance in Interactive Virtual Reality 14
3.1.1 Introduction 14
3.1.2 Design Consideration 16
3.1.3 User Study 20
3.1.4 Experimental Results 26
3.1.5 Discussion 31
3.1.6 Summary 34
3.2 Conversational Agents for Guiding Users in Interactive Virtual Reality 36
3.2.1 Introduction 36
3.2.2 System Design 39
3.2.3 User Study 45
3.2.4 Discussion 55
3.2.5 Summary 59
CHAPTER 4 Applications of Style Transfer for Personalization in Metaverse 61
4.1 Introduction 61
4.2 System Design 63
4.2.1 Object Type 63
4.2.2 Model Selection 64
4.2.3 Style Selection 69
4.2.4 User Interface 70
4.2.5 Transferred Results 70
4.3 User Study 71
4.3.1 Tasks 72
4.3.3 Procedure 74
4.3.4 Participants 75
4.3.5 Apparatus 75
4.4 Results 75
4.5 Discussion 78
4.6 Summary 80
CHAPTER 5 Multi-User Interaction in Metaverse 82
5.1 Introduction 82
5.2 System Design 83
5.2.1 System Architecture 83
5.2.2 User Interface 84
5.2.3 Components and Features 85
5.3 User Study and Results 89
5.3.1 Study of Visual Perception of Moving in Virtual Space 89
5.3.2 Study of Auditory Experience of Multi-User Interaction in Virtual Space 96
5.4 Summary 106
CHAPTER 6 Conclusion and Future Work 108
6.1 Conclusion 108
6.2 Future Research Direction 111
LIST OF REFERENCES 112 | - |
| dc.language.iso | zh_TW | - |
| dc.subject | 虛擬展廳 | zh_TW |
| dc.subject | 使用者經驗 | zh_TW |
| dc.subject | 元宇宙 | zh_TW |
| dc.subject | 互動設計 | zh_TW |
| dc.subject | 視覺引導 | zh_TW |
| dc.subject | User Experience | en |
| dc.subject | Interaction Design | en |
| dc.subject | Visual Guidance | en |
| dc.subject | Virtual Gallery | en |
| dc.subject | Metaverse | en |
| dc.title | 以使用者為中心的元宇宙互動設計 | zh_TW |
| dc.title | User-Centric Design for Interactions in Metaverse | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-2 | - |
| dc.description.degree | 博士 | - |
| dc.contributor.oralexamcommittee | 歐陽明;李明穗;黃彥男;詹力韋 | zh_TW |
| dc.contributor.oralexamcommittee | Ming Ouhyoung;Ming-Sui Lee;Yennun Huang;Liwei Chan | en |
| dc.subject.keyword | 使用者經驗,虛擬展廳,視覺引導,互動設計,元宇宙 | zh_TW |
| dc.subject.keyword | User Experience, Virtual Gallery, Visual Guidance, Interaction Design, Metaverse | en |
| dc.relation.page | 126 | - |
| dc.identifier.doi | 10.6342/NTU202501930 | - |
| dc.rights.note | 未授權 | - |
| dc.date.accepted | 2025-07-31 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 資訊網路與多媒體研究所 | - |
| dc.date.embargo-lift | N/A | - |
| Appears in Collections: | 資訊網路與多媒體研究所 |

Files in This Item:

| File | Size | Format |
|---|---|---|
| ntu-113-2.pdf (Restricted Access) | 38.59 MB | Adobe PDF |

All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
