Use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91127

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 陳炳宇 | zh_TW |
| dc.contributor.advisor | Bing-Yu Chen | en |
| dc.contributor.author | 洪佳生 | zh_TW |
| dc.contributor.author | Chia-Sheng Hung | en |
| dc.date.accessioned | 2023-11-13T16:08:03Z | - |
| dc.date.available | 2024-03-16 | - |
| dc.date.copyright | 2023-11-13 | - |
| dc.date.issued | 2023 | - |
| dc.date.submitted | 2023-10-06 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91127 | - |
| dc.description.abstract | 混合實境(MR)音景將真實世界的聲音與來自聽覺設備的虛擬音效結合,呈現複雜的聽覺資訊,往往難以區分與辨識。這對視力有挑戰或視力受損的個體來說尤其具挑戰性,因為他們在日常生活中仰賴聲音和描述。為了瞭解如何處理複雜的聲音資訊,我們分析了視力受損社群內的網路討論區貼文,辨識出常見的情境、需求和所期望的解決方案。我們綜合了這些結果,並提出了「Sound Blending」的方法,旨在提高MR聲音感知,其中包含六種聲音操作:環境建構、特徵轉移、音效生成、優先處理、音源定位和風格調整。為了評估聲音混合的效果,我們進行了一項使用者實踐研究,共有18名視力受損的參與者參與,涵蓋三種模擬的MR情境,參與者需要識別複雜音景中的特定聲音。我們發現聲音混合有助提高MR聲音感知並減輕認知負擔。最後,我們開發了三個實際應用示例,以證明聲音混合的實用性。 | zh_TW |
| dc.description.abstract | Mixed-reality (MR) soundscapes blend real-world sound with virtual audio from hearing devices, presenting intricate auditory information that is hard to discern and differentiate. This is particularly challenging for blind or visually impaired individuals, who rely on sounds and descriptions in their everyday lives. To understand how complex audio information is consumed, we analyzed online forum posts within the blind community, identifying prevailing challenges, needs, and desired solutions. We synthesized the results and proposed Sound Blending for increasing MR sound awareness, which includes six sound manipulations: Ambience Builder, Feature Shifter, Earcon Generator, Prioritizer, Spatializer, and Stylizer. To evaluate the effectiveness of sound blending, we conducted a user enactment study with 18 blind participants across three simulated MR scenarios, where participants identified specific sounds within intricate soundscapes. We found that sound blending increased MR sound awareness and reduced cognitive load. Finally, we developed three real-world example applications to demonstrate the practicality of sound blending. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-11-13T16:08:03Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2023-11-13T16:08:03Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Acknowledgements i; 摘要 iii; Abstract v; Contents vii; List of Figures xi; List of Tables xv; Chapter 1 Introduction 1; Chapter 2 Related work 7; 2.1 Blending Real and Virtual World 7; 2.2 Soundscape Formation and Personalization 8; 2.3 Accessibility of Mixed Reality 9; Chapter 3 Understanding practices of consuming complex sounds 11; 3.1 Method 11; 3.2 Findings 12; 3.2.1 Everyday Scenarios to Consume Complex Audio Information 12; 3.2.2 Retaining Real-World Awareness in Noisy Environment 13; 3.2.3 Adjusting Sound Characteristics of Important Sounds that are in Conflict with Each Other 14; 3.2.4 Distributing Sound Sources for Better Distinction 15; 3.2.5 Customizing or Augmenting Existing Sound Library 15; 3.3 Summary 16; Chapter 4 Sound Blending Manipulations for Accessible MR Awareness 17; Chapter 5 User Enactment Study 21; 5.1 Participants 22; 5.2 Simulated Scenarios and Manipulations 22; 5.2.1 RW-Focused Scenario: Navigating on the street with a white cane and voice navigation guidance 23; 5.2.2 VR-Focused Scenario: Consuming an audio handbook while working at the help desk 26; 5.2.3 Fully-Mixed Scenario: Attending a hybrid conference 27; 5.3 Technical Details in Unity Implementation 30; 5.4 Apparatus 31; 5.5 Tasks and Procedure 31; 5.6 Dependent Measures and Data Analysis 33; 5.7 Results 33; 5.7.1 RQ1: How do sound manipulations affect participants' performance compared to full transparency and noise cancellation? 34; 5.7.2 RQ2: How do the different scenarios with varying emphases on reality and virtuality affect participants' performance? 36; 5.7.3 RQ3: How do the conditions affect participants' performance differently across the scenarios? 38; 5.7.4 RQ4: How do sound manipulations affect participants' cognitive load compared to full transparency and noise cancellation? 40; 5.7.5 RQ5: How are participants' experiences and ways to further customize their soundscape for each scenario? 40; 5.8 Additional Results from Sighted Participants 42; Chapter 6 Example Applications with Sound Blending 43; 6.1 Accessible Online Meeting Application 43; 6.2 Content-Aware MR Image Exploration 44; 6.3 Context-Aware Outdoor Navigation 45; Chapter 7 Discussion and Future Work 47; 7.1 From Simulation to Practical Applications 47; 7.2 Towards Sound-Aware Description Manipulations 49; 7.3 Customizable Sound Manipulations 49; 7.4 Generalizing Results to Broader Groups with Different Sensory Modalities 50; Chapter 8 Conclusion 53; References 55 | - |
| dc.language.iso | en | - |
| dc.subject | AR/VR | zh_TW |
| dc.subject | 混合實境 | zh_TW |
| dc.subject | 聲音感知 | zh_TW |
| dc.subject | 無障礙設計 | zh_TW |
| dc.subject | AR/VR | en |
| dc.subject | accessibility | en |
| dc.subject | sound awareness | en |
| dc.subject | mixed reality | en |
| dc.title | Sound Blending: 探索聲音操控以實現無障礙混合現實感知 | zh_TW |
| dc.title | Sound Blending: Exploring Sound Manipulations for Accessible Mixed-Reality Awareness | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-1 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 詹力韋;蔡欣叡;鄭龍磻;韓秉軒 | zh_TW |
| dc.contributor.oralexamcommittee | Li-Wei Chan;Hsin-Ruey Tsai;Lung-Pan Cheng;Ping-Hsuan Han | en |
| dc.subject.keyword | AR/VR, 混合實境, 聲音感知, 無障礙設計 | zh_TW |
| dc.subject.keyword | AR/VR, mixed reality, sound awareness, accessibility | en |
| dc.relation.page | 67 | - |
| dc.identifier.doi | 10.6342/NTU202304253 | - |
| dc.rights.note | 同意授權(限校園內公開) | - |
| dc.date.accepted | 2023-10-06 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 資訊網路與多媒體研究所 | - |
| dc.date.embargo-lift | 2027-07-30 | - |
| Appears in Collections: | 資訊網路與多媒體研究所 | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-112-1.pdf (Restricted Access) | 16.71 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
