NTU Theses and Dissertations Repository › College of Electrical Engineering and Computer Science › Graduate Institute of Networking and Multimedia
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69489
Full metadata record (DC field: value (language)):
dc.contributor.advisor: 陳彥仰
dc.contributor.author: Ming-Wei Hsu (en)
dc.contributor.author: 徐銘威 (zh_TW)
dc.date.accessioned: 2021-06-17T03:17:09Z
dc.date.available: 2018-07-19
dc.date.copyright: 2018-07-19
dc.date.issued: 2018
dc.date.submitted: 2018-07-03
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69489
dc.description.abstract (zh_TW): 如何在VR/AR的三維空間中顯示即時的對話文字一直是個重要的研究議題。我們的目標是提出在VR/AR空間中的對話視覺化方法,將語音資訊最佳化地放置在合適的位置,使之變成動態的即時字幕系統。在研究過程中,我們發現聽障者在多人對話情境下,對於這樣的視覺化方法有更急迫的需求,他們在多人對話情境與聽力正常者對話時,時常遇到諸如:無法將字幕對應上說話者、無法應對來自視野外的話語等等困難。我們針對這些困難提出了多種設計,並且透過十二位聽障使用者測試進行評估。評估的結果顯示,比起傳統的字幕顯示,使用者比較喜歡能夠明確聯繫話語與說話者的氣泡框設計。我們根據這次使用者測試的結果開發了「文字氣泡框」,一個AR語音即時辨識介面。經過我們的使用評測,在多人交談情境下,比起傳統字幕顯示系統,大多數聽障使用者比較傾向使用我們的系統。
dc.description.abstract (en): Deaf and hard-of-hearing (DHH) individuals encounter difficulties when engaged in group conversations with hearing individuals, due to factors such as simultaneous utterances from multiple speakers and speakers who may be out of view. We interviewed and co-designed with eight DHH participants to address the following challenges: 1) associating utterances with speakers, 2) ordering utterances from different speakers, 3) displaying optimal content length, and 4) visualizing utterances from out-of-view speakers. We evaluated multiple designs for each of the four challenges through a user study with twelve DHH participants. Our study results showed that participants significantly preferred speech bubble visualizations over traditional captions. These design preferences guided our development of SpeechBubbles, a real-time speech recognition interface prototype on an augmented reality head-mounted display. From our evaluations, we further demonstrated that DHH participants preferred our prototype over traditional captions for group conversations.
dc.description.provenance (en): Made available in DSpace on 2021-06-17T03:17:09Z (GMT). No. of bitstreams: 1. ntu-107-R05944023-1.pdf: 4984704 bytes, checksum: b499000f3bd35f5f8242a4ff571a0591 (MD5). Previous issue date: 2018.
dc.description.tableofcontents:
摘要 i
Abstract ii
誌謝 iii
1 Introduction 1
2 Related Work 3
2.1 Communication Assistive Systems for DHH Users 3
2.2 Video Captioning Interfaces 4
2.3 Text Bubble Visualizations 5
3 Design 6
3.1 Semi-structured Interviews and Co-design Process 7
3.1.1 Challenges 7
3.1.2 Accommodations 8
3.1.3 Ideal design for real-time captions 9
3.2 Visual Cue Details 11
3.2.1 Text bubble behavior 11
3.2.2 Sound location awareness 11
3.3 Design Goal and Proposal 12
4 User Study 14
4.1 Study Method and Procedure 14
4.1.1 Study Participants 14
4.1.2 Study Procedure 15
4.2 Study Results 16
4.2.1 Participants' thoughts on the visualization 16
4.2.2 Associating speech utterances with speakers 17
4.2.3 Appropriate amount of content to display 17
4.2.4 Displaying order of the conversation 18
4.2.5 Intuitive hint cues for out-of-view dialogue 19
5 Implementation and Evaluation 21
5.1 Interface Implementation 21
5.2 Evaluation Methodology 22
5.3 Preliminary Interface Feedback 23
5.3.1 Background preference 23
5.3.2 Quantified analysis 23
5.3.3 Qualitative feedback 23
6 Discussion and Future Work 25
6.1 Captioning with Different Languages 26
6.2 Emotion behind Captions 26
6.3 Contributions to Hearing Individuals 26
6.4 Additional Next Steps 27
7 Conclusion 28
8 Acknowledgment 29
Bibliography 30
dc.language.iso: en
dc.subject (zh_TW): 無障礙
dc.subject (zh_TW): Hololens
dc.subject (zh_TW): 文字泡
dc.subject (zh_TW): 擴增實境
dc.subject (zh_TW): 隱藏字幕
dc.subject (zh_TW): 對話框
dc.subject (zh_TW): 聽障者
dc.subject (en): text bubbles
dc.subject (en): hololens
dc.subject (en): Accessibility
dc.subject (en): augmented reality
dc.subject (en): closed captions
dc.subject (en): deaf and hard of hearing
dc.subject (en): word balloons
dc.title (zh_TW): 文字氣泡框:聽障者之多人對話情境下的視覺化設計
dc.title (en): SpeechBubbles: Enhancing Captioning Experiences for Deaf and Hard-of-Hearing People in Group Conversations
dc.type: Thesis
dc.date.schoolyear: 106-2
dc.description.degree: 碩士
dc.contributor.oralexamcommittee: 余能豪, 黃大源
dc.subject.keyword (zh_TW): 無障礙, 對話框, 文字泡, 聽障者, 隱藏字幕, 擴增實境, Hololens
dc.subject.keyword (en): Accessibility, text bubbles, word balloons, deaf and hard of hearing, closed captions, augmented reality, hololens
dc.relation.page: 34
dc.identifier.doi: 10.6342/NTU201801263
dc.rights.note: 有償授權
dc.date.accepted: 2018-07-03
dc.contributor.author-college (zh_TW): 電機資訊學院
dc.contributor.author-dept (zh_TW): 資訊網路與多媒體研究所
Appears in collections: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)

Files in this item:
File: ntu-107-1.pdf | Size: 4.87 MB | Format: Adobe PDF | Access: restricted (not authorized for public access)


Except where otherwise noted in their copyright terms, all items in this system are protected by copyright, with all rights reserved.
