Title: 語音離散表徵與音位的相關性分析
Correlation Analysis Between Discrete Speech Representations and Phonemes
Authors: 陳建成
Chien-cheng Chen
Advisor: 李琳山
Lin-shan Lee
Keywords: Speech Foundation Model, Discrete Unit, Speech Representation, Phonology, Correlation
Publication Year: 2025
Degree: Master's
Abstract:
With recent advances in speech technology, powerful speech foundation models have been widely applied to various speech tasks. By discretizing the representations these models produce, for example with clustering algorithms, large-scale data and models have yielded a variety of discrete representations that come close to text; a framework known as "Textless Natural Language Processing (NLP)", which approximates text without using any actual text, has even emerged.
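For context on the pipeline this paragraph describes, below is a minimal sketch of how discrete units are typically extracted in textless-NLP work: frame-level features from one hidden layer of HuBERT are quantized with k-means. The checkpoint name, layer index, and cluster count are illustrative assumptions, not the thesis's exact configuration.

```python
# Minimal sketch (not the thesis's exact setup): HuBERT features -> k-means
# "discrete units". Checkpoint, layer, and cluster count are assumptions.
import numpy as np
import torch
from sklearn.cluster import KMeans
from transformers import AutoFeatureExtractor, HubertModel

extractor = AutoFeatureExtractor.from_pretrained("facebook/hubert-base-ls960")
model = HubertModel.from_pretrained("facebook/hubert-base-ls960").eval()

def hubert_features(waveform, sample_rate=16000, layer=6):
    """Return frame-level features of shape (frames, dim) from one hidden layer."""
    inputs = extractor(waveform, sampling_rate=sample_rate, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer].squeeze(0).numpy()

# Fit k-means on pooled features (here one fake 10 s clip stands in for a
# corpus), then map every ~20 ms frame to its nearest centroid, i.e. its unit.
feats = hubert_features(np.random.randn(160000).astype(np.float32))
kmeans = KMeans(n_clusters=100, n_init=10).fit(feats)
units = kmeans.predict(feats)
print(units[:20])  # repeated unit IDs reflect slowly changing acoustics
```

Raising `n_clusters` here corresponds to the abstract's later observation that more clusters help capture finer-grained speech features.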
However, how closely these discrete representations of speech correspond to human understanding of speech or text remains an open question. To answer it, this thesis draws on phonological knowledge and takes the "phoneme", the humanly perceived unit that is closest to text and most closely tied to the speech signal, as the reference. We analyze two types of discrete speech representations: the "discrete unit", obtained through clustering algorithms, and the "acoustic piece", formed by recombining discrete units with a tokenization algorithm. The thesis compares the correlation between phonemes and these discrete representations, and investigates whether they can effectively identify pronunciation patterns that are close to human cognition.
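As a rough illustration of the second representation type, the sketch below forms "acoustic pieces" by applying BPE-style merges to deduplicated discrete-unit sequences. This is a toy re-implementation for clarity; the thesis's actual tokenization algorithm may differ.

```python
# Toy sketch: "acoustic pieces" as BPE-style merges over discrete-unit
# sequences. Corpus and merge count are made up for illustration.
from collections import Counter

def dedup(units):
    """Collapse consecutive repeats (5 5 17 17 3 -> 5 17 3)."""
    return [u for i, u in enumerate(units) if i == 0 or u != units[i - 1]]

def bpe_merges(corpus, num_merges):
    """Greedily merge the most frequent adjacent pair into a new piece."""
    corpus = [tuple(dedup(seq)) for seq in corpus]
    merges = []
    for _ in range(num_merges):
        pairs = Counter(p for seq in corpus for p in zip(seq, seq[1:]))
        if not pairs:
            break
        best, _count = pairs.most_common(1)[0]
        merges.append(best)
        merged = []
        for seq in corpus:
            out, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == best:
                    out.append(best)  # the pair becomes one acoustic piece
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            merged.append(tuple(out))
        corpus = merged
    return merges, corpus

corpus = [[5, 5, 17, 3, 42], [5, 17, 3, 42, 42], [5, 17, 9]]
merges, pieces = bpe_merges(corpus, num_merges=2)
print(merges)   # most frequent unit pairs, e.g. (5, 17)
print(pieces)   # utterances re-tokenized into acoustic pieces
```

Collapsing repeats first means each piece groups a recurring sequence of distinct units, which is what lets pieces span phoneme-sized or larger patterns.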
Our study of discrete units shows that HuBERT is the most suitable model for obtaining discrete representations, and that increasing the number of clusters helps capture finer-grained speech features. Our subsequent study of acoustic pieces shows that they offer another effective way to discretize speech representations besides clustering. In addition, analyzing the results by phoneme type, we observe that plosives and affricates are hard for discrete speech representations to classify accurately, whereas fricatives, diphthongs, and approximants are relatively easy for them to identify.
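One common way to quantify the phoneme/unit correlation the abstract refers to is to score frame-aligned phoneme labels against unit assignments with cluster purity and normalized mutual information. The sketch below assumes such frame-level alignments are available (e.g., from forced alignment); the thesis's exact metrics may differ.

```python
# Minimal sketch: phoneme/unit correlation via cluster purity and NMI.
# Frame-aligned phoneme labels are an assumed input.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def unit_purity(phonemes, units):
    """Fraction of frames whose unit's majority phoneme matches their own."""
    phonemes, units = np.asarray(phonemes), np.asarray(units)
    correct = 0
    for u in np.unique(units):
        labels = phonemes[units == u]
        _vals, counts = np.unique(labels, return_counts=True)
        correct += counts.max()  # frames agreeing with the unit's majority phoneme
    return correct / len(phonemes)

phonemes = ["s", "s", "a", "a", "a", "t", "t", "s"]   # frame-level labels
units    = [ 3,   3,   7,   7,   2,   9,   9,   3 ]   # frame-level discrete units
print("purity:", unit_purity(phonemes, units))
print("NMI:", normalized_mutual_info_score(phonemes, units))
```

High purity means a unit almost always co-occurs with a single phoneme; phoneme categories the abstract reports as harder, such as plosives, would show lower per-category scores under metrics like these.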
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96551
DOI: 10.6342/NTU202500258
Fulltext Rights: Authorized for open access (worldwide)
Embargo Lift Date: 2025-02-20
Appears in Collections: 電信工程學研究所 (Graduate Institute of Communication Engineering)

Files in This Item:
File: ntu-113-1.pdf | Size: 9.64 MB | Format: Adobe PDF


