Please use this Handle URI to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98729
Full metadata record
DC Field | Value | Language
dc.contributor.advisor: 陳彥賓 (zh_TW)
dc.contributor.advisor: Yan-Bin Chen (en)
dc.contributor.author: 許維仁 (zh_TW)
dc.contributor.author: Wei-Ren Hsu (en)
dc.date.accessioned: 2025-08-18T16:15:43Z
dc.date.available: 2025-08-19
dc.date.copyright: 2025-08-18
dc.date.issued: 2025
dc.date.submitted: 2025-08-06
dc.identifier.citation:
T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners, 2020.
J. H. Clark, D. Garrette, I. Turc, and J. Wieting. CANINE: Pre-training an efficient tokenization-free encoder for language representation. Transactions of the Association for Computational Linguistics, 10:73–91, 2022.
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding, 2019.
A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021.
P. Gage. A new algorithm for data compression. C Users Journal, 1994. http://www.pennelynn.com/Documents/CUJ/HTML/94HTML/19940045.HTM.
Google. Noto Sans. https://fonts.google.com/noto/specimen/Noto+Sans, 2024. Version used: Regular. Accessed: July 2025.
A. Gourabathina, W. Gerych, E. Pan, and M. Ghassemi. The medium is the message: How non-clinical information shapes clinical decisions in LLMs. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’25, pages 1805–1828, New York, NY, USA, 2025. Association for Computing Machinery.
I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, 2017.
D. P. Kingma and M. Welling. Auto-encoding variational Bayes, 2022.
T. Kudo and J. Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing, 2018.
Z. Sun, X. Li, X. Sun, Y. Meng, X. Ao, Q. He, F. Wu, and J. Li. ChineseBERT: Chinese pretraining enhanced by glyph and Pinyin information. In C. Zong, F. Xia, W. Li, and R. Navigli, editors, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2065–2075, Online, Aug. 2021. Association for Computational Linguistics.
Y. Tai, X. Liao, A. Suglia, and A. Vergari. PIXAR: Auto-regressive language modeling in pixel space, 2024.
H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample. LLaMA: Open and efficient foundation language models, 2023.
L. Xue, A. Barua, N. Constant, R. Al-Rfou, S. Narang, M. Kale, A. Roberts, and C. Raffel. ByT5: Towards a token-free future with pre-trained byte-to-byte models, 2022.
Y. Zhang. A better autoencoder for image: Convolutional autoencoder. In ICONIP17-DCEC. Available online: http://users.cecs.anu.edu.au/Tom.Gedeon/conf/ABCs2018/paper/ABCs2018_paper_58.pdf (accessed on 23 March 2017), 2018.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98729
dc.description.abstract (zh_TW): 以 BERT 為代表的子詞語言模型,在自然語言理解任務中表現優異,但在面對需要字符級推理的任務時(如字符計數或字串比對),往往無法精確捕捉細微的字符級特徵。相比之下,字符級語言模型能夠細緻地建構這類訊息,但訓練字符級語言模型會帶來極高的運算負擔。本論文提出一套輕量化方法,將字符級視覺特徵融合至子詞單位的向量表示中。我們將每個子詞轉換為字形序列,並透過自編碼器 (AE) 與 beta-變分自編碼器 (beta-VAE) 進行視覺特徵的壓縮與表徵學習。這些特徵經由線性映射或多層感知器 (MLP) 映射至 BERT 的嵌入空間,並與原始子詞向量結合,僅需修改輸入層即可完成整合,無須調整原始模型架構。我們將此方法應用於一項字符數預測任務,並以遮罩語言模型 (Masked Language Modeling, MLM) 的形式進行訓練與評估。實驗涵蓋 460 萬筆樣本,結果顯示加入視覺特徵後,模型表現皆優於基準模型。透過 beta-變分自編碼器所取得之特徵在僅使用線性投影的情況下即可展現明顯效果,而透過自編碼器取得之特徵則需搭配多層感知器才能達到最佳表現。實驗結果驗證了將額外的視覺特徵注入子詞向量,能有效提升模型的字符推理能力,彌補子詞建模與字符細節之間的落差。本方法使模型能以極小的計算成本,學習到字符層級的特徵表現,適用於如計數、比對等需精細字元辨識的任務。
dc.description.abstract (en): Subword-based language models like BERT achieve strong performance in natural language understanding but often miss fine-grained character-level cues essential for symbolic reasoning tasks like counting or string matching. While character-level representations can capture these subtle patterns more effectively, incorporating them directly into large models poses significant computational challenges. This thesis presents a lightweight framework to enrich subword token embeddings with character-level visual features. We render each token as a glyph sequence and train autoencoders or beta-VAEs to learn compact visual representations. These features are projected into BERT’s embedding space via linear or MLP layers and added to the original embeddings, requiring no architectural changes beyond the input layer. We evaluate this integration on a character-counting task framed as masked language modeling. Experiments on 4.6 million examples show consistent improvements over the baseline. beta-VAE-based features are effective even with linear projections, while AE-based features benefit from non-linear mappings. Our findings indicate that augmenting subword embeddings with additional visual features significantly improves symbolic reasoning abilities, bridging the gap between subword efficiency and character-level precision. This approach enables models to capture fine-grained character patterns crucial for tasks like counting, without incurring the computational costs of full character-level architectures.
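The abstract describes the integration only at a high level. Purely as an illustration of that idea, the following is a minimal PyTorch sketch, not the thesis implementation: the GlyphEncoder, the glyph bitmap size, the latent dimension, the mean-pooling over characters, and the MLP shape are all assumptions. Only the overall pattern follows the description above: encode rendered glyphs, project the resulting feature into the embedding space through a linear or MLP head, and add it to the subword embedding at the input layer.

```python
# Illustrative sketch (not the thesis code): add projected glyph-based visual
# features to subword embeddings at the input layer. All dimensions and class
# names below are assumptions chosen for a self-contained, runnable example.
import torch
import torch.nn as nn

GLYPH_H, GLYPH_W = 32, 32   # rendered glyph bitmap size (assumed)
MAX_CHARS = 8               # glyph images kept per subword token (assumed)
LATENT_DIM = 64             # AE / beta-VAE latent size (assumed)
HIDDEN_DIM = 768            # BERT-base embedding size


class GlyphEncoder(nn.Module):
    """Toy convolutional encoder standing in for a pre-trained AE / beta-VAE encoder."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, LATENT_DIM),
        )

    def forward(self, glyphs):                      # glyphs: (batch, chars, 1, H, W)
        b, c, *_ = glyphs.shape
        z = self.net(glyphs.flatten(0, 1))          # encode each glyph independently
        return z.view(b, c, LATENT_DIM).mean(dim=1)  # pool over characters -> (batch, LATENT_DIM)


class VisuallyEnrichedEmbedding(nn.Module):
    """Adds a projected glyph feature to the ordinary token embedding."""

    def __init__(self, vocab_size=30522, use_mlp=True):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, HIDDEN_DIM)
        self.glyph_enc = GlyphEncoder()
        self.project = (
            nn.Sequential(nn.Linear(LATENT_DIM, HIDDEN_DIM), nn.ReLU(),
                          nn.Linear(HIDDEN_DIM, HIDDEN_DIM))
            if use_mlp else nn.Linear(LATENT_DIM, HIDDEN_DIM)
        )

    def forward(self, token_ids, glyphs):
        # token_ids: (batch, seq); glyphs: (batch, seq, chars, 1, H, W)
        visual = self.glyph_enc(glyphs.flatten(0, 1))              # (batch*seq, LATENT_DIM)
        visual = self.project(visual).view(*token_ids.shape, HIDDEN_DIM)
        return self.token_emb(token_ids) + visual                  # input-layer fusion only


if __name__ == "__main__":
    emb = VisuallyEnrichedEmbedding()
    ids = torch.randint(0, 30522, (2, 5))
    imgs = torch.rand(2, 5, MAX_CHARS, 1, GLYPH_H, GLYPH_W)
    print(emb(ids, imgs).shape)  # torch.Size([2, 5, 768])
```

In the setup described by the abstract, the glyph encoder would be pre-trained as an AE or beta-VAE on rendered glyph images and kept fixed, so only the projection (and, during fine-tuning, BERT itself) is learned; everything past the input layer is left unchanged.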
dc.description.provenance (en): Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-08-18T16:15:43Z. No. of bitstreams: 0
dc.description.provenance (en): Made available in DSpace on 2025-08-18T16:15:43Z (GMT). No. of bitstreams: 0
dc.description.tableofcontents:
Acknowledgements i
摘要 (Abstract in Chinese) iii
Abstract v
Contents vii
List of Figures xi
List of Tables xiii
Chapter 1 Introduction 1
1.1 Background 1
1.2 Contributions 3
1.3 Thesis Structure 3
Chapter 2 Literature Review 5
2.1 Limitations of Subword Tokenization 5
2.2 Character and Byte-Level Modeling 5
2.3 Character-Level Enhancements for Chinese NLP 6
2.4 Vision-Based Text Modeling 7
2.5 Technical Challenges and Research Motivation 7
Chapter 3 Methodology 9
3.1 Autoencoders 10
3.2 Variational Autoencoders 11
3.3 Injecting Additional Character-Level Features into Embeddings 14
Chapter 4 Experiments 19
4.1 Overview and Motivation 19
4.2 Building Character-Level Visual Features for Subword Tokens 20
4.2.1 Dataset and Preprocessing 20
4.2.2 Visual Autoencoder and β-VAE Architecture 21
4.3 Integrating Visual Embeddings into BERT 25
4.4 Evaluating Visual Integration on Fine-grained Token Tasks 28
4.4.1 Real English Word Counting 28
Chapter 5 Conclusion and Discussion 33
5.1 Discussion and Limitations 33
5.1.1 Representation Quality and Structural Feature Space 33
5.1.2 Computational and Practical Constraints 34
5.2 Future Work 34
5.3 Conclusion 35
5.4 Overall Reflection 36
References 37
Appendix A — Detailed Results and Statistical Tests 41
A.0.1 Per-seed Evaluation Accuracy (β-VAE) 41
A.0.2 Per-seed Evaluation Accuracy (AE) 41
Appendix B — List of Font Files Utilized in Glyph Rendering 43
dc.language.iso: en
dc.subject: 字形結構 (zh_TW)
dc.subject: 自編碼器 (zh_TW)
dc.subject: 變分自編碼器 (zh_TW)
dc.subject: 語言模型 (zh_TW)
dc.subject: 模型微調 (zh_TW)
dc.subject: Beta-變分自編碼器 (zh_TW)
dc.subject: 詞嵌入 (zh_TW)
dc.subject: Fine-Tuning (en)
dc.subject: Embeddings (en)
dc.subject: BERT (en)
dc.subject: Variational Autoencoder (en)
dc.subject: Beta-VAE (en)
dc.subject: Autoencoder (en)
dc.subject: Glyph Structure (en)
dc.title: 以AE或Beta-VAE外加特徵方式提升BERT語言模型的內嵌訊息 (zh_TW)
dc.title: Enhancing BERT Embeddings Using AE or Beta-VAE–Processed Additional Features (en)
dc.type: Thesis
dc.date.schoolyear: 113-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 郭柏志; 藍俊宏; 張漢利 (zh_TW)
dc.contributor.oralexamcommittee: Po-Chih Kuo; Jakey Blue; Hendri Sutrisno (en)
dc.subject.keyword: 自編碼器, 變分自編碼器, Beta-變分自編碼器, 語言模型, 模型微調, 詞嵌入, 字形結構 (zh_TW)
dc.subject.keyword: Autoencoder, Beta-VAE, Variational Autoencoder, BERT, Embeddings, Glyph Structure, Fine-Tuning (en)
dc.relation.page: 44
dc.identifier.doi: 10.6342/NTU202503286
dc.rights.note: 同意授權(全球公開) [authorization granted, worldwide open access]
dc.date.accepted: 2025-08-09
dc.contributor.author-college: 共同教育中心 (Center for General Education)
dc.contributor.author-dept: 統計碩士學位學程 (Master's Program in Statistics)
dc.date.embargo-lift: 2025-08-19
Appears in Collections: 統計碩士學位學程 (Master's Program in Statistics)

Files in this item:
File: ntu-113-2.pdf (3.29 MB, Adobe PDF)


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.
