NTU Theses and Dissertations Repository > College of Science > Institute of Applied Mathematical Sciences
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100929

Full metadata record (DC field, value, language):
dc.contributor.advisor (zh_TW): 王振男
dc.contributor.advisor (en): Jenn-Nan Wang
dc.contributor.author (zh_TW): 戴佑諼
dc.contributor.author (en): Yu-Hsuan Tai
dc.date.accessioned: 2025-11-26T16:08:14Z
dc.date.available: 2025-11-27
dc.date.copyright: 2025-11-26
dc.date.issued: 2025
dc.date.submitted: 2025-11-10
dc.identifier.citation:
[1] D. M. Blei, A. Kucukelbir, and J. D. McAuliffe. Variational inference: A review for statisticians, 2018. arXiv:1601.00670v9 [stat.CO], version 9, May 9, 2018.
[2] B. H. Brown. Electrical impedance tomography (EIT): a review. Journal of Medical Engineering & Technology, 27(3):97–108, May 2003.
[3] M. Fil, M. Mesinovic, M. Morris, and J. Wildberger. β-VAE reproducibility: Challenges and extensions, 2021. arXiv:2112.14278v2 [cs.LG], version 2, Dec 30, 2021.
[4] X. Hou, L. Shen, K. Sun, and G. Qiu. Feature perceptual loss for variational autoencoder, 2016. arXiv:1610.00291 [cs.CV], Oct 2016.
[5] D. P. Kingma and M. Welling. Auto-encoding variational Bayes, 2022. arXiv:1312.6114v11 [stat.ML], version 11, Dec 10, 2022.
[6] M. Razghandi, H. Zhou, M. Erol-Kantarci, and D. Turgut. Variational autoencoder generative adversarial network for synthetic data generation in smart home, 2022. arXiv:2201.07387 [cs.LG], Jan 2022.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100929
dc.description.abstract (zh_TW): 本研究探討變分自編碼器(Variational Autoencoder, VAE)在不同資料型態上的應用,包含一維與二維高斯混合模型,以及從簡單圖像到複雜照片的影像資料訓練,藉此觀察 VAE 對不同資料結構的表現差異。為評估其於電阻抗斷層掃描(Electrical Impedance Tomography, EIT)領域的潛在應用,本研究進一步自行生成模擬的邊界電壓矩陣資料,並將不同矩陣形狀(例如 (120, 20) 與 (20, 900))作為輸入進行訓練與重建,分析編碼與解碼過程中的誤差變化。實驗結果顯示,高維資料在數值重建誤差上雖顯著降低,但其與導電率變化的相關性也相對減弱,顯示模型可能僅學習到主要結構,而未能充分捕捉潛在的物理特徵。最後,本研究嘗試利用卷積神經網路(Convolutional Neural Network, CNN)學習邊界電壓矩陣與 VAE 潛在變量之間的對應關係,雖未能成功獲得穩定的映射結果,但此現象凸顯了該問題的挑戰性,也為後續模型設計與改進提供了方向。本研究結果顯示,資料結構特性對 VAE 的重建能力有明顯影響,並對 EIT 的資料驅動式影像重建方法提供了實驗性證據與啟示。
dc.description.abstract (en): This study investigates the application of Variational Autoencoders (VAE) to different types of datasets, including one-dimensional and two-dimensional Gaussian mixture models, as well as image datasets ranging from simple to complex patterns, in order to examine the performance of VAE under various data structures. To explore its potential application in Electrical Impedance Tomography (EIT), we further generated simulated boundary voltage matrices and trained the VAE with different matrix shapes (e.g., (120, 20) and (20, 900)) to analyze reconstruction errors during the encoding and decoding processes. Experimental results show that although higher-dimensional data yield significantly lower numerical reconstruction errors, their correlation with conductivity variation is weaker, indicating that the model may capture only the dominant structure rather than the underlying physical features. Finally, we attempted to use a Convolutional Neural Network (CNN) to learn the mapping between the boundary voltage matrices and the VAE latent variables. While the CNN failed to produce stable and reliable mappings, this result highlights the complexity of the problem and provides valuable insights for future model design. Overall, the findings demonstrate that data structure plays a crucial role in VAE reconstruction performance and offer experimental evidence for advancing data-driven reconstruction methods in EIT.
dc.description.provenance (en): Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-11-26T16:08:14Z. No. of bitstreams: 0
dc.description.provenance (en): Made available in DSpace on 2025-11-26T16:08:14Z (GMT). No. of bitstreams: 0
dc.description.tableofcontents:
Acknowledgements ii
摘要 iv
Abstract v
Contents vii
List of Figures xi
Denotation xiii
Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 Related Research Introduction 3
1.3 Thesis Structure 9
Chapter 2 Variational Inference and Variational Autoencoder 13
2.1 Variational Inference 14
2.1.1 Mixture of Gaussian Model (GMM) 16
2.2 Variational Autoencoder 20
2.2.1 MNIST and KMNIST 23
2.2.2 CelebA Dataset 24
2.2.3 Human Faces and Animal Photos 26
2.2.4 Observation and Summary of VAE's Performance on Different Datasets 27
Chapter 3 VAE-Based Compression of Simulated EIT Potential Data 30
3.1 EIT Calculation and Data Collection 31
3.2 VAE Compression of Boundary Voltage Data 33
3.2.1 Data Format and Pre-Processing 33
3.2.2 Model Architecture and Training 35
3.3 Reconstruction Results and Error Analysis 38
3.3.1 Overall Performance 38
3.3.2 Exceptional Samples and Error Fluctuations 38
3.4 Effect of Matrix Size on VAE Training 40
3.5 Summary 42
Chapter 4 CNN Regression for VAE Latent Space Modeling 44
4.1 CNN Model Design and Training 46
4.1.1 Model Architecture 47
4.1.2 Training Process 48
4.2 Results and Analysis 50
4.2.1 Quantitative Results 50
4.2.2 Qualitative Analysis 51
4.3 Summary 53
Chapter 5 Conclusion and Future Development 55
5.1 Conclusion 55
5.2 Future Development 58
References 61
Appendix A — Model Architecture 62
A.1 CNN Model Architecture Summary 62
dc.language.iso: en
dc.subject: Variational inference (變分推斷)
dc.subject: Variational autoencoder (變分自動編碼器)
dc.subject: Electrical impedance tomography (電阻抗斷層掃描)
dc.subject: Convolutional neural network (卷積神經網路)
dc.subject: VI
dc.subject: VAE
dc.subject: EIT
dc.subject: CNN
dc.title (zh_TW): 訓練資料特徵對 VAE 隱變量和基於 CNN 之預測的影響
dc.title (en): Impact of Training Data Characteristics on VAE Latent Variables and CNN-Based Predictions
dc.type: Thesis
dc.date.schoolyear: 114-1
dc.description.degree: Master's (碩士)
dc.contributor.oralexamcommittee (zh_TW): 邱普照;林奕亘;林景隆
dc.contributor.oralexamcommittee (en): Pu-Zhao Kow;Yi-Hsuan Lin;Ching-Lung Lin
dc.subject.keyword (zh_TW): 變分推斷,變分自動編碼器,電阻抗斷層掃描,卷積神經網路
dc.subject.keyword (en): VI, VAE, EIT, CNN
dc.relation.page: 62
dc.identifier.doi: 10.6342/NTU202504646
dc.rights.note: Not authorized (未授權)
dc.date.accepted: 2025-11-10
dc.contributor.author-college: College of Science (理學院)
dc.contributor.author-dept: Institute of Applied Mathematical Sciences (應用數學科學研究所)
dc.date.embargo-lift: N/A
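The abstract above describes encoding simulated boundary-voltage matrices (e.g., shape (120, 20)) with a VAE and examining the reconstruction error. A minimal numpy sketch of that encode/sample/decode pipeline is given below. Everything here is an assumption for illustration, not the thesis's actual model: the linear encoder/decoder, the latent dimension `LATENT = 8`, and the random stand-in weights and input are all hypothetical.

```python
# Toy sketch of a VAE forward pass on one flattened (120, 20) voltage matrix.
# Assumptions (not from the thesis): linear encoder/decoder, latent dim 8,
# random untrained weights; this only illustrates the moving parts.
import numpy as np

rng = np.random.default_rng(0)

D = 120 * 20   # flattened input dimension, assuming a (120, 20) matrix
LATENT = 8     # assumed latent dimensionality

# Random weights stand in for a learned encoder/decoder.
W_mu = rng.normal(0, 0.01, (LATENT, D))
W_logvar = rng.normal(0, 0.01, (LATENT, D))
W_dec = rng.normal(0, 0.01, (D, LATENT))

def vae_forward(x):
    """Encode x to (mu, logvar), sample z by reparameterization, decode."""
    mu, logvar = W_mu @ x, W_logvar @ x
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(LATENT)  # reparameterize
    x_hat = W_dec @ z
    # The two ELBO terms: reconstruction error and KL(q(z|x) || N(0, I)).
    mse = float(np.mean((x - x_hat) ** 2))
    kl = float(0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar))
    return z, x_hat, mse, kl

x = rng.standard_normal(D)   # stand-in for one flattened voltage matrix
z, x_hat, mse, kl = vae_forward(x)
print(z.shape, x_hat.shape, mse, kl)
```

The thesis's observation that lower reconstruction MSE need not imply a physically meaningful latent code corresponds to the fact that only `mse` and `kl` enter the training objective; correlation with conductivity is never optimized directly.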
Appears in collections: Institute of Applied Mathematical Sciences (應用數學科學研究所)

Files in this item:
File | Size | Format
ntu-114-1.pdf (not authorized for public access) | 8.44 MB | Adobe PDF
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.