Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98869
Full metadata record
DC Field | Value | Language
dc.contributor.advisor: 程子翔 (zh_TW)
dc.contributor.advisor: Tzu-Hsiang Chen (en)
dc.contributor.author: 蔡柏惟 (zh_TW)
dc.contributor.author: Bo-Wei Tsai (en)
dc.date.accessioned: 2025-08-20T16:05:50Z
dc.date.available: 2025-08-21
dc.date.copyright: 2025-08-20
dc.date.issued: 2025
dc.date.submitted: 2025-08-06
dc.identifier.citation:
W. E. Klunk, H. Engler, A. Nordberg, Y. Wang, G. Blomqvist, D. P. Holt, M. Bergström, I. Savitcheva, G.-F. Huang, S. Estrada, B. Ausén, M. L. Debnath, J. Barletta, J. C. Price, J. Sandell, B. J. Lopresti, A. Wall, P. Koivisto, G. Antoni, C. A. Mathis, and B. Långström, “Imaging brain amyloid in Alzheimer’s disease with Pittsburgh Compound-B,” Annals of Neurology, vol. 55, no. 3, pp. 306–319, Mar 2004.
M. E. Gurol, A. Viswanathan, C. Gidicsin, T. Hedden, S. Martinez-Ramirez, A. Dumas, A. Vashkevich, A. M. Ayres, E. Auriel, E. Van Etten, A. Becker, J. Carmasin, K. Schwab, J. Rosand, K. A. Johnson, and S. M. Greenberg, “Cerebral amyloid angiopathy burden associated with leukoaraiosis: A positron emission tomography/magnetic resonance imaging study,” Annals of Neurology, vol. 73, no. 4, pp. 529–536, Apr 2013.
R. E. Carson, “PET physiological measurements using constant infusion,” Nuclear Medicine and Biology, vol. 27, no. 7, pp. 657–660, Oct 2000.
A. Mallik, A. Drzezga, and S. Minoshima, “Clinical amyloid imaging,” Seminars in Nuclear Medicine, vol. 47, no. 1, pp. 31–43, Jan 2017.
S. Komori, D. J. Cross, M. Mills, Y. Ouchi, S. Nishizawa, H. Okada, T. Norikane, T. Thientunyakit, Y. Anzai, and S. Minoshima, “Deep-learning prediction of amyloid deposition from early-phase amyloid positron emission tomography imaging,” Annals of Nuclear Medicine, vol. 36, no. 10, pp. 913–921, Oct 2022.
A. Sanaat, C. Boccalini, G. Mathoux, D. Perani, G. B. Frisoni, S. Haller, M.-L. Montandon, C. Rodriguez, P. Giannakopoulos, V. Garibotto, and H. Zaidi, “A deep learning model for generating [18F]FDG PET images from early-phase [18F]Florbetapir and [18F]Flutemetamol PET images,” European Journal of Nuclear Medicine and Molecular Imaging, vol. 51, no. 12, pp. 3518–3531, Oct 2024.
M. Mokri, M. Safari, S. Kaviani, D. Juneau, C. Cohalan, L. Archambault, and J.-F. Carrier, “Deep learning-based prediction of later 13N-ammonia myocardial PET image frames from initial frames,” Biomedical Signal Processing and Control, vol. 100, no. 4, p. 106865, Feb 2025.
Q. Yang, W. Li, Z. Huang, Z. Chen, W. Zhao, Y. Gao, X. Yang, Y. Yang, H. Zheng, D. Liang, J. Liu, R. Chen, and Z. Hu, “Bidirectional dynamic frame prediction network for total-body [68Ga]Ga-PSMA-11 and [68Ga]Ga-FAPI-04 PET images,” EJNMMI Physics, vol. 11, no. 1, p. 92, Nov 2024.
A. Sanaat, E. Mirsadeghi, B. Razeghi, N. Ginovart, and H. Zaidi, “Fast dynamic brain PET imaging using stochastic variational prediction for recurrent frame generation,” Medical Physics, vol. 48, no. 9, pp. 5059–5071, Sep 2021.
R. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud, “Neural ordinary differential equations,” in Advances in Neural Information Processing Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds., vol. 31. Curran Associates, Inc., 2018. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2018/file/69386f6bb1dfed68692a24c8686939b9-Paper.pdf
P. Kidger, J. Foster, X. Li, and T. J. Lyons, “Neural SDEs as infinite-dimensional GANs,” in Proceedings of the 38th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, M. Meila and T. Zhang, Eds., vol. 139. PMLR, 18–24 Jul 2021, pp. 5453–5463. [Online]. Available: https://proceedings.mlr.press/v139/kidger21b.html
E. Perez, F. Strub, H. de Vries, V. Dumoulin, and A. Courville, “FiLM: Visual reasoning with a general conditioning layer,” in Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), vol. 32, no. 1, 2018, pp. 3942–3951.
O. Dalmaz, M. Yurt, and T. Çukur, “ResViT: Residual vision transformers for multimodal medical image synthesis,” IEEE Transactions on Medical Imaging, vol. 41, no. 10, pp. 2598–2614, Oct 2022.
H. Emami, M. Dong, and C. K. Glide-Hurst, “Attention-guided generative adversarial network to address atypical anatomy in synthetic CT generation,” in Proceedings of the 2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science (IRI), 2020, pp. 188–193.
K. T. Chen, T. N. Toueg, M. E. I. Koran, G. Davidzon, M. Zeineh, D. Holley, H. Gandhi, K. Halbert, A. Boumis, G. Kennedy, E. Mormino, M. Khalighi, and G. Zaharchuk, “True ultra-low-dose amyloid PET/MRI enhanced with deep learning for clinical interpretation,” European Journal of Nuclear Medicine and Molecular Imaging, vol. 48, no. 8, pp. 2416–2425, Jul 2021.
Z. S. Siam, M. Y. Akon, I. J. Munmun, A. Al-Amin, M. A. Salam, and I. A. Mamoon, “A paired CT and MRI dataset for advanced medical imaging applications,” Data in Brief, vol. 61, p. 111768, Aug 2025.
Z. Zhang, L. Yao, B. Wang, G.-R. Kwon, G. Durak, E. Keles, A. Medetalibeyoglu, and U. Bagci, “Diffboost: Enhancing medical image segmentation via text-guided diffusion model,” IEEE Transactions on Medical Imaging, Dec 2024, Early Access.
O. Dalmaz, B. Saglam, G. Elmas, M. U. Mirza, and T. Çukur, “Denoising diffusion adversarial models for unconditional medical image generation,” in Proceedings of the 2023 31st Signal Processing and Communications Applications Conference (SIU), Istanbul, Turkey: IEEE, Jul 2023, pp. 1–5.
Z. Liu, H. Ye, and H. Liu, “Deep-learning-based framework for PET image reconstruction from sinogram domain,” Applied Sciences, vol. 12, no. 16, p. 8118, 2022.
C. Shen, Z. Yang, and Y. Zhang, “PET image denoising with score-based diffusion probabilistic models,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2023, ser. Lecture Notes in Computer Science, H. Greenspan, A. Madabhushi, P. Mousavi, S. Salcudean, J. Duncan, T. Syeda-Mahmood, and R. Taylor, Eds., vol. 14220. Springer, Cham, 2023, pp. 270–278.
T. Zhang, H. Fu, Y. Zhao, J. Cheng, M. Guo, Z. Gu, B. Yang, Y. Xiao, S. Gao, and J. Liu, “SkrGAN: Sketching-Rendering Unconditional Generative Adversarial Networks for Medical Image Synthesis,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, ser. Lecture Notes in Computer Science, D. Shen, T. Liu, and T. Peters, Eds., vol. 11767. Springer, Cham, 2019, pp. 777–785.
S.-I. Jang, C. Lois, E. Thibault, J. A. Becker, Y. Dong, M. D. Normandin, J. C. Price, K. A. Johnson, G. El Fakhri, and K. Gong, “TauPETGen: Text-conditional tau PET image synthesis based on latent diffusion models,” arXiv preprint arXiv:2306.11984, Jun 2023, version 1. [Online]. Available: https://arxiv.org/abs/2306.11984
B. Zhan, D. Li, X. Wu, J. Zhou, and Y. Wang, “Multi-modal MRI image synthesis via GAN with multi-scale gate mergence,” IEEE Journal of Biomedical and Health Informatics, vol. 26, no. 1, pp. 17–26, Jan 2022.
K. T. Islam, S. Zhong, P. Zakavi, Z. Chen, H. Kavnoudias, S. Farquharson, G. Durbridge, M. Barth, K. L. McMahon, P. M. Parizel, A. Dwyer, G. F. Egan, M. Law, and Z. Chen, “Improving portable low-field MRI image quality through image-to-image translation using paired low- and high-field images,” Scientific Reports, vol. 13, p. 21183, Dec 2023.
C. D. Pain, G. F. Egan, and Z. Chen, “Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement,” European Journal of Nuclear Medicine and Molecular Imaging, vol. 49, no. 9, pp. 3098–3118, Jul 2022.
C. Liu, K. Xu, L. L. Shen, G. Huguet, Z. Wang, A. Tong, D. Bzdok, J. Stewart, J. C. Wang, L. V. Del Priore, and S. Krishnaswamy, “ImageFlowNet: Forecasting multiscale image-level trajectories of disease progression with irregularly-sampled longitudinal medical images,” in Proceedings of the 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2025, pp. 1–5.
J. Wang, J. Chen, D. Z. Chen, and J. Wu, “LKM-UNet: Large Kernel Vision Mamba UNet for Medical Image Segmentation,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2024, ser. Lecture Notes in Computer Science, M. G. Linguraru, Q. Dou, A. Feragen, S. Giannarou, B. Glocker, and K. Lekadir, Eds., vol. 15008. Springer, Cham, 2024, pp. 360–370.
Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, Apr 2004.
R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The unreasonable effectiveness of deep features as a perceptual metric,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
C. Finlay, J.-H. Jacobsen, L. Nurbekyan, and A. Oberman, “How to train your neural ODE: the world of Jacobian and kinetic regularization,” in Proceedings of the 37th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, H. Daumé III and A. Singh, Eds., vol. 119. PMLR, 13–18 Jul 2020, pp. 3154–3164. [Online]. Available: https://proceedings.mlr.press/v119/finlay20a.html
P. Kidger, J. Foster, X. C. Li, and T. Lyons, “Efficient and accurate gradients for neural SDEs,” in Advances in Neural Information Processing Systems, M. Ranzato, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, Eds., vol. 34. Curran Associates, Inc., 2021, pp. 18747–18761. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2021/file/9ba196c7a6e89eafd0954de80fc1b224-Paper.pdf
T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple framework for contrastive learning of visual representations,” in Proceedings of the 37th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, H. Daumé III and A. Singh, Eds., vol. 119. PMLR, 13–18 Jul 2020, pp. 1597–1607. [Online]. Available: https://proceedings.mlr.press/v119/chen20j.html
O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” arXiv preprint arXiv:1505.04597, 2015. [Online]. Available: https://arxiv.org/abs/1505.04597
S. S. Shapiro and M. B. Wilk, “An analysis of variance test for normality (complete samples),” Biometrika, vol. 52, no. 3–4, pp. 591–611, 1965.
M. B. Brown and A. B. Forsythe, “Robust tests for the equality of variances,” Journal of the American Statistical Association, vol. 69, no. 346, pp. 364–367, 1974.
Student, “The probable error of a mean,” Biometrika, vol. 6, no. 1, pp. 1–25, 1908.
B. Fischl, “FreeSurfer,” NeuroImage, vol. 62, no. 2, pp. 774–781, 2012.
K. R. Moon, D. van Dijk, Z. Wang, S. Gigante, D. B. Burkhardt, W. S. Chen, K. Yim, A. van den Elzen, M. J. Hirn, R. R. Coifman, N. B. Ivanova, G. Wolf, and S. Krishnaswamy, “Visualizing structure and transitions in high-dimensional biological data,” Nature Biotechnology, vol. 37, no. 12, pp. 1482–1492, Dec 2019.
Y. Li, J. O. Rinne, L. Mosconi, E. Pirraglia, H. Rusinek, S. DeSanti, N. Kemppainen, K. Nägren, B.-C. Kim, W. Tsui, and M. J. de Leon, “Regional analysis of FDG and PiB-PET images in normal aging, mild cognitive impairment, and Alzheimer’s disease,” European Journal of Nuclear Medicine and Molecular Imaging, vol. 35, no. 12, pp. 2169–2181, Dec 2008.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98869
dc.description.abstract: We propose M.O.V.I.E. (Modeling ODEs with Visually Interpretable Evolution), a framework that estimates late-phase [¹¹C]-PiB PET images from early-phase images acquired at only about 10 minutes post-injection. Unlike prior methods that treat this task as a static frame-to-frame mapping, M.O.V.I.E. models the image transformation as a time-continuous process governed by a neural ordinary differential equation (ODE) or stochastic differential equation (SDE), yielding smooth and interpretable latent tracer dynamics. To move closer to physiological realism, we introduce FiLM (Feature-wise Linear Modulation) conditioning, using mid-phase PET signals to modulate the drift and diffusion processes in a physiology-aware manner.

Compared with true late-phase images, the M.O.V.I.E. models achieved significant quantitative improvements. The ODE model raised peak signal-to-noise ratio (PSNR) from 26.69 to 41.23 (+54.4%) and the SDE model to 41.46 (+55.3%), indicating markedly better reconstruction quality; both improved structural similarity (SSIM) by 4.28% over the baseline images. For perceptual quality, deep-feature perceptual similarity (LPIPS) showed the UNet baseline reducing perceptual error by 20.7% and the SDE model by 17.3%.

Clinically, the synthesized images showed strong diagnostic utility. Under the SDE model, the cingulate and frontal regions reached areas under the ROC curve (AUC) of 0.798 and 0.786 for Aβ-positivity detection; sensitivity rose to 63.04% while specificity was maintained at 83.33%, giving a positive predictive value of 87.88%. Moreover, PHATE projections of the latent space clearly separated the kinetic patterns of different diagnostic groups and Aβ statuses.

M.O.V.I.E. is the first early-to-late PET synthesis framework to combine temporal continuity with physiological guidance. By moving beyond black-box regression toward interpretable latent dynamics, it shows potential to shorten scan time, improve interpretability, and provide practical support for clinical decision-making in amyloid imaging.
(zh_TW)
dc.description.abstract: We propose M.O.V.I.E. (Modeling ODEs with Visually Interpretable Evolution), a generative framework for synthesizing late-phase [¹¹C]-PiB PET images from short 1-minute (9′30′′–10′30′′ p.i.) early acquisitions. Unlike prior approaches that treat this task as static frame-to-frame mapping, M.O.V.I.E. models the transformation as a time-continuous process governed by neural ordinary or stochastic differential equations (ODEs/SDEs), enabling smooth and interpretable latent dynamics of tracer uptake. To further enhance biological realism, we introduce FiLM-based conditioning, guiding the drift and diffusion processes using mid-phase PET signals in a physiology-aware manner.
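The FiLM-conditioned drift described above can be sketched in a few lines. This is a minimal illustration only: the dimensions, the random linear maps standing in for learned networks, and the fixed-step Euler solver are all placeholder assumptions, not the thesis's actual architecture or solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; the thesis's real latent and conditioning dimensions are
# not stated in this record, so these are placeholders.
latent_dim, cond_dim = 8, 4

# Random linear maps standing in for the learned drift network and the
# FiLM-parameter generator.
W_drift = rng.normal(scale=0.1, size=(latent_dim, latent_dim))
W_gamma = rng.normal(scale=0.1, size=(cond_dim, latent_dim))
W_beta = rng.normal(scale=0.1, size=(cond_dim, latent_dim))

def drift(z, cond):
    """FiLM-modulated drift: the condition predicts a per-feature
    scale (gamma) and shift (beta) applied to the base vector field."""
    gamma = 1.0 + cond @ W_gamma  # centred at 1 so identity modulation is easy
    beta = cond @ W_beta
    return gamma * (z @ W_drift) + beta

def integrate(z0, cond, t0=0.0, t1=1.0, steps=100):
    """Fixed-step Euler solve of dz/dt = drift(z, cond) from t0 to t1."""
    z, dt = z0.copy(), (t1 - t0) / steps
    for _ in range(steps):
        z = z + dt * drift(z, cond)
    return z

z_early = rng.normal(size=latent_dim)  # latent code of the early-phase frame
cond_mid = rng.normal(size=cond_dim)   # mid-phase PET conditioning signal
z_late = integrate(z_early, cond_mid)  # latent code of the late-phase frame
```

The SDE variant would add a diffusion term with Brownian increments at each Euler step; it is omitted here for brevity.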

Compared to the ground-truth reference images, our M.O.V.I.E. models demonstrated substantial quantitative improvements. PSNR increased by +54.4% and +55.3% for the ODE and SDE variants, respectively, indicating superior reconstruction fidelity. In terms of structural similarity, both ODE and SDE improved SSIM by +4.28% over the reference. Furthermore, perceptual quality measured by LPIPS was significantly improved, with the UNet baseline reducing perceptual error by 20.7%, followed by 17.3% for the SDE variant. Effect size comparisons also confirmed that both ODE and SDE outperformed the UNet baseline.
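Of the metrics quoted above, PSNR has the simplest closed form. A minimal sketch, assuming unit-range images (which fixes the `data_range` peak value):

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between a reference and a synthesis."""
    mse = np.mean((np.asarray(ref) - np.asarray(img)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

# Worked check: a uniform error of 0.1 on a unit-range image gives
# MSE = 0.01 and hence PSNR = 10 * log10(1 / 0.01) = 20 dB.
ref = np.zeros((4, 4))
img = np.full((4, 4), 0.1)
print(round(psnr(ref, img), 2))  # 20.0
```

SSIM and LPIPS have no comparably short closed form (LPIPS in particular requires a pretrained deep network), so they are not sketched here.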

At the clinical level, synthesized images yielded strong diagnostic utility. Under the SDE model, the cingulate and frontal regions achieved AUCs of 0.798 and 0.786 for amyloid-positivity detection. Sensitivity increased to 63.04%, while specificity was preserved at 83.33%, resulting in a high positive predictive value (87.88%). PHATE projections of the latent space further revealed distinct kinetic patterns across diagnostic groups and Aβ status.
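The sensitivity, specificity, and PPV quoted above are mutually consistent with a small confusion matrix. The counts below are a hypothetical reconstruction chosen only because they reproduce the reported percentages exactly; the record itself gives no cohort counts.

```python
# Hypothetical confusion-matrix counts (not taken from the thesis):
# 29 of 46 Aβ-positive cases detected, 20 of 24 negatives ruled out.
tp, fn, fp, tn = 29, 17, 4, 20

sensitivity = tp / (tp + fn)  # 29/46
specificity = tn / (tn + fp)  # 20/24
ppv = tp / (tp + fp)          # 29/33
print(round(100 * sensitivity, 2),  # 63.04
      round(100 * specificity, 2),  # 83.33
      round(100 * ppv, 2))          # 87.88
```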

M.O.V.I.E. is the first framework to bridge early-to-late PET synthesis with a temporally continuous, physiology-guided generative model. By moving beyond black-box regression toward explainable latent dynamics, it demonstrates the potential to reduce scan time, improve interpretability, and support clinical decision-making in amyloid imaging.
en
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-08-20T16:05:50Z. No. of bitstreams: 0 (en)
dc.description.provenance: Made available in DSpace on 2025-08-20T16:05:50Z (GMT). No. of bitstreams: 0 (en)
dc.description.tableofcontents:
Acknowledgements i
Chinese Abstract iii
Abstract v
List of Figures xi
List of Tables xv
List of Abbreviations xix
1 Introduction 1
2 Related Works 5
2.1 Medical Image Translation and Its Variants 5
2.2 Static vs. Multi-Frame Strategies in Early-to-Late PET Translation 6
2.2.1 Static / Averaged Frame Prediction 7
2.2.2 Multi-Frame Sequence Prediction 7
2.3 Limitations of Prior Work and Our Contributions 8
3 Materials and Methods 11
3.1 Objective 11
3.2 Data 12
3.2.1 Clinical and Demographic Profile of the Cohort 12
3.2.2 Data Preprocessing 13
3.3 Model Architecture 15
3.3.1 Dual-Encoder Large-Kernel Mamba U-Net for Feature Extraction 16
3.3.2 FiLM-conditioned Dynamic Bottleneck 17
3.3.3 FiLM Generation 19
3.3.4 Loss Function 20
3.4 Experimental Details 22
3.4.1 Software & Hardware 22
3.4.2 Data Splits and Stratified 5-Fold Cross-Validation 22
3.4.3 Optimization Protocol 23
3.4.4 Guide-Dropout Curriculum 23
3.4.5 Reproducibility 24
3.5 Data Analysis 24
3.5.1 Image-level Analysis 25
3.5.2 Regional Analysis 26
3.5.3 Latent Trajectory Analysis 27
4 Results 33
4.1 Image-level analysis 33
4.2 Region-level Analysis 34
4.2.1 Regional SUVR Consistency Analysis 34
4.2.2 Numerical Agreement 35
4.2.3 Diagnostic Accuracy (ROC Analysis) 35
4.2.4 Clinical Utility (Decision-Curve Analysis) 36
4.3 Latent Trajectory and Integral Visualization 37
5 Discussion 47
5.1 Image-level analysis 48
5.2 Region-level analysis 48
5.2.1 Numerical Agreement Insights 49
5.2.2 Diagnostic Implications 50
5.2.3 Decision-Making Value 51
5.3 Interpretable Dynamics and Clinical Insight 51
5.4 Limitations 52
6 Conclusion 55
Reference 57
dc.language.iso: en
dc.subject: Dynamic PET synthesis (zh_TW)
dc.subject: Early-to-late PET prediction (zh_TW)
dc.subject: Continuous-time modeling (zh_TW)
dc.subject: Interpretable deep learning (zh_TW)
dc.subject: Latent trajectory analysis (zh_TW)
dc.subject: Clinical decision support (zh_TW)
dc.subject: Neural ODE / SDE (zh_TW)
dc.subject: FiLM modulation (zh_TW)
dc.subject: Continuous-time modeling (en)
dc.subject: Dynamic PET synthesis (en)
dc.subject: Clinical decision support (en)
dc.subject: Latent trajectory analysis (en)
dc.subject: Interpretable deep learning (en)
dc.subject: Early-to-late PET prediction (en)
dc.subject: FiLM Modulation (en)
dc.subject: Neural ODE / SDE (en)
dc.title: M.O.V.I.E.: FiLM-Modulated Neural Differential Equations for Latent Dynamic Modeling in Late-Phase ¹¹C-PiB PET Synthesis (zh_TW)
dc.title: M.O.V.I.E.: A FiLM-Guided Neural ODE for Latent Trajectory Modeling in Late-Phase ¹¹C-PiB PET Synthesis (en)
dc.type: Thesis
dc.date.schoolyear: 113-2
dc.description.degree: Master's
dc.contributor.oralexamcommittee: 顏若芳;陳中明 (zh_TW)
dc.contributor.oralexamcommittee: Ruoh-Fang Yen; Chung-Ming Chen (en)
dc.subject.keyword: Dynamic PET synthesis, Early-to-late PET prediction, Continuous-time modeling, Interpretable deep learning, Latent trajectory analysis, Clinical decision support, Neural ODE / SDE, FiLM modulation (zh_TW)
dc.subject.keyword: Dynamic PET synthesis, Early-to-late PET prediction, Continuous-time modeling, Interpretable deep learning, Latent trajectory analysis, Clinical decision support, Neural ODE / SDE, FiLM Modulation (en)
dc.relation.page: 63
dc.identifier.doi: 10.6342/NTU202504093
dc.rights.note: Authorized for release (campus-only access)
dc.date.accepted: 2025-08-11
dc.contributor.author-college: College of Engineering
dc.contributor.author-dept: Department of Biomedical Engineering
dc.date.embargo-lift: 2030-08-06
Appears in Collections: 醫學工程學研究所 (Graduate Institute of Biomedical Engineering)

Files in this item:
File | Size | Format
ntu-113-2.pdf (restricted, not authorized for public access) | 7.69 MB | Adobe PDF

