NTU Theses and Dissertations Repository › 醫學院 › 醫療器材與醫學影像研究所
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99680
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 蕭輔仁 | zh_TW
dc.contributor.advisor | Furen Xiao | en
dc.contributor.author | Zolnamar Dorjsembe | zh_TW
dc.contributor.author | Zolnamar Dorjsembe | en
dc.date.accessioned | 2025-09-17T16:21:26Z | -
dc.date.available | 2025-09-18 | -
dc.date.copyright | 2025-09-17 | -
dc.date.issued | 2025 | -
dc.date.submitted | 2025-08-06 | -
dc.identifier.citation[1] G. Katti, S. A. Ara, and A. Shireen, "Magnetic resonance imaging (MRI)–A review," International journal of dental clinics, vol. 3, no. 1, pp. 65-70, 2011.
[2] R. Baskar, K. A. Lee, R. Yeo, and K.-W. Yeoh, "Cancer and radiation therapy: current advances and future directions," International journal of medical sciences, vol. 9, no. 3, p. 193, 2012.
[3] T. M. Buzug, "Computed tomography," in Springer handbook of medical technology: Springer, 2011, pp. 311-342.
[4] R. Rai et al., "The integration of MRI in radiation therapy: collaboration of radiographers and radiation therapists," Journal of Medical Radiation Sciences, vol. 64, no. 1, pp. 61-68, 2017.
[5] R. J. Goodburn et al., "The future of MRI in radiation therapy: Challenges and opportunities for the MR community," Magnetic resonance in medicine, vol. 88, no. 6, pp. 2592-2608, 2022.
[6] M. Boulanger et al., "Deep learning methods to generate synthetic CT from MRI in radiotherapy: A literature review," Physica Medica, vol. 89, pp. 265-281, 2021.
[7] M. A. Bahloul, S. Jabeen, S. Benoumhani, H. A. Alsaleh, Z. Belkhatir, and A. Al‐Wabil, "Advancements in synthetic CT generation from MRI: A review of techniques, and trends in radiation therapy planning," Journal of Applied Clinical Medical Physics, vol. 25, no. 11, p. e14499, 2024.
[8] E. Alvarez-Andres, F. Villegas, A. Barateau, and C. Robert, "sCT and Dose Calculation," in A Practical Guide to MR-Linac: Technical Innovation and Clinical Implication: Springer, 2024, pp. 89-121.
[9] F. Villegas et al., "Challenges and opportunities in the development and clinical implementation of artificial intelligence based synthetic computed tomography for magnetic resonance only radiotherapy," Radiotherapy and Oncology, p. 110387, 2024.
[10] J. Grigo, J. Szkitsak, D. Höfler, R. Fietkau, F. Putz, and C. Bert, "“sCT-Feasibility”-a feasibility study for deep learning-based MRI-only brain radiotherapy," Radiation Oncology, vol. 19, no. 1, p. 33, 2024.
[11] X. Han, "MR‐based synthetic CT generation using a deep convolutional neural network method," Medical physics, vol. 44, no. 4, pp. 1408-1419, 2017.
[12] G. Li, L. Bai, C. Zhu, E. Wu, and R. Ma, "A novel method of synthetic CT generation from MR images based on convolutional neural networks," in 2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), 2018: IEEE, pp. 1-5.
[13] A. Bahrami, A. Karimian, E. Fatemizadeh, H. Arabi, and H. Zaidi, "A new deep convolutional neural network design with efficient learning capability: Application to CT image synthesis from MRI," Medical physics, vol. 47, no. 10, pp. 5158-5171, 2020.
[14] H. Emami, M. Dong, S. P. Nejad-Davarani, and C. K. Glide-Hurst, "SA-GAN: Structure-aware GAN for organ-preserving synthetic CT generation," in Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part VI 24, 2021: Springer, pp. 471-481.
[15] S. K. Kang et al., "Synthetic CT generation from weakly paired MR images using cycle-consistent GAN for MR-guided radiotherapy," Biomedical engineering letters, vol. 11, no. 3, pp. 263-271, 2021.
[16] X. Chen et al., "SC-GAN: Structure-completion generative adversarial network for synthetic CT generation from MR images with truncated anatomy," Computerized Medical Imaging and Graphics, vol. 113, p. 102353, 2024.
[17] L. Zhong et al., "Multi-scale tokens-aware transformer network for multi-region and multi-sequence MR-to-CT synthesis in a single model," IEEE transactions on medical imaging, vol. 43, no. 2, pp. 794-806, 2023.
[18] B. Zhao et al., "CT synthesis from MR in the pelvic area using Residual Transformer Conditional GAN," Computerized medical imaging and graphics, vol. 103, p. 102150, 2023.
[19] S. Pan et al., "Synthetic CT generation from MRI using 3D transformer‐based denoising diffusion model," Medical Physics, vol. 51, no. 4, pp. 2538-2548, 2024.
[20] S. L. Brooks, "Computed tomography," Dental Clinics of North America, vol. 37, no. 4, pp. 575-590, 1993.
[21] E. Samei and B. J. Pelc, Computed tomography. Springer, 2020.
[22] M. T. Vlaardingerbroek and J. A. Boer, Magnetic resonance imaging: theory and practice. Springer Science & Business Media, 2013.
[23] J. R. Haaga and D. Boll, Computed tomography & magnetic resonance imaging of the whole body e-book. Elsevier Health Sciences, 2016.
[24] R. Hudej and U. A. Van Der Heide, "The Physics of CT and MR Imaging," in Gynecologic Radiation Therapy: Novel Approaches to Image-Guidance and Management: Springer, 2010, pp. 33-39.
[25] J. Bernardo et al., "Generative or discriminative? getting the best of both worlds," Bayesian statistics, vol. 8, no. 3, pp. 3-24, 2007.
[26] T. Jebara, Machine learning: discriminative and generative. Springer Science & Business Media, 2012.
[27] D. P. Kingma and M. Welling, "Auto-encoding variational bayes," ed: Banff, Canada, 2013.
[28] P. Baldi, "Autoencoders, unsupervised learning, and deep architectures," in Proceedings of ICML workshop on unsupervised and transfer learning, 2012: JMLR Workshop and Conference Proceedings, pp. 37-49.
[29] I. J. Goodfellow et al., "Generative adversarial nets," Advances in neural information processing systems, vol. 27, 2014.
[30] N. K. Singh and K. Raza, "Medical image generation using generative adversarial networks: A review," Health informatics: A computational perspective in healthcare, pp. 77-96, 2021.
[31] Y. Chen et al., "Generative adversarial networks in medical image augmentation: a review," Computers in Biology and Medicine, vol. 144, p. 105382, 2022.
[32] L. Kong, C. Lian, D. Huang, Y. Hu, and Q. Zhou, "Breaking the dilemma of medical image-to-image translation," Advances in Neural Information Processing Systems, vol. 34, pp. 1964-1978, 2021.
[33] K. Armanious et al., "MedGAN: Medical image translation using GANs," Computerized medical imaging and graphics, vol. 79, p. 101684, 2020.
[34] M. Esmaeili et al., "Generative adversarial networks for anomaly detection in biomedical imaging: A study on seven medical image datasets," IEEE Access, vol. 11, pp. 17906-17921, 2023.
[35] R. Gupta, A. Sharma, and A. Kumar, "Super-resolution using GANs for medical imaging," Procedia Computer Science, vol. 173, pp. 28-35, 2020.
[36] Y. Gu et al., "MedSRGAN: medical images super-resolution using generative adversarial networks," Multimedia Tools and Applications, vol. 79, pp. 21815-21840, 2020.
[37] A. A. Showrov et al., "Generative adversarial networks (GANs) in medical imaging: advancements, applications and challenges," IEEE Access, 2024.
[38] M. M. Saad, R. O’Reilly, and M. H. Rehmani, "A survey on training challenges in generative adversarial networks for biomedical image analysis," Artificial Intelligence Review, vol. 57, no. 2, p. 19, 2024.
[39] L. Fetty et al., "Latent space manipulation for high-resolution medical image synthesis via the StyleGAN," Zeitschrift für Medizinische Physik, vol. 30, no. 4, pp. 305-314, 2020.
[40] L. Sun, J. Chen, Y. Xu, M. Gong, K. Yu, and K. Batmanghelich, "Hierarchical amortized GAN for 3D high resolution medical image synthesis," IEEE journal of biomedical and health informatics, vol. 26, no. 8, pp. 3966-3975, 2022.
[41] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli, "Deep unsupervised learning using nonequilibrium thermodynamics," in International conference on machine learning, 2015: pmlr, pp. 2256-2265.
[42] J. Ho, A. Jain, and P. Abbeel, "Denoising diffusion probabilistic models," Advances in neural information processing systems, vol. 33, pp. 6840-6851, 2020.
[43] Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole, "Score-based generative modeling through stochastic differential equations," arXiv preprint arXiv:2011.13456, 2020.
[44] P. Dhariwal and A. Nichol, "Diffusion models beat gans on image synthesis," Advances in neural information processing systems, vol. 34, pp. 8780-8794, 2021.
[45] A. Q. Nichol and P. Dhariwal, "Improved denoising diffusion probabilistic models," in International conference on machine learning, 2021: PMLR, pp. 8162-8171.
[46] J. Wolleb, F. Bieder, R. Sandkühler, and P. C. Cattin, "Diffusion models for medical anomaly detection," in International Conference on Medical image computing and computer-assisted intervention, 2022: Springer, pp. 35-45.
[47] A. Kazerouni et al., "Diffusion models in medical imaging: A comprehensive survey," Medical image analysis, vol. 88, p. 102846, 2023.
[48] J. Ma, G. Jian, and J. Chen, "Diffusion Model‐Based MRI Super‐Resolution Synthesis," International Journal of Imaging Systems and Technology, vol. 35, no. 2, p. e70021, 2025.
[49] J. Song, C. Meng, and S. Ermon, "Denoising diffusion implicit models," arXiv preprint arXiv:2010.02502, 2020.
[50] T. Salimans and J. Ho, "Progressive distillation for fast sampling of diffusion models," arXiv preprint arXiv:2202.00512, 2022.
[51] A. Vaswani et al., "Attention is all you need," Advances in neural information processing systems, vol. 30, 2017.
[52] S. R. Dubey and S. K. Singh, "Transformer-based generative adversarial networks in computer vision: A comprehensive survey," IEEE Transactions on Artificial Intelligence, 2024.
[53] A. Dosovitskiy et al., "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv preprint arXiv:2010.11929, 2020.
[54] E. U. Henry, O. Emebob, and C. A. Omonhinmin, "Vision transformers in medical imaging: A review," arXiv preprint arXiv:2211.10043, 2022.
[55] C. Prabhakar, H. Li, J. Yang, S. Shit, B. Wiestler, and B. Menze, "ViT-AE++: improving vision transformer autoencoder for self-supervised medical image representations," in Medical Imaging with Deep Learning, 2024: PMLR, pp. 666-679.
[56] S. Wang et al., "Autoregressive sequence modeling for 3d medical image representation," in Proceedings of the AAAI Conference on Artificial Intelligence, 2025, vol. 39, no. 8, pp. 7871-7879.
[57] F. Shamshad et al., "Transformers in medical imaging: A survey," Medical image analysis, vol. 88, p. 102802, 2023.
[58] K. Xia and J. Wang, "Recent advances of transformers in medical image analysis: a comprehensive review," MedComm–Future Medicine, vol. 2, no. 1, p. e38, 2023.
[59] S. Khan, M. Naseer, M. Hayat, S. W. Zamir, F. S. Khan, and M. Shah, "Transformers in vision: A survey," ACM computing surveys (CSUR), vol. 54, no. 10s, pp. 1-41, 2022.
[60] C. Oulmalme, H. Nakouri, and F. Jaafar, "A systematic review of generative AI approaches for medical image enhancement: Comparing GANs, transformers, and diffusion models," International journal of medical informatics, p. 105903, 2025.
[61] K. Denecke, R. May, and O. Rivera-Romero, "Transformer models in healthcare: a survey and thematic analysis of potentials, shortcomings and risks," Journal of Medical Systems, vol. 48, no. 1, p. 23, 2024.
[62] A. Martins, A. Farinhas, M. Treviso, V. Niculae, P. Aguiar, and M. Figueiredo, "Sparse and continuous attention mechanisms," Advances in Neural Information Processing Systems, vol. 33, pp. 20989-21001, 2020.
[63] M. Heidari et al., "Hiformer: Hierarchical multi-scale representations using transformers for medical image segmentation," in Proceedings of the IEEE/CVF winter conference on applications of computer vision, 2023, pp. 6202-6212.
[64] S. E. Boudjellal, A. Boudjelal, and N.-E. Boukezzoula, "Hybrid convolution-transformer models for breast cancer classification using histopathological images," in 2022 2nd International Conference on New Technologies of Information and Communication (NTIC), 2022: IEEE, pp. 1-6.
[65] P. Dutta, K. A. Sathi, M. A. Hossain, and M. A. A. Dewan, "Conv-ViT: a convolution and vision transformer-based hybrid feature extraction method for retinal disease detection," Journal of Imaging, vol. 9, no. 7, p. 140, 2023.
[66] J. A. Dowling et al., "An atlas-based electron density mapping method for magnetic resonance imaging (MRI)-alone treatment planning and adaptive MRI-based prostate radiation therapy," International Journal of Radiation Oncology* Biology* Physics, vol. 83, no. 1, pp. e5-e11, 2012.
[67] F. Guerreiro et al., "Evaluation of a multi-atlas CT synthesis approach for MRI-only radiotherapy treatment planning," Physica Medica, vol. 35, pp. 7-17, 2017.
[68] D. Andreasen, K. Van Leemput, R. H. Hansen, J. A. Andersen, and J. M. Edmund, "Patch‐based generation of a pseudo CT from conventional MRI sequences for MRI‐only radiotherapy of the brain," Medical physics, vol. 42, no. 4, pp. 1596-1605, 2015.
[69] A. Johansson, M. Karlsson, and T. Nyholm, "CT substitute derived from MRI sequences with ultrashort echo time," Medical physics, vol. 38, no. 5, pp. 2708-2714, 2011.
[70] S. Aouadi et al., "Generation of synthetic CT using multi-scale and dual-contrast patches for brain MRI-only external beam radiotherapy," Physica Medica, vol. 42, pp. 174-184, 2017.
[71] S.-H. Hsu, Y. Cao, K. Huang, M. Feng, and J. M. Balter, "Investigation of a method for generating synthetic CT models from MRI scans of the head and neck for radiation therapy," Physics in Medicine & Biology, vol. 58, no. 23, p. 8419, 2013.
[72] J. M. Edmund and T. Nyholm, "A review of substitute CT generation for MRI-only radiation therapy," Radiation Oncology, vol. 12, pp. 1-15, 2017.
[73] J. A. Dowling et al., "Automatic substitute computed tomography generation and contouring for magnetic resonance imaging (MRI)-alone external beam radiation therapy from standard MRI sequences," International Journal of Radiation Oncology* Biology* Physics, vol. 93, no. 5, pp. 1144-1153, 2015.
[74] M. Maspero et al., "Quantification of confounding factors in MRI-based dose calculations as applied to prostate IMRT," Physics in Medicine & Biology, vol. 62, no. 3, p. 948, 2017.
[75] D. Nie, X. Cao, Y. Gao, L. Wang, and D. Shen, "Estimating CT image from MRI data using 3D fully convolutional networks," in International Workshop on Deep Learning in Medical Image Analysis, 2016: Springer, pp. 170-178.
[76] S. Chen, A. Qin, D. Zhou, and D. Yan, "U‐net‐generated synthetic CT images for magnetic resonance imaging‐only prostate intensity‐modulated radiation therapy treatment planning," Medical physics, vol. 45, no. 12, pp. 5659-5665, 2018.
[77] A. K. Yawson et al., "Enhancing U-Net-based Pseudo-CT generation from MRI using CT-guided bone segmentation for radiation treatment planning in head & neck cancer patients," Physics in Medicine & Biology, vol. 70, no. 4, p. 045018, 2025.
[78] G. Dovletov, D. D. Pham, S. Lörcks, J. Pauli, M. Gratz, and H. H. Quick, "Grad-CAM guided U-net for MRI-based pseudo-CT synthesis," in 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 2022: IEEE, pp. 2071-2075.
[79] J. Fu et al., "Deep learning approaches using 2D and 3D convolutional neural networks for generating male pelvic synthetic computed tomography from magnetic resonance imaging," Medical physics, vol. 46, no. 9, pp. 3788-3798, 2019.
[80] H. Emami, M. Dong, S. P. Nejad‐Davarani, and C. K. Glide‐Hurst, "Generating synthetic CTs from magnetic resonance images using generative adversarial networks," Medical physics, vol. 45, no. 8, pp. 3627-3636, 2018.
[81] D. Nie et al., "Medical image synthesis with context-aware generative adversarial networks," in International conference on medical image computing and computer-assisted intervention, 2017: Springer, pp. 417-425.
[82] S. Kazemifar et al., "Dosimetric evaluation of synthetic CT generated with GANs for MRI‐only proton therapy treatment planning of brain tumors," Journal of applied clinical medical physics, vol. 21, no. 5, pp. 76-86, 2020.
[83] J. M. Wolterink, A. M. Dinkla, M. H. Savenije, P. R. Seevinck, C. A. van den Berg, and I. Išgum, "Deep MR to CT synthesis using unpaired data," in International workshop on simulation and synthesis in medical imaging, 2017: Springer, pp. 14-23.
[84] Y. Hiasa et al., "Cross-modality image synthesis from unpaired data using cyclegan: Effects of gradient consistency loss and training data size," in International workshop on simulation and synthesis in medical imaging, 2018: Springer, pp. 31-41.
[85] X. Liu, X. Wei, A. Yu, and Z. Pan, "Unpaired data based cross-domain synthesis and segmentation using attention neural network," in Asian Conference on Machine Learning, 2019: PMLR, pp. 987-1000.
[86] Y. Liu et al., "Evaluation of a deep learning-based pelvic synthetic CT generation technique for MRI-based prostate proton treatment planning," Physics in Medicine & Biology, vol. 64, no. 20, p. 205022, 2019.
[87] S. U. Dar, M. Yurt, L. Karacan, A. Erdem, E. Erdem, and T. Cukur, "Image synthesis in multi-contrast MRI with conditional generative adversarial networks," IEEE transactions on medical imaging, vol. 38, no. 10, pp. 2375-2388, 2019.
[88] Z. Zhang, L. Yang, and Y. Zheng, "Translating and segmenting multimodal medical volumes with cycle-and shape-consistency generative adversarial network," in Proceedings of the IEEE conference on computer vision and pattern Recognition, 2018, pp. 9242-9251.
[89] A. Tapp et al., "MR to CT synthesis using 3D latent diffusion," in 2024 IEEE International Symposium on Biomedical Imaging (ISBI), 2024: IEEE, pp. 1-5.
[90] X. Wang, D. He, B. Zhang, Y. Hao, D. Yang, and Y. Duan, "Conditional Diffusion Model for Abdominal CT Image Synthesis," in 2025 IEEE 22nd International Symposium on Biomedical Imaging (ISBI), 2025: IEEE, pp. 1-5.
[91] Y. Yang et al., "GANs-guided Conditional Diffusion Model for Synthesizing Contrast-enhanced Computed Tomography Images," in 2024 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2024: IEEE, pp. 1-4.
[92] M. Özbey et al., "Unsupervised medical image translation with adversarial diffusion models," IEEE Transactions on Medical Imaging, vol. 42, no. 12, pp. 3524-3539, 2023.
[93] Y. Xu et al., "MedSyn: text-guided anatomy-aware synthesis of high-fidelity 3-D CT images," IEEE Transactions on Medical Imaging, vol. 43, no. 10, pp. 3648-3660, 2024.
[94] X. Xing, G. Papanastasiou, S. Walsh, and G. Yang, "Less is more: unsupervised mask-guided annotated CT image synthesis with Minimum manual segmentations," IEEE Transactions on Medical Imaging, vol. 42, no. 9, pp. 2566-2576, 2023.
[95] Y. Seol, H. Yoo, E. Yoon, Y. Kim, and J. Lee, "Brain MRI Synthesis from Amyloid PET/CT via Enhanced Denoising Diffusion Probabilistic Model," in 2024 IEEE Nuclear Science Symposium (NSS), Medical Imaging Conference (MIC) and Room Temperature Semiconductor Detector Conference (RTSD), 2024: IEEE, pp. 1-1.
[96] W. Chen et al., "Mask-aware transformer with structure invariant loss for CT translation," Medical Image Analysis, vol. 96, p. 103205, 2024.
[97] H. Yu et al., "3D Nephrographic Image Synthesis in CT Urography with the Diffusion Model and Swin Transformer," arXiv preprint arXiv:2502.19623, 2025.
[98] Y. Hu, H. Zhou, N. Cao, C. Li, and C. Hu, "Synthetic CT generation based on CBCT using improved vision transformer CycleGAN," Scientific Reports, vol. 14, no. 1, p. 11455, 2024.
[99] X. Chen et al., "A more effective CT synthesizer using transformers for cone-beam CT-guided adaptive radiotherapy," Frontiers in oncology, vol. 12, p. 988800, 2022.
[100] Z. Liu et al., "Swin transformer: Hierarchical vision transformer using shifted windows," in Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 10012-10022.
[101] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[102] A. Thummerer et al., "SynthRAD2023 Grand Challenge dataset: Generating synthetic CT for radiotherapy," Medical physics, vol. 50, no. 7, pp. 4664-4674, 2023.
[103] Z. Dorjsembe, H.-K. Pao, S. Odonchimed, and F. Xiao, "Conditional diffusion models for semantic 3d brain mri synthesis," IEEE Journal of Biomedical and Health Informatics, 2024.
-
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99680 | -
dc.description.abstract | 磁振造影(MRI)可在無游離輻射的狀況下提供優異的軟組織對比,但缺乏進行劑量計算所需的電子密度資訊。因此,臨床流程必須同時依賴 MRI 與電腦斷層掃描(CT),增加流程複雜度並帶來配準誤差。由 MRI 生成合成 CT(sCT)可望實現僅使用 MRI 的放射治療計畫,但因 MRI 與 CT 強度之間呈現非線性對應而具高度挑戰性。本研究提出 Med2Transformer,一種 3D 雙分支編碼器模型,用於將 MRI 轉換為 sCT。該架構結合卷積式與 Transformer 編碼器,並採用多尺度移窗自注意力機制,同時捕捉細微解剖結構及更廣泛的上下文資訊。模型透過組合體素重建、對抗及感知損失的複合損失函數進行訓練,以提升解剖準確性與影像強度一致性。Med2Transformer 於涵蓋腦部、骨盆和頭頸部之公有與私有資料集上進行評估,在所有解剖區域均達到最先進表現;其中於頭部區域獲得平均絕對誤差(MAE)74.58 HU、結構相似度指數(SSIM)0.8639 及峰值訊雜比(PSNR)27.73 dB。幾何一致性評估亦顯示較高的 Dice 係數及較低的 Hausdorff95 距離,證實解剖保真度。此外,單病例 CyberKnife 劑量學評估顯示劑量分布符合臨床可接受標準,20 個解剖結構的平均劑量誤差為 3.83%。研究結果顯示,Med2Transformer 能生成精確且具泛化能力的 sCT 影像,支援 MRI 單影像放射治療計畫,並提供可擴充的臨床整合解決方案。 | zh_TW
dc.description.abstract | Magnetic Resonance Imaging (MRI) provides high soft-tissue contrast without ionizing radiation but lacks electron density information needed for dose calculation. As a result, clinical workflows depend on both MRI and CT, increasing complexity and registration errors. Synthetic CT (sCT) generation from MRI enables MRI-only radiotherapy planning but remains challenging due to the non-linear mapping between MRI and CT intensities. This study proposes Med2Transformer, a 3D dual-branch encoder model for MRI-to-synthetic CT generation. The architecture merges convolutional and transformer-based encoders, employing multi-scale shifted-window self-attention to effectively represent fine-grained anatomical structures along with broader contextual patterns. The model is trained using a composite loss function comprising voxel-wise reconstruction, adversarial, and perceptual losses to enhance anatomical accuracy and intensity consistency. Med2Transformer was evaluated on public and private datasets spanning brain, pelvis, and head-and-neck regions. It achieved state-of-the-art performance across all anatomical sites, with a mean absolute error (MAE) of 74.58 HU, structural similarity index (SSIM) of 0.8639, and peak signal-to-noise ratio (PSNR) of 27.73 dB in the head region. Geometric consistency assessments further confirmed anatomical fidelity, as reflected by higher Dice coefficients and lower Hausdorff95 distances. Additionally, a single-case CyberKnife dosimetric evaluation demonstrated clinically acceptable dose distributions, with an average mean dose error of 3.83% across 20 anatomical structures. These findings demonstrate that Med2Transformer generates accurate and generalizable sCT images, supporting MRI-only radiotherapy planning and offering a scalable solution for clinical integration. | en
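The abstract reports image-similarity metrics (MAE in HU, PSNR in dB) and Dice coefficients for geometric consistency of the generated sCT. As an illustration only — the helper functions and the 4000 HU intensity window below are assumptions, not taken from the thesis — these metrics can be sketched in plain Python:

```python
import math

def mae(ref, pred):
    # Mean absolute error between reference CT and synthetic CT voxels, in HU.
    return sum(abs(r - p) for r, p in zip(ref, pred)) / len(ref)

def psnr(ref, pred, data_range=4000.0):
    # Peak signal-to-noise ratio in dB; data_range is an assumed HU window.
    mse = sum((r - p) ** 2 for r, p in zip(ref, pred)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(data_range ** 2 / mse)

def dice(mask_a, mask_b):
    # Dice coefficient between two binary masks
    # (e.g. bone segmented on the real CT vs. on the sCT).
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))

ref  = [-1000.0, 0.0, 40.0, 1000.0]   # air, water, soft tissue, bone (HU)
pred = [ -980.0, 5.0, 35.0,  950.0]
print(mae(ref, pred))                             # → 20.0
print(round(psnr(ref, pred), 1))                  # dB
print(round(dice([1, 1, 0, 0], [1, 0, 0, 0]), 3))  # → 0.667
```

SSIM and the Hausdorff95 distance mentioned alongside these are windowed/surface-based measures and are usually taken from a library rather than hand-rolled in practice.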
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-09-17T16:21:26Z. No. of bitstreams: 0 | en
dc.description.provenance | Made available in DSpace on 2025-09-17T16:21:26Z (GMT). No. of bitstreams: 0 | en
dc.description.tableofcontents |
Acknowledgements i
ABSTRACT ii
CONTENTS iv
LIST OF FIGURES viii
LIST OF TABLES ix
ABBREVIATIONS AND ACRONYMS x
NOTATIONS xi
Chapter 1 Introduction 1
1.1 Problem Statement and Motivation 1
1.2 Research Objectives 3
1.3 Proposed Solution: Med2Transformer 3
1.4 Thesis Structure Overview 4
Chapter 2 Theoretical Background 6
2.1 Medical Imaging Modalities 6
2.1.1 Computed Tomography (CT) 6
2.1.2 Magnetic Resonance Imaging (MRI) 8
2.1.3 Comparative Analysis: CT vs. MRI 9
2.1.4 Region-Specific Appearance Differences Across Modalities 9
2.2 Generative Modeling Frameworks 11
2.2.1 Discriminative vs. Generative Models 11
2.2.2 Variational Autoencoders (VAEs) 13
2.2.3 Generative Adversarial Networks (GANs) 14
2.2.4 Diffusion Models 16
2.2.5 Transformer Architecture for Generative Modeling 18
Chapter 3 Literature Review 20
3.1 Traditional Approaches 20
3.1.1 Atlas-Based Methods 21
3.1.2 Patch-Based and Statistical Regression Methods 21
3.1.3 Tissue Classification and Segmentation-Based Approaches 22
3.2 Deep Learning-Based Approaches 23
3.2.1 Convolutional Neural Networks (CNNs) 23
3.2.2 Generative Adversarial Networks (GANs) 24
3.2.3 Diffusion-Based Generative Models 25
3.2.4 Transformer-Based Models and Emerging Architectures 26
3.3 Summary and Gaps 28
Chapter 4 Methodology 30
4.1 Model Architecture: Med2Transformer 30
4.1.1 Architectural Overview 30
4.1.2 Dual-Path Encoder Architecture 31
4.1.3 Multi-Scale Shifted-Window Transformer (MSwintransformer) 33
4.1.4 Feature Fusion Strategy 34
4.1.5 Decoder Architecture 35
4.2 Adversarial Training Framework 35
4.2.1 Generator and Discriminator Design 36
4.2.2 Loss Functions 37
4.3 Datasets and Image Preprocessing 38
4.3.1 Dataset Specifications and Partitioning 39
4.3.2 Image Preprocessing and Data Augmentation 40
Chapter 5 Experimental Setup and Results 41
5.1 Training Strategy 41
5.1.1 Optimization Setup 41
5.1.2 Training Workflow 42
5.1.3 Inference and Validation 42
5.2 Evaluation Metrics 43
5.2.1 Image Similarity Metrics 44
5.3 Geometric Consistency Evaluation 45
5.4 Baselines and Comparative Methods 47
5.4.1 MTT-Net 48
5.4.2 nnU-Net 48
5.4.3 Med-DDPM 49
5.5 Quantitative Results 49
5.6 Qualitative Analysis 52
5.6.1 Visual Assessment 52
5.6.2 Geometric Consistency Evaluation 55
5.7 Sample Dosimetric Evaluation 59
5.8 Ablation Studies 65
5.8.1 Batch Size 65
5.8.2 Patch Size 66
5.8.3 Activation Functions 66
5.8.4 Dilation and Activation Pairing 67
5.9 Model Limitations 68
Chapter 6 Discussion 71
6.1 Quantitative Performance and Model Superiority 71
6.2 Anatomical Fidelity and Structural Consistency 71
6.3 Visual and Perceptual Assessments 72
6.4 Clinical Feasibility: Dosimetric Consistency 73
6.5 Architectural Insights from Ablation Studies 73
6.6 Limitations and Areas for Improvement 74
Chapter 7 Conclusion 76
7.1 Summary of Findings 76
7.2 Limitations 77
7.3 Future Directions 77
7.4 Conclusion 78
REFERENCE 79
dc.language.iso | en | -
dc.subject | MRI 到 CT 影像轉換 | zh_TW
dc.subject | 合成 CT 生成 | zh_TW
dc.subject | Med2Transformer | zh_TW
dc.subject | MRI 單影像放射治療計畫 | zh_TW
dc.subject | Transformer 式生成模型 | zh_TW
dc.subject | Synthetic CT generation | en
dc.subject | MRI-to-CT translation | en
dc.subject | Med2Transformer | en
dc.subject | MRI-only radiotherapy planning | en
dc.subject | Transformer-based generative model | en
dc.title | 用於磁振造影到電腦斷層影像轉換的深度學習框架 | zh_TW
dc.title | Deep Learning Framework for MRI to CT Translation | en
dc.type | Thesis | -
dc.date.schoolyear | 113-2 | -
dc.description.degree | 碩士 | -
dc.contributor.coadvisor | 鮑興國 | zh_TW
dc.contributor.coadvisor | Hsing-Kuo Pao | en
dc.contributor.oralexamcommittee | 廖俊智 | zh_TW
dc.contributor.oralexamcommittee | Jun-Zhi Liao | en
dc.subject.keyword | 合成 CT 生成,MRI 到 CT 影像轉換,Transformer 式生成模型,MRI 單影像放射治療計畫,Med2Transformer | zh_TW
dc.subject.keyword | Synthetic CT generation,MRI-to-CT translation,Transformer-based generative model,MRI-only radiotherapy planning,Med2Transformer | en
dc.relation.page | 85 | -
dc.identifier.doi | 10.6342/NTU202504212 | -
dc.rights.note | 同意授權(全球公開) | -
dc.date.accepted | 2025-08-07 | -
dc.contributor.author-college | 醫學院 | -
dc.contributor.author-dept | 醫療器材與醫學影像研究所 | -
dc.date.embargo-lift | 2025-09-18 | -
Appears in Collections: 醫療器材與醫學影像研究所

Files in This Item:
File | Size | Format
ntu-113-2.pdf | 2.96 MB | Adobe PDF

