NTU Theses and Dissertations Repository
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99680
Title: 用於磁振造影到電腦斷層影像轉換的深度學習框架
Deep Learning Framework for MRI to CT Translation
Authors: Zolnamar Dorjsembe
Advisor: 蕭輔仁 (Furen Xiao)
Co-Advisor: 鮑興國 (Hsing-Kuo Pao)
Keyword: Synthetic CT generation, MRI-to-CT translation, Transformer-based generative model, MRI-only radiotherapy planning, Med2Transformer
Publication Year: 2025
Degree: Master's (碩士)
Abstract: Magnetic Resonance Imaging (MRI) provides high soft-tissue contrast without ionizing radiation but lacks the electron density information needed for dose calculation. As a result, clinical workflows depend on both MRI and computed tomography (CT), increasing complexity and introducing registration errors. Synthetic CT (sCT) generation from MRI enables MRI-only radiotherapy planning but remains challenging due to the non-linear mapping between MRI and CT intensities. This study proposes Med2Transformer, a 3D dual-branch encoder model for MRI-to-sCT translation. The architecture combines convolutional and transformer-based encoders, employing multi-scale shifted-window self-attention to capture fine-grained anatomical structures along with broader contextual patterns. The model is trained with a composite loss function comprising voxel-wise reconstruction, adversarial, and perceptual losses to enhance anatomical accuracy and intensity consistency. Med2Transformer was evaluated on public and private datasets spanning brain, pelvis, and head-and-neck regions and achieved state-of-the-art performance across all anatomical sites, including a mean absolute error (MAE) of 74.58 HU, structural similarity index (SSIM) of 0.8639, and peak signal-to-noise ratio (PSNR) of 27.73 dB in the head region. Geometric consistency assessments further confirmed anatomical fidelity, as reflected by higher Dice coefficients and lower Hausdorff95 distances. Additionally, a single-case CyberKnife dosimetric evaluation demonstrated clinically acceptable dose distributions, with an average mean dose error of 3.83% across 20 anatomical structures. These findings indicate that Med2Transformer generates accurate and generalizable sCT images, supporting MRI-only radiotherapy planning and offering a scalable solution for clinical integration.
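The composite objective described in the abstract weights voxel-wise reconstruction, adversarial, and perceptual terms. This record page does not include code, so the following is only a minimal PyTorch-style sketch of such a weighted combination; the function name, the loss weights, and the feat_fn feature extractor are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def composite_loss(sct, ct, disc_pred_fake, feat_fn,
                       w_rec=1.0, w_adv=0.01, w_perc=0.1):
        # Voxel-wise reconstruction: L1 distance between synthetic and reference CT volumes.
        l_rec = F.l1_loss(sct, ct)
        # Adversarial term (least-squares style, assumed): push the discriminator's
        # score for the generated volume toward the "real" label of 1.
        l_adv = F.mse_loss(disc_pred_fake, torch.ones_like(disc_pred_fake))
        # Perceptual term: L1 distance between feature maps of sCT and CT extracted
        # by a fixed pretrained network (passed in as feat_fn).
        l_perc = F.l1_loss(feat_fn(sct), feat_fn(ct))
        # Weighted sum of the three terms; the weights here are placeholders.
        return w_rec * l_rec + w_adv * l_adv + w_perc * l_perc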
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99680
DOI: 10.6342/NTU202504212
Fulltext Rights: Consent granted (worldwide public access)
Embargo Lift Date: 2025-09-18
Appears in Collections: 醫療器材與醫學影像研究所 (Graduate Institute of Medical Device and Imaging)

Files in This Item:
File: ntu-113-2.pdf (2.96 MB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
