Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99680

| Title: | Deep Learning Framework for MRI to CT Translation (用於磁振造影到電腦斷層影像轉換的深度學習框架) |
| Author: | Zolnamar Dorjsembe |
| Advisor: | Furen Xiao (蕭輔仁) |
| Co-Advisor: | Hsing-Kuo Pao (鮑興國) |
| Keywords: | Synthetic CT generation, MRI-to-CT translation, Transformer-based generative model, MRI-only radiotherapy planning, Med2Transformer |
| Publication Year: | 2025 |
| Degree: | Master's |
| Abstract: | Magnetic Resonance Imaging (MRI) provides high soft-tissue contrast without ionizing radiation but lacks the electron density information needed for dose calculation. As a result, clinical workflows depend on both MRI and computed tomography (CT), increasing complexity and introducing registration errors. Synthetic CT (sCT) generation from MRI enables MRI-only radiotherapy planning but remains challenging due to the non-linear mapping between MRI and CT intensities. This study proposes Med2Transformer, a 3D dual-branch encoder model for MRI-to-synthetic-CT generation. The architecture merges convolutional and transformer-based encoders, employing multi-scale shifted-window self-attention to represent fine-grained anatomical structures together with broader contextual patterns. The model is trained with a composite loss function comprising voxel-wise reconstruction, adversarial, and perceptual losses to enhance anatomical accuracy and intensity consistency (a sketch of such a composite loss and the reported metrics follows the metadata table below). Med2Transformer was evaluated on public and private datasets spanning brain, pelvis, and head-and-neck regions. It achieved state-of-the-art performance across all anatomical sites, with a mean absolute error (MAE) of 74.58 HU, structural similarity index (SSIM) of 0.8639, and peak signal-to-noise ratio (PSNR) of 27.73 dB in the head region. Geometric consistency assessments further confirmed anatomical fidelity, as reflected by higher Dice coefficients and lower Hausdorff95 distances. Additionally, a single-case CyberKnife dosimetric evaluation demonstrated clinically acceptable dose distributions, with an average mean dose error of 3.83% across 20 anatomical structures. These findings demonstrate that Med2Transformer generates accurate and generalizable sCT images, supporting MRI-only radiotherapy planning and offering a scalable solution for clinical integration. |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99680 |
| DOI: | 10.6342/NTU202504212 |
| Full-Text License: | Authorized (publicly available worldwide) |
| Electronic Full-Text Release Date: | 2025-09-18 |
| Appears in Collections: | Institute of Medical Device and Imaging (醫療器材與醫學影像研究所) |
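
The abstract describes a composite training objective combining voxel-wise reconstruction, adversarial, and perceptual losses, and reports image-quality metrics in Hounsfield units. This record page does not give the exact formulation, so the LaTeX sketch below shows a generic version of such an objective together with the standard definitions of MAE and PSNR; the weights $\lambda_{\mathrm{rec}}$, $\lambda_{\mathrm{adv}}$, $\lambda_{\mathrm{perc}}$, the perceptual feature extractor $\phi$, and the dynamic range $\mathrm{MAX}$ are illustrative assumptions, not values taken from the thesis.

```latex
% Hedged sketch: a generic composite objective of the kind summarized in the abstract,
% plus the standard MAE and PSNR definitions behind the reported figures (74.58 HU, 27.73 dB).
% \hat{y} denotes the synthetic CT and y the reference CT; the \lambda weights and the
% perceptual feature extractor \phi are assumptions for illustration only.
\begin{align}
  \mathcal{L}_{\mathrm{total}}
    &= \lambda_{\mathrm{rec}} \, \lVert \hat{y} - y \rVert_{1}
     + \lambda_{\mathrm{adv}} \, \mathcal{L}_{\mathrm{adv}}(\hat{y})
     + \lambda_{\mathrm{perc}} \, \lVert \phi(\hat{y}) - \phi(y) \rVert_{2}^{2} \\
  \mathrm{MAE}
    &= \frac{1}{N} \sum_{i=1}^{N} \lvert \hat{y}_{i} - y_{i} \rvert
     \qquad [\mathrm{HU}] \\
  \mathrm{PSNR}
    &= 10 \log_{10}\!\left( \frac{\mathrm{MAX}^{2}}{\tfrac{1}{N}\sum_{i=1}^{N} (\hat{y}_{i} - y_{i})^{2}} \right)
     \qquad [\mathrm{dB}]
\end{align}
```

Composite objectives of this form are common in image-translation work: the voxel-wise term anchors intensities, the adversarial term sharpens texture, and the perceptual term preserves higher-level structure. How Med2Transformer actually defines and weights these terms is specified only in the full thesis.
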
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-113-2.pdf | 2.96 MB | Adobe PDF |
