NTU Theses and Dissertations Repository › College of Engineering › Graduate Institute of Biomedical Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94341
Title: Deep-Learning-Based Amyloid PET Inter-Radiotracer Image Translation on a Paired Dataset
(以深度學習模型進行成對資料集之類澱粉蛋白放射性示蹤劑影像轉換)
Author: Cheng-Han Tsai (蔡承翰)
Advisor: Kevin T. Chen (程子翔)
Keywords: Amyloid, Alzheimer's disease, Deep learning, Medical image translation, PET
Publication Year: 2024
Degree: Master
Abstract: Amyloid PET imaging is a noninvasive method currently used in the workup of Alzheimer's disease. Radiotracers such as [11C]-Pittsburgh Compound-B (PiB), [18F]-florbetaben (FBB), and [18F]-florbetapir are all commonly used in the clinic, but images acquired with different tracers are difficult to convert between, which hampers both longitudinal studies and cross-institutional comparisons. The Centiloid method was proposed to standardize across tracers (Klunk et al., 2015); however, it summarizes a subject's amyloid burden as a single value, and translation between the images themselves remains a difficult task. This work therefore proposes a deformable-convolution-based U-Net (DCN-based U-Net) for PiB-to-FBB image translation.
The GAAIN dataset contains paired MRI, PiB, and FBB PET images; the subjects comprise 9 young healthy controls (YHC), 6 elderly healthy controls (EHC), 9 subjects with mild cognitive impairment (MCI), 8 with Alzheimer's disease (AD), and 2 with frontotemporal dementia (FTD). To keep noise from outside the brain tissue from interfering with model training, the images were coregistered and skull-stripped before training, and every image was individually normalized to the range 0 to 1. The DCN-based U-Net replaces the double-convolution modules of the original U-Net with basic blocks, combining the adaptive spatial aggregation and computational efficiency of convolutions with the ability of self-attention to capture long-range information. The loss function is an L1-L2-structural-similarity (SSIM) mixed loss. The dataset was stratified proportionally and split for model training and testing, with 5-fold cross-validation applied. Model performance was evaluated with several metrics, including peak signal-to-noise ratio (PSNR), SSIM, and root-mean-square error (RMSE). In addition, the AmPQ software was used to compute SUVr and Centiloid values, and visualization tools such as linear regression analysis, Bland-Altman plots, and relative change maps were used for a comprehensive assessment of model performance.
Analyses of five representative subjects (YHC, EHC, MCI, AD, FTD) show that the proposed model performs well on the PET image translation task. The predicted images are more homogeneous than the input PiB images and closer to the characteristics of FBB. The relative change maps likewise show smaller discrepancies between the predicted images and FBB. The Bland-Altman plots show agreement between our predictions and FBB, and on the quantitative metrics our model is also significantly better than PiB PET (p < 0.001).
This work proposes the DCN-based U-Net and demonstrates strong performance on amyloid PET image translation; however, training on such a small dataset limits the model's reliability. Moreover, a one-directional translation model cannot by itself achieve the goal of making amyloid PET images fully interchangeable. The DCN-based U-Net offers a new diagnostic option when the Centiloid score disagrees with visual image assessment; another potential application is generating pseudo-FBB images to augment existing datasets for model training.
Introduction Amyloid PET imaging has been extensively employed in the noninvasive assessment of amyloid-beta accumulation in Alzheimer's disease. Various radiotracers, including [C-11]-Pittsburgh compound-B (PiB), [F-18]-florbetaben (FBB), and [F-18]-florbetapir, are commonly used in clinical settings. However, the limited interchangeability among these radiotracers hinders the feasibility of long-term clinical trials and multicenter comparisons. While the Centiloid scaling method (Klunk et al., 2015) was proposed to standardize radiotracer estimations, it yields only a single global score, and voxel-wise translation remains a formidable task. Herein, we propose a deformable-convolution-based (DCN) U-Net model for PiB-to-FBB image translation.
Materials and Methods The GAAIN dataset used in this work included 35 subjects with paired MRI, PiB, and FBB data, including young, healthy controls (YHC, 9), elderly healthy controls (EHC, 6), and patients with mild cognitive impairment (MCI, 9), Alzheimer's disease (AD, 8), and frontotemporal dementia (FTD, 2). During preprocessing, each subject's PET images were coregistered with their corresponding MRI. Next, the MRI images underwent skull stripping and spatial normalization. All images were individually normalized between 0 and 1 before training.
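The per-subject rescaling to [0, 1] described above can be sketched as follows; the function name and the synthetic volume are illustrative, not from the thesis:

```python
import numpy as np

def minmax_normalize(volume):
    """Rescale one subject's image volume to the [0, 1] range.
    Each subject is normalized individually, so absolute intensity
    differences between subjects do not leak into training."""
    vmin, vmax = float(volume.min()), float(volume.max())
    if vmax == vmin:  # guard against constant (e.g. all-zero) volumes
        return np.zeros_like(volume, dtype=np.float64)
    return (volume - vmin) / (vmax - vmin)

# Synthetic 3-D PET-like volume standing in for a real scan
vol = np.random.default_rng(0).uniform(0.0, 30000.0, size=(4, 4, 4))
norm = minmax_normalize(vol)
print(norm.min(), norm.max())  # → 0.0 1.0
```

Normalizing each subject separately (rather than with dataset-wide statistics) keeps the model focused on spatial uptake patterns rather than absolute tracer intensity.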
The DCN-based U-Net replaced the double convolution module of the original U-Net with basic blocks. This approach combined the benefits of adaptive spatial aggregation and efficient computation from convolutions with the advantage of capturing long-range dependencies from multi-head self-attention. The loss function was an L1-L2-structural similarity (SSIM) mixed loss. The dataset was stratified and split for model training and testing, and 5-fold cross-validation was applied. Performance was evaluated against the ground truth FBB images using various metrics, including peak signal-to-noise ratio (PSNR), SSIM, and root-mean-square error (RMSE). Visualization tools such as correlation plots, Bland-Altman plots, and relative change maps were also implemented to provide a comprehensive assessment of our model's performance. AmPQ, an amyloid PET quantification tool that reports global and regional SUVr, the Centiloid score, and Z-scores, was used to evaluate similarity from a clinical perspective.
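The L1-L2-SSIM mixed loss can be sketched as below. The thesis does not state the term weights or whether SSIM is computed over sliding windows, so unit weights and a simple whole-image SSIM are assumed here:

```python
import numpy as np

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Structural similarity computed over the whole image (no sliding
    window), for images already scaled to [0, 1]."""
    mx, my = x.mean(), y.mean()
    dx, dy = x - mx, y - my
    vx, vy, cov = (dx * dx).mean(), (dy * dy).mean(), (dx * dy).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def mixed_loss(pred, target, w_l1=1.0, w_l2=1.0, w_ssim=1.0):
    """L1 + L2 + (1 - SSIM) mixed loss; the unit weights are illustrative."""
    l1 = np.abs(pred - target).mean()          # mean absolute error
    l2 = ((pred - target) ** 2).mean()         # mean squared error
    return w_l1 * l1 + w_l2 * l2 + w_ssim * (1.0 - global_ssim(pred, target))

img = np.random.default_rng(1).random((64, 64))
print(mixed_loss(img, img))        # identical images → 0.0
print(mixed_loss(img, 1.0 - img))  # dissimilar images → a positive loss
```

Combining pixel-wise terms (L1, L2) with a structural term (SSIM) is a common choice for image translation, as the pixel terms anchor intensities while SSIM preserves local contrast and texture.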
Results The performance of our proposed model was demonstrated on five representative subjects (YHC, EHC, MCI, FTD, and AD). The prediction images, when compared with PiB PET images, showed increased homogeneity, closely resembling the texture of FBB. The relative change maps indicated that the visual disparities between the predicted and FBB images were less conspicuous than those between PiB and FBB images. The Bland-Altman plot illustrated consistency between the prediction images and the ground truth. High overall PSNR and SSIM values and low RMSE values further confirmed the significant improvement (p < 0.001) in image quality.
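The PSNR and RMSE metrics reported above are standard image-quality measures; a minimal sketch, assuming images normalized to [0, 1]:

```python
import numpy as np

def rmse(pred, target):
    """Root-mean-square error between two images."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, data_range]."""
    err = rmse(pred, target)
    if err == 0.0:
        return float("inf")  # identical images
    return 20.0 * np.log10(data_range / err)

target = np.zeros((8, 8))
pred = np.full((8, 8), 0.1)   # constant error of 0.1 everywhere
print(rmse(pred, target))     # ≈ 0.1
print(psnr(pred, target))     # ≈ 20 dB, since 20·log10(1/0.1) = 20
```

Note the directions: higher PSNR and SSIM indicate better agreement with the ground truth, while lower RMSE does.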
Discussion and Conclusion The DCN-based U-Net is proposed for amyloid PET image conversion and demonstrates high performance on the PET image-to-image translation task. However, a model trained on such a limited dataset cannot yet be considered sufficiently reliable. In addition, one-directional translation alone cannot establish interchangeability among the amyloid PET tracers. The proposed model provides modality translation and an image-based alternative when inconsistencies between visual assessments and the Centiloid scale occur; another potential application of the proposed model is generating pseudo-FBB images to augment existing datasets for model training.
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94341
DOI: 10.6342/NTU202304320
Full-Text License: Authorized (open access worldwide)
Electronic Full-Text Release Date: 2029-07-30
Appears in Collections: Graduate Institute of Biomedical Engineering

Files in This Item:
File: ntu-112-2.pdf (available online after 2029-07-30), 26.01 MB, Adobe PDF


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
