Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98997
Title: Low-Count Tau PET Enhancement by Deterministic and Generative Deep Learning Models
Author: Hsin-Ta Lin (林欣達)
Advisor: Kevin T. Chen (程子翔)
Keywords: Deep learning, Generative models, Low-count PET, Tau PET
Publication Year: 2025
Degree: Master's
Abstract: Tau PET imaging is clinically useful for evaluating tau-related neurodegenerative diseases such as Alzheimer's disease (AD). To improve the radiation safety and scanning efficiency of PET, low-count PET images, obtained by reducing the injected radiotracer dose or shortening the scan duration, have emerged as an alternative. However, the reduced tracer counts increase image noise and degrade low-count PET image quality, limiting clinical diagnosis and application. Deep learning methods, now widely used in computer vision, have been applied to low-count PET enhancement to improve image quality while preserving clinical accuracy. Previous studies have shown that deep learning models perform well in enhancing low-count fluorodeoxyglucose (FDG) and amyloid PET images, but research on low-count tau PET enhancement remains relatively scarce; moreover, tau PET tracer uptake is weaker and more focal, which makes enhancement more difficult. This study aims to enhance low-count [18F]-florzolotau brain tau PET images with deterministic and generative deep learning methods, and to examine whether the enhanced images can qualitatively and quantitatively approach full-count images.
The main dataset comprised 52 pairs of T1-weighted MR images and static tau PET series, each PET series containing six five-minute frames; the first frame was selected as the one-sixth low-count PET image, while the full-count image was the average over the full thirty minutes of signal. Two deep learning approaches were used to enhance the low-count tau PET images. First, we introduced a deterministic deep learning model (a U-Net variant) trained with supervised learning on paired low-count and full-count PET images. Second, we proposed a novel generation strategy, multi-frame generation, which enhances images by combining multiple output frames from a generative deep learning model; a conditional consistency model (cCM) was used to generate the frames. By adjusting the number of output frames, multi-frame generation can be applied to PET images at different count levels. We further validated the prediction reliability, output-condition consistency, and data generalizability of the multi-frame generation strategy.
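The frame-splitting scheme above (one five-minute frame as the one-sixth low-count image; the thirty-minute average as the full-count image) can be illustrated with a small Poisson-noise simulation. The array size and activity values below are synthetic stand-ins for illustration only, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" activity map (stand-in for a tau PET slice).
activity = rng.uniform(50.0, 200.0, size=(64, 64))

# Six 5-minute frames: each frame's counts are Poisson-distributed
# around the same underlying activity.
frames = rng.poisson(activity, size=(6, 64, 64)).astype(float)

low_count = frames[0]        # first frame -> one-sixth low-count image
full_count = frames.mean(0)  # 30-minute average -> full-count image

# Averaging six independent frames cuts the noise standard deviation
# by roughly sqrt(6), which is why the full-count image looks cleaner.
noise_low = (low_count - activity).std()
noise_full = (full_count - activity).std()
print(noise_low / noise_full)  # ≈ sqrt(6) ≈ 2.45
```

This is only the counting-statistics view of the 6-fold reduction; the thesis's deep learning models aim to recover full-count-like quality from the single low-count frame alone.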
In quantitative evaluation, images enhanced by both the deterministic and the generative methods showed significant quality improvements. In visual qualitative assessment, both methods reduced image noise while preserving the main details of high-uptake regions. The U-Net-based deterministic model produced blurrier image textures, whereas the multi-frame generation strategy generated textures closer to the true full-count images. In addition, multi-frame generation outperformed the other models in zero-shot testing on a one-third low-count dataset. Ablation studies further supported the generalizability of multi-frame generation, indicating its practical potential across datasets, and uncertainty maps conveyed the model's confidence in the generation process.
Both deterministic and generative deep learning methods effectively enhanced low-count tau PET images, and the proposed multi-frame generation strategy was superior in texture preservation and generalizability. Statistical and regional brain analyses confirmed significant improvements in image quality and the accuracy of image values. Although the diversity of the training data was limited, the results support the strong potential of deep learning for low-count tau PET enhancement, making tau PET a safer and more efficient clinical imaging tool.
Introduction
Tau PET imaging is essential for assessing tau-related neurodegenerative diseases such as Alzheimer's disease. Reducing radiation exposure and scan duration through low-count PET has become an important research focus to improve PET safety and efficiency. However, the image quality of low-count PET typically suffers from increased noise due to reduced tracer counts. Deep learning methods have become popular in computer vision and have been applied to low-count PET enhancement, aiming to improve image quality while maintaining clinical accuracy. Several deep learning models have been shown to be effective for FDG and amyloid PET but remain understudied for tau PET images, which exhibit weaker and more focal tracer uptake. In this study, we investigate whether deterministic and generative deep learning methods can synthesize qualitatively and quantitatively accurate enhancements of low-count [18F]-florzolotau tau brain PET images.
Materials and Methods
The main dataset included 52 pairs of T1-weighted MR images and static tau PET series with six five-minute frames (90–120 minutes after injection). The first frame was selected as the low-count PET image, corresponding to a 6-fold count reduction. First, we introduced a conventional deterministic deep learning model based on a U-Net variant to enhance low-count tau PET images, trained on paired low-count and full-count PET scans. Second, we proposed a novel generation strategy, multi-frame generation, which synthesizes enhanced PET images by averaging multiple low-count-like PETs sampled from a generative deep learning model, implemented using a conditional consistency model (cCM). Both methods underwent thorough quantitative and qualitative evaluation at both global and regional levels. The proposed multi-frame generation was further validated for generalization, including output reliability, condition consistency, and count-level adaptability.
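A minimal sketch of the multi-frame generation idea, with a toy Gaussian sampler standing in for the conditional consistency model. The function names, noise model, and magnitudes are illustrative assumptions, not the thesis implementation; a real cCM would be trained to sample plausible frames from the low-count condition alone, whereas here we idealize by scattering samples around a known clean image:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_frame(clean: np.ndarray) -> np.ndarray:
    """Toy stand-in for one generative sample: a low-count-like frame
    scattered around the clean signal (idealized cCM behavior)."""
    return clean + rng.normal(0.0, 10.0, size=clean.shape)

def multi_frame_generation(clean: np.ndarray, n_frames: int) -> np.ndarray:
    """Average several sampled frames to form the enhanced image.
    More frames emulate a higher count level (count-level adaptability)."""
    return np.mean([sample_frame(clean) for _ in range(n_frames)], axis=0)

clean = rng.uniform(50.0, 200.0, size=(64, 64))  # synthetic stand-in image

err_1 = np.abs(sample_frame(clean) - clean).mean()               # one frame
err_6 = np.abs(multi_frame_generation(clean, 6) - clean).mean()  # six frames
# Averaging six sampled frames cuts the residual error by roughly sqrt(6).
```

The spread across sampled frames is also what makes the uncertainty maps mentioned later possible: a per-pixel standard deviation over the samples indicates where the generator is less confident.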
Results
Enhanced images from both deterministic and generative methods demonstrated significant improvements in quality and high similarity to the full-count PET images. The U-Net-based model effectively reduced image noise while preserving critical regions of high uptake, as observed in visual analyses. The generative cCM method achieved comparable performance in quantitative evaluations, and additionally synthesized images with more realistic textures. Furthermore, multi-frame generation demonstrated better performance in zero-shot evaluations and ablation studies, highlighting its flexibility and adaptability across different datasets.
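Quantitative similarity to the full-count reference is commonly measured with metrics such as the peak signal-to-noise ratio (PSNR); the abstract does not name the exact metrics used, so the following is a generic illustration on synthetic arrays, not the thesis's evaluation code:

```python
import numpy as np

def psnr(reference: np.ndarray, image: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB against a reference image
    (peak taken from the reference here, for illustration)."""
    mse = np.mean((reference - image) ** 2)
    return 10.0 * np.log10(reference.max() ** 2 / mse)

rng = np.random.default_rng(2)
full = rng.uniform(50.0, 200.0, size=(64, 64))            # stand-in full-count
noisy = full + rng.normal(0.0, 15.0, size=full.shape)     # low-count-like
enhanced = full + rng.normal(0.0, 5.0, size=full.shape)   # after enhancement

# The less noisy "enhanced" image scores a higher PSNR than the noisy one.
gain_db = psnr(full, enhanced) - psnr(full, noisy)
```

Note that PSNR alone does not capture texture realism, which is why visual assessment is reported alongside the quantitative scores.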
Discussion and Conclusions
Deterministic and generative deep learning methods demonstrated effectiveness in enhancing low-count tau PET images, with multi-frame generation yielding superior texture preservation and generalizability. Statistical and regional analyses confirmed significant improvements, while uncertainty visualization and zero-shot inference highlighted the robustness and adaptability of the proposed generative strategy. Despite limitations in data diversity, these findings reinforce the potential of deep learning to support safer and more efficient tau PET imaging.
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98997
DOI: 10.6342/NTU202504207
Full-Text License: Authorized (worldwide open access)
Electronic Full-Text Release Date: 2025-08-21
Appears in Collections: Graduate Institute of Biomedical Engineering

Files in this item:
ntu-113-2.pdf (13.08 MB, Adobe PDF)


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
