DSpace

Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98997
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 程子翔 | zh_TW
dc.contributor.advisor | Kevin T. Chen | en
dc.contributor.author | 林欣達 | zh_TW
dc.contributor.author | Hsin-Ta Lin | en
dc.date.accessioned | 2025-08-20T16:35:31Z | -
dc.date.available | 2025-08-21 | -
dc.date.copyright | 2025-08-20 | -
dc.date.issued | 2025 | -
dc.date.submitted | 2025-08-14 | -
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98997 | -
dc.description.abstract | Tau PET imaging has clinical utility in evaluating tau-related neurodegenerative diseases such as Alzheimer's disease (AD). To improve the radiation safety and scanning efficiency of PET, low-count PET imaging, obtained by reducing the injected radiotracer dose or shortening the scan duration, has become an alternative. However, low-count PET image quality deteriorates because the reduced tracer counts increase image noise, which hinders clinical diagnosis and application. Deep learning methods, now widely used in computer vision, have been applied to low-count PET enhancement to improve image quality while preserving clinical accuracy. Previous studies have shown that deep learning models perform well in enhancing low-count fluorodeoxyglucose (FDG) and amyloid PET images, but research on low-count tau PET enhancement remains relatively scarce; moreover, tau PET tracer uptake is weaker and more focal, which makes enhancement more difficult. This study aims to enhance low-count [18F]-florzolotau brain tau PET images with deterministic and generative deep learning methods, and to examine whether the enhanced images can qualitatively and quantitatively approach full-count images.
The main dataset comprised 52 pairs of T1-weighted MR images and static tau PET series, each PET series containing six five-minute frames. The first frame was selected as the one-sixth low-count PET image, while the full-count image was the average signal over the full thirty minutes. Two deep learning methods were used to enhance the low-count tau PET images. First, we introduced a deterministic deep learning model (a U-Net variant) trained by supervised learning on paired low-count and full-count PET images. Second, we proposed a novel generative strategy, multi-frame generation, which enhances images by combining multiple output frames sampled from a generative deep learning model; a conditional consistency model (cCM) was used to generate the frames. By adjusting the number of output frames, multi-frame generation can be applied to PET images at different count levels. We further validated the prediction reliability, output-condition consistency, and data generalizability of the multi-frame generation strategy.
In quantitative evaluation, the images enhanced by both the deterministic and the generative method showed significant improvements in quality. In visual qualitative assessment, both methods reduced image noise while preserving the main details of high-uptake regions. The U-Net-based deterministic model produced blurrier image textures, whereas the multi-frame generation strategy generated textures closer to the true full-count images. In addition, multi-frame generation outperformed the other models in zero-shot testing on a one-third low-count dataset. Ablation studies further supported its generalization ability, indicating practical potential across datasets, and uncertainty maps provided a measure of the model's confidence during generation.
Both deterministic and generative deep learning methods effectively enhanced low-count tau PET images, and the proposed multi-frame generation strategy performed better in texture preservation and generalizability. Statistical and regional brain analyses confirmed significant improvements in image quality and the accuracy of image values. Although the training data lacked diversity, the results support the strong potential of deep learning for low-count tau PET enhancement, making tau PET a safer and more efficient clinical imaging tool.
zh_TW
dc.description.abstract | Introduction
Tau PET imaging is essential for assessing tau-related neurodegenerative diseases such as Alzheimer's disease. Reducing radiation exposure and scan duration through low-count PET has become an important research focus to improve PET safety and efficiency. However, the image quality of low-count PET typically suffers due to increased noise and reduced tracer uptake. Deep learning methods have become popular in computer vision and have been applied to low-count PET enhancement, aiming to improve image quality while maintaining clinical accuracy. Several deep learning models have been shown to be effective for FDG and amyloid PET, but tau PET images, which exhibit weaker and more focal tracer uptake, remain understudied. In this study, we investigate whether deterministic and generative deep learning methods can synthesize qualitatively and quantitatively accurate enhanced low-count [18F]-florzolotau tau brain PET images.
Materials and Methods
The main dataset included 52 pairs of T1-weighted MR images and static tau PET series with six five-minute frames (90–120 minutes after injection). The first frame was selected as the low-count PET image, corresponding to a 6-fold count reduction. First, we introduced a conventional deterministic deep learning model based on a U-Net variant to enhance low-count tau PET images, trained on paired low-count and full-count PET scans. Second, we proposed a novel generation strategy, multi-frame generation, which synthesizes enhanced PET images by averaging multiple low-count-like PETs sampled from a generative deep learning model, implemented as a conditional consistency model (cCM). Both methods underwent thorough quantitative and qualitative evaluation at global and regional levels. The proposed multi-frame generation was further validated for generalization, including output reliability, condition consistency, and count-level adaptability.
Results
Enhanced images from both deterministic and generative methods demonstrated significant improvements in quality and high similarity to the full-count PET images. The U-Net-based model effectively reduced image noise while preserving critical regions of high uptake, as observed in visual analyses. The generative cCM method achieved comparable performance in quantitative evaluations, and additionally synthesized images with more realistic textures. Furthermore, multi-frame generation demonstrated better performance in zero-shot evaluations and ablation studies, highlighting its flexibility and adaptability across different datasets.
Discussion and Conclusions
Deterministic and generative deep learning methods demonstrated effectiveness in enhancing low-count tau PET images, with multi-frame generation yielding superior texture preservation and generalizability. Statistical and regional analyses confirmed significant improvements, while uncertainty visualization and zero-shot inference highlighted the robustness and adaptability of the proposed generative strategy. Despite limitations in data diversity, these findings reinforce the potential of deep learning to support safer and more efficient tau PET imaging.
en
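The multi-frame generation strategy described in the abstract averages several frames sampled from a generative model into one enhanced image. The core averaging step can be sketched as follows; a noise-adding toy generator stands in for the cCM here, and the image shape, noise level, and function names are illustrative assumptions, not the thesis implementation:

```python
import random
import statistics

def sample_frame(condition, rng):
    # Hypothetical stand-in for one sampling pass of the conditional
    # consistency model (cCM): the real network would map a low-count
    # PET volume to one "low-count-like" frame. Here we just add noise.
    return [v + rng.gauss(0.0, 0.2) for v in condition]

def multi_frame_generation(condition, n_frames=6, seed=0):
    """Average n_frames sampled outputs to form the enhanced image."""
    rng = random.Random(seed)
    frames = [sample_frame(condition, rng) for _ in range(n_frames)]
    # Pixel-wise mean across the sampled frames.
    return [sum(pixel) / n_frames for pixel in zip(*frames)]

low_count = [1.0] * 64  # toy 1-D "image" of uniform uptake
single = sample_frame(low_count, random.Random(1))
enhanced = multi_frame_generation(low_count, n_frames=6)
print(statistics.stdev(enhanced) < statistics.stdev(single))
```

Averaging n independent samples shrinks uncorrelated noise by roughly 1/sqrt(n), which is also why adjusting the number of generated frames lets the same model target different count levels, as the abstract notes.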
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-08-20T16:35:31Z (No. of bitstreams: 0) | en
dc.description.provenance | Made available in DSpace on 2025-08-20T16:35:31Z (GMT). No. of bitstreams: 0 | en
dc.description.tableofcontents | Acknowledgements i
Chinese Abstract iii
Abstract v
Contents vii
List of Figures xi
List of Tables xv
List of Abbreviations xvii
1 Introduction 1
1.1 Positron Emission Tomography 1
1.1.1 Clinical Uses of Tau PET 2
1.1.2 Low-count PET 3
1.2 Deep Learning in Computer Vision 4
1.3 Contributions 6
2 Related Works 9
2.1 Low-count PET Enhancement 9
2.1.1 Low-count PET Enhancement using Deterministic Models 9
2.1.2 Low-count PET Enhancement using Generative Models 10
2.2 Low-count Tau-PET Enhancement 10
3 Materials and Methods 13
3.1 Data collection 13
3.1.1 Six-frame dataset 13
3.1.2 Four-frame dataset 14
3.2 Data Preprocessing 14
3.3 Deep learning Methods 15
3.3.1 U-Net-based Model 15
3.3.2 Conditional Consistency Model 16
3.3.3 Multi-frame Generation 16
3.3.4 Single-step Generation 18
3.4 Experimental Settings 19
3.5 Evaluations 20
3.5.1 Metrics 20
3.5.2 Statistical Analysis 21
3.5.3 Regional Analysis 21
3.5.4 Visualization 23
4 Results 27
4.1 Qualitative Assessment 27
4.2 Quantitative Assessment 27
4.3 Regional Analysis 28
4.4 Validation for Multi-frame Generation 28
4.4.1 Visualization 29
4.4.2 Zero-shot Inference on DRF-3 Dataset 29
4.4.3 Generalization Study 29
4.4.4 Ablation Study 30
5 Discussion 39
5.1 Enhancement Evaluation 39
5.2 Multi-frame Generation Validation 40
5.3 Computational Trade-off 41
5.4 Limitations 41
6 Conclusion 45
Reference 47
Appendix 55
A Network Configurations and Details 55
B Supplementary Results 58
-
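The evaluation chapter listed in the table of contents above (Section 3.5.1, Metrics) compares enhanced images against full-count references with standard image-quality measures. A minimal sketch of one such measure, peak signal-to-noise ratio (PSNR), on toy list-based images; the metric choice and sample values are illustrative assumptions, not the thesis's exact evaluation code:

```python
import math

def psnr(reference, test, data_range=1.0):
    """PSNR between two equally sized images (flattened to 1-D lists)."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)

full_count = [0.0, 0.5, 1.0, 0.5]  # toy reference image
enhanced = [0.1, 0.5, 0.9, 0.5]    # close to the reference
noisy = [0.3, 0.2, 0.7, 0.9]       # far from the reference
# A better enhancement scores a higher PSNR against the full-count image.
print(psnr(full_count, enhanced) > psnr(full_count, noisy))
```

Higher PSNR means lower mean squared error relative to the reference; in the thesis's setting the reference is the thirty-minute full-count PET image.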
dc.language.iso | en | -
dc.subject | 深度學習 | zh_TW
dc.subject | 生成式模型 | zh_TW
dc.subject | 低計數正子斷層造影 | zh_TW
dc.subject | Tau蛋白正子斷層造影 | zh_TW
dc.subject | Generative models | en
dc.subject | Deep learning | en
dc.subject | Tau PET | en
dc.subject | Low-count PET | en
dc.title | 以確定性與生成式深度學習模型強化低計數Tau蛋白正子斷層造影影像 | zh_TW
dc.title | Low-Count Tau PET Enhancement by Deterministic and Generative Deep Learning Models | en
dc.type | Thesis | -
dc.date.schoolyear | 113-2 | -
dc.description.degree | Master | -
dc.contributor.oralexamcommittee | 陳中明;蕭穎聰 | zh_TW
dc.contributor.oralexamcommittee | Chung-Ming Chen;Ing-Tsung Hsiao | en
dc.subject.keyword | 深度學習,生成式模型,低計數正子斷層造影,Tau蛋白正子斷層造影 | zh_TW
dc.subject.keyword | Deep learning, Generative models, Low-count PET, Tau PET | en
dc.relation.page | 65 | -
dc.identifier.doi | 10.6342/NTU202504207 | -
dc.rights.note | Authorized (open access worldwide) | -
dc.date.accepted | 2025-08-15 | -
dc.contributor.author-college | College of Engineering | -
dc.contributor.author-dept | Department of Biomedical Engineering | -
dc.date.embargo-lift | 2025-08-21 | -
Appears in Collections: Graduate Institute of Biomedical Engineering

Files in This Item:
File | Size | Format
ntu-113-2.pdf | 13.08 MB | Adobe PDF


All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated by specific copyright terms.
