Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69361
Title: | Application of Convolutional Neural Networks in the Optimization of PET Image Reconstruction |
Authors: | Ke-Chia Kao |
Advisor: | Cheng-Ying Chou |
Keyword: | PET, Deep Learning, Compressive Sensing |
Publication Year : | 2020 |
Degree: | Master |
Abstract: | Dual-head small-animal positron emission tomography (DHAPET) is often used in preclinical studies of neurological diseases because of its high detection sensitivity and flexible system configuration. However, the dual-plate geometry also leaves many events undetected in the trans-axial direction, which blurs the reconstructed images and degrades reconstruction quality. To address this event-loss problem, this thesis applies a compressive sensing technique combined with deep learning to the image reconstruction algorithm. Unlike traditional reconstruction algorithms, the reconstruction model must be trained before use. The training data were produced by an automated data-generation program written in C that integrated GATE and MATLAB code, and an ISTA-Net-based reconstruction network was built with Python toolkits to learn the model parameters. The reconstructed images exhibit a higher signal-to-noise ratio and contrast-to-noise ratio than those obtained with the traditional maximum likelihood expectation maximization (MLEM) algorithm. |
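The abstract uses MLEM as the baseline for comparison. As context (this is not the thesis code), a minimal NumPy sketch of the standard MLEM multiplicative update, x_{k+1} = x_k / (Aᵀ1) · Aᵀ(y / (A x_k)), assuming a dense system matrix `A` and measured counts `y`:

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Standard MLEM update for emission tomography.

    A : system matrix, shape (n_detectors, n_voxels)
    y : measured counts, shape (n_detectors,)
    """
    x = np.ones(A.shape[1])            # uniform initial estimate
    sens = A.T @ np.ones(A.shape[0])   # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                               # forward projection
        ratio = y / np.maximum(proj, eps)          # measured / estimated counts
        x = x / np.maximum(sens, eps) * (A.T @ ratio)  # multiplicative update
    return x
```

Real PET reconstructions use sparse or on-the-fly projectors rather than a dense matrix; the `eps` guards avoid division by zero in empty projection bins.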
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69361 |
DOI: | 10.6342/NTU202003975 |
Fulltext Rights: | Authorized for a fee |
Appears in Collections: | Department of Biomechatronics Engineering |
Files in This Item:
File | Size | Format
---|---|---
U0001-1808202014440100.pdf (Restricted Access) | 1.97 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.