Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99686

| Title: | Spatial Resolution Restoration of X-ray Computed Tomography Images Using Convolutional Neural Network (利用卷積神經網路恢復X光電腦斷層掃描影像之空間解析度) |
| Authors: | 林俊丞 Jun-Cheng Lin |
| Advisor: | 吳文超 Wen-Chau Wu |
| Keyword: | Thin-slice, Computed Tomography, Image Restoration, Deep Learning, Dual-View |
| Publication Year: | 2025 |
| Degree: | Master's |
| Abstract: | Thin-slice computed tomography (CT) images provide richer diagnostic detail than conventional thick-slice CT images, yet clinical practice often relies on thicker slices because of limits in storage capacity, transmission bandwidth, and reconstruction workflows, hampering radiologists' observation of fine structures. To address this challenge, this study developed and evaluated a deep learning method to restore high-quality thin-slice images from existing thick-slice CT data. We proposed convolutional neural networks (CNNs) that fuse thick-slice image information from both the axial and coronal views. The model employs a residual learning architecture to learn the non-linear mapping from thick to thin slices. A series of 2D CNN models with varied numbers of parameters was designed for comparison, using traditional interpolation methods as a baseline, with performance measured by the structural similarity index (SSIM) and visual evaluation. Experimental results systematically validated the effectiveness of the proposed dual-view 2D CNN models in restoring thin-slice images. Among the 2D CNN models constructed, the high-parameter model achieved the best performance with an average SSIM of 0.9610, significantly outperforming both a 3D model with a comparable number of parameters and traditional interpolation methods; the optimal model better captured image features and preserved spatial continuity. Pixel-level and visual evaluations also demonstrated the model's ability to restore boundaries, textures, and fine details. Further, we performed numerical simulations to determine the range of slice thicknesses recoverable with our model, as a reference for image-data compression efficiency. In conclusion, this study validated the feasibility and effectiveness of using dual-view deep learning to restore high-quality thin-slice CT images, offering a potential solution to the aforementioned clinical problem. |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99686 |
| DOI: | 10.6342/NTU202501898 |
| Fulltext Rights: | Authorized (open access, worldwide) |
| metadata.dc.date.embargo-lift: | 2025-09-18 |
| Appears in Collections: | Graduate Institute of Medical Device and Imaging |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-113-2.pdf | 14.52 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
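The abstract's two quantitative ingredients can be illustrated in a few lines: the SSIM metric used to report the 0.9610 result, and a linear-interpolation baseline of the kind the thesis compares against. The sketch below is a simplified, hypothetical illustration, not the thesis code: it computes a single-window (global) SSIM rather than the sliding-window variant typically used in practice (e.g. `skimage.metrics.structural_similarity`), and the exact interpolation scheme in the thesis may differ.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window (global) SSIM between two same-sized images.

    Simplified sketch: the thesis averages SSIM over test slices and
    production code would use a sliding Gaussian window. k1 and k2 are
    the standard stabilizing constants from the SSIM definition.
    """
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def linear_interp_baseline(thick_stack, factor):
    """Linear interpolation along the slice (z) axis -- the kind of
    traditional baseline the thesis compares against (illustrative;
    the thesis's exact baseline may differ).

    thick_stack: array of shape (n_slices, H, W).
    Returns a stack with (n_slices - 1) * factor + 1 slices.
    """
    n = thick_stack.shape[0]
    old_z = np.arange(n)
    new_z = np.linspace(0, n - 1, (n - 1) * factor + 1)
    out = np.empty((new_z.size,) + thick_stack.shape[1:])
    for i in range(thick_stack.shape[1]):
        for j in range(thick_stack.shape[2]):
            out[:, i, j] = np.interp(new_z, old_z, thick_stack[:, i, j])
    return out
```

A perfect restoration gives SSIM = 1, and any degradation pulls the score below 1; the thesis's residual CNN then replaces the interpolation step with a learned mapping while keeping the same evaluation.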
