Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99686

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 吳文超 | zh_TW |
| dc.contributor.advisor | Wen-Chau Wu | en |
| dc.contributor.author | 林俊丞 | zh_TW |
| dc.contributor.author | Jun-Cheng Lin | en |
| dc.date.accessioned | 2025-09-17T16:22:30Z | - |
| dc.date.available | 2025-09-18 | - |
| dc.date.copyright | 2025-09-17 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-07-28 | - |
| dc.identifier.citation | [1] Luuk J. Oostveen, Ewoud J. Smit, Helena M. Dekker, Constantinus F. Buckens, Sjoert A. H. Pegge, Frank de Lange, Ioannis Sechopoulos, and Mathias Prokop. Abdominopelvic CT image quality: Evaluation of thin (0.5-mm) slices using deep learning reconstruction. *American Journal of Roentgenology*, 220(3):381–388, 2023. PMID: 36259592.
[2] P. Thevenaz, T. Blu, and M. Unser. Interpolation revisited [medical images application]. *IEEE Transactions on Medical Imaging*, 19(7):739–758, 2000.
[3] R. Lu, P. Marziliano, and C.H. Thng. Comparison of scene-based interpolation methods applied to CT abdominal images. In *The 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society*, volume 1, pages 1561–1564, 2004.
[4] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. *Nature*, 521(7553):436–444, May 2015.
[5] Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen A.W.M. van der Laak, Bram van Ginneken, and Clara I. Sánchez. A survey on deep learning in medical image analysis. *Medical Image Analysis*, 42:60–88, 2017.
[6] Sabina Umirzakova, Shabir Ahmad, Latif U. Khan, and Taegkeun Whangbo. Medical image super-resolution for smart healthcare applications: A comprehensive survey. *Information Fusion*, 103:102075, 2024.
[7] Yanfei Jia, Guangda Chen, and Haotian Chi. Retinal fundus image super-resolution based on generative adversarial network guided with vascular structure prior. *Scientific Reports*, 14(1):22786, Oct 2024.
[8] M. L. de Leeuw den Bouter, G. Ippolito, T. P. A. O’Reilly, R. F. Remis, M. B. van Gijzen, and A. G. Webb. Deep learning-based single image super-resolution for low-field MR brain images. *Scientific Reports*, 12(1):6362, Apr 2022.
[9] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 38(2):295–307, 2015.
[10] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pages 1646–1654, 2016.
[11] Yukai Wang, Qizhi Teng, Xiaohai He, Junxi Feng, and Tingrong Zhang. CT-image of rock samples super resolution using 3D convolutional neural network. *Computers & Geosciences*, 133:104314, 2019.
[12] Li Kang, Bin Tang, Jianjun Huang, and Jianping Li. 3D-MRI super-resolution reconstruction using multi-modality based on multi-resolution CNN. *Computer Methods and Programs in Biomedicine*, 248:108110, 2024.
[13] Xianling Dong, Shiqi Xu, Yanli Liu, Aihui Wang, M. Iqbal Saripan, Li Li, Xiaolei Zhang, and Lijun Lu. Multi-view secondary input collaborative deep learning for lung nodule 3D segmentation. *Cancer Imaging*, 20(1):53, Aug 2020.
[14] Yang Qu, Xiaomin Li, Zhennan Yan, Liang Zhao, Lichi Zhang, Chang Liu, Shuaining Xie, Kang Li, Dimitris Metaxas, Wen Wu, Yongqiang Hao, Kerong Dai, Shaoting Zhang, Xiaofeng Tao, and Songtao Ai. Surgical planning of pelvic tumor using multi-view CNN with relation-context representation learning. *Medical Image Analysis*, 69:101954, 2021.
[15] Lennart R Koetzier, Domenico Mastrodicasa, Timothy P Szczykutowicz, Niels R van der Werf, Adam S Wang, Veit Sandfort, Aart J van der Molen, Dominik Fleischmann, and Martin J Willemink. Deep learning image reconstruction for CT: technical principles and clinical prospects. *Radiology*, 306(3):e221257, 2023.
[16] Heng-Sheng Chao, Yu-Hong Wu, Linda Siana, and Yuh-Min Chen. Generating high-resolution CT slices from two image series using deep-learning-based resolution enhancement methods. *Diagnostics*, 12(11), 2022.
[17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pages 770–778, 2016.
[18] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pages 7132–7141, 2018.
[19] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift, 2015.
[20] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In *Proceedings of the 27th International Conference on Machine Learning (ICML-10)*, pages 807–814, 2010.
[21] Zhou Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli. Image quality assessment: from error visibility to structural similarity. *IEEE Transactions on Image Processing*, 13(4):600–612, 2004.
[22] Kai Zeng and Zhou Wang. 3D-SSIM for video quality assessment. In *2012 19th IEEE International Conference on Image Processing*, pages 621–624, 2012.
[23] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014.
[24] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. *arXiv preprint arXiv:1409.0473*, 2014.
[25] Jianning Chi, Zhiyi Sun, Huan Wang, Pengfei Lyu, Xiaosheng Yu, and Chengdong Wu. CT image super-resolution reconstruction based on global hybrid attention. *Computers in Biology and Medicine*, 150:106112, 2022.
[26] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017.
[27] Yuan Cao, Zixiang Chen, Misha Belkin, and Quanquan Gu. Benign overfitting in two-layer convolutional neural networks. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, *Advances in Neural Information Processing Systems*, volume 35, pages 25237–25250. Curran Associates, Inc., 2022. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99686 | - |
| dc.description.abstract | 薄切片電腦斷層掃描(CT)影像能提供比傳統CT切片影像更豐富的診斷細節,但是臨床上為了配合儲存、傳輸及重建等流程規格,通常只能取得較厚的切片影像,限制了醫師對細微結構的觀察。為解決此挑戰,本研究開發一種深度學習方法,從現有的厚切 CT 影像恢復出高品質的薄切影像。這個方法是基於卷積神經網路 (CNN) 的框架,其特色在於融合軸向與冠狀兩個視角的厚切影像資訊作為輸入,並採用殘差學習架構,透過數據訓練學習從厚切面到薄切面的非線性映射。研究中設計了多個不同參數數量的 2D CNN 模型,以傳統插值方法為基準線,並使用結構相似性指數 (SSIM) 及視覺評估進行性能衡量。實驗結果系統性地驗證了本研究所提出的雙視角 2D CNN 模型能有效重建薄切影像,其中,高參數量 2D CNN 模型在 SSIM 指標上表現最佳,平均 SSIM 值為 0.9610,顯著優於使用與此 2D 架構相同參數量的 3D 模型以及傳統插值方法,更能捕捉影像特徵與保持空間連續性。我們透過像素級與視覺評估呈現模型恢復邊界、紋理及細節的能力,更進一步以模擬實驗探索模型所能夠恢復的切片厚度範圍,作為影像資料壓縮效率的參考。本研究成功驗證了利用雙視角深度學習重建高品質薄切片 CT 影像的可行性與有效性,為前述臨床問題提供了一個解決方案。 | zh_TW |
| dc.description.abstract | Thin-slice computed tomography (CT) images provide richer diagnostic detail than conventional CT images, yet clinical practice often relies on thicker slices owing to limits in storage capacity, transmission bandwidth, and reconstruction processes, which hinders radiologists from observing fine structures. To address this challenge, this study developed and evaluated a deep learning method to restore high-quality thin-slice images from existing thick-slice CT data. We proposed using convolutional neural networks (CNNs) to fuse thick-slice image information from both axial and coronal views. The model employed a residual learning architecture to learn the non-linear mapping from thick to thin slices. A series of 2D CNN models with varying numbers of parameters was designed for comparison, using traditional interpolation methods as a baseline, with performance measured by the structural similarity index (SSIM) and visual evaluation. Experimental results systematically validated the effectiveness of the proposed dual-view 2D CNN models in restoring thin-slice images. Of the 2D CNN models constructed, the high-parameter model achieved the best performance with an average SSIM of 0.9610, significantly outperforming both a 3D model with a comparable number of parameters and traditional interpolation methods. The optimal model also better captured image features and preserved spatial continuity. Pixel-level and visual evaluations further demonstrated the model's ability to restore boundaries and textures. In addition, we performed a numerical simulation to evaluate the slice thickness recoverable with our model (i.e., the achievable compression efficiency). In conclusion, this study validated the feasibility and effectiveness of using dual-view deep learning to restore high-quality thin-slice CT images, offering a potential solution to the abovementioned clinical problem. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-09-17T16:22:30Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-09-17T16:22:30Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Verification Letter from the Oral Examination Committee i
Acknowledgements ii
摘要 iii
Abstract iv
Contents v
List of Figures viii
List of Tables x
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation and Problem Statement 1
1.3 Existing Methods and Limitations 2
1.4 Study Purpose 4
Chapter 2 Materials and Methods 5
2.1 Image Data 7
2.2 Image Preprocessing 7
2.2.1 Normalization 8
2.2.2 Resizing 8
2.2.3 Cropping 10
2.3 Deep Learning Model Structure 11
2.3.1 Feature Extraction 13
2.3.2 Feature Fusion 15
2.3.3 Residual Learning 15
2.3.4 Loss Function 17
2.3.5 Training Settings 19
Chapter 3 Results and Discussion 22
3.1 Training Results 22
3.2 Model Performance 27
3.3 Model Comparison 29
3.4 Performance Comparison of Single-View and Dual-View Models 37
3.5 Simulation 40
3.6 Study Limitations 44
Chapter 4 Conclusion 45
References 45 | - |
| dc.language.iso | en | - |
| dc.subject | 薄切片 | zh_TW |
| dc.subject | 電腦斷層掃描 | zh_TW |
| dc.subject | 影像恢復 | zh_TW |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | 雙視角 | zh_TW |
| dc.subject | Dual-View | en |
| dc.subject | Thin-slice | en |
| dc.subject | Computed Tomography | en |
| dc.subject | Image Restoration | en |
| dc.subject | Deep Learning | en |
| dc.title | 利用卷積神經網路恢復X光電腦斷層掃描影像之空間解析度 | zh_TW |
| dc.title | Spatial Resolution Restoration of X-ray Computed Tomography Images Using Convolutional Neural Network | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 鍾孝文;林益如;蔡炳輝;劉高郎 | zh_TW |
| dc.contributor.oralexamcommittee | Hsiao-Wen Chung;Yi-ru Lin;Ping-Huei Tsai;Kao-Lang Liu | en |
| dc.subject.keyword | 薄切片,電腦斷層掃描,影像恢復,深度學習,雙視角 | zh_TW |
| dc.subject.keyword | Thin-slice,Computed Tomography,Image Restoration,Deep Learning,Dual-View | en |
| dc.relation.page | 50 | - |
| dc.identifier.doi | 10.6342/NTU202501898 | - |
| dc.rights.note | 同意授權(全球公開) | - |
| dc.date.accepted | 2025-07-29 | - |
| dc.contributor.author-college | 醫學院 | - |
| dc.contributor.author-dept | 醫療器材與醫學影像研究所 | - |
| dc.date.embargo-lift | 2025-09-18 | - |
| Appears in Collections: | 醫療器材與醫學影像研究所 | |
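
The abstract above outlines the method at a high level: thick-slice images from the axial and coronal views are fed into a 2D CNN, their features are fused, and a residual connection lets the network learn only the thick-to-thin correction, with SSIM used to score the restored slices. The snippet below is a minimal, hypothetical sketch of that idea, not the thesis implementation: it assumes PyTorch, co-registered and equally sized patches from the two views, and illustrative names such as `DualViewResidualCNN` and `conv_block`.

```python
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """3x3 convolution + batch normalization + ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class DualViewResidualCNN(nn.Module):
    """Hypothetical dual-view model: separate branches extract features from
    axial and coronal thick-slice patches, the features are fused, and a
    residual skip means the network predicts only the correction to the
    interpolated axial input."""

    def __init__(self, base_ch: int = 32):
        super().__init__()
        self.axial_branch = nn.Sequential(conv_block(1, base_ch), conv_block(base_ch, base_ch))
        self.coronal_branch = nn.Sequential(conv_block(1, base_ch), conv_block(base_ch, base_ch))
        self.fusion = nn.Sequential(
            conv_block(2 * base_ch, base_ch),
            nn.Conv2d(base_ch, 1, kernel_size=3, padding=1),  # back to a single-channel image
        )

    def forward(self, axial: torch.Tensor, coronal: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.axial_branch(axial), self.coronal_branch(coronal)], dim=1)
        return axial + self.fusion(feats)  # residual learning


if __name__ == "__main__":
    model = DualViewResidualCNN()
    axial = torch.randn(1, 1, 128, 128)    # thick-slice axial patch, resampled to the thin-slice grid
    coronal = torch.randn(1, 1, 128, 128)  # corresponding coronal-view patch
    print(model(axial, coronal).shape)     # torch.Size([1, 1, 128, 128])
```

Restored slices could then be compared against ground-truth thin slices with an SSIM implementation such as `skimage.metrics.structural_similarity`, mirroring the evaluation reported in the abstract; the dual-branch layout is only one plausible reading of the dual-view fusion described there.
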
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-113-2.pdf | 14.52 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
