Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/87665
Full Metadata Record
DC Field | Value | Language
dc.contributor.advisor: 林永松 (zh_TW)
dc.contributor.advisor: Yeong-Sung Lin (en)
dc.contributor.author: 陳宇鑫 (zh_TW)
dc.contributor.author: Yu-Hsin Chen (en)
dc.date.accessioned: 2023-07-11T16:12:32Z
dc.date.available: 2025-01-01
dc.date.copyright: 2023-07-11
dc.date.issued: 2022
dc.date.submitted: 2002-01-01
dc.identifier.citation:
H. Sung, J. Ferlay, R. L. Siegel, M. Laversanne, I. Soerjomataram, A. Jemal, and F. Bray, “Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries,” CA: A Cancer Journal for Clinicians, vol. 71, no. 3, pp. 209–249, 2021.
P. Campadelli, E. Casiraghi, and A. Esposito, “Liver Segmentation from Computed Tomography Scans: A survey and a New Algorithm,” Artificial Intelligence in Medicine, vol. 45, no. 2, pp. 185–196, 2009.
“CT/ MRI LI-RADS® v2018 CORE.” https://www.acr.org/-/media/ACR/Files/RADS/LI-RADS/LI-RADS-2018-Core.pdf, 2018. [Online; accessed December 20, 2021].
X. Li, H. Chen, X. Qi, Q. Dou, C.-W. Fu, and P.-A. Heng, “H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation From CT Volumes,” IEEE Transactions on Medical Imaging, vol. 37, no. 12, pp. 2663–2674, 2018.
F. F. Li, J. Johnson, and S. Yeung, “Lecture 11: Detection and Segmentation.” http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture11.pdf, 2017. [Online; accessed December 20, 2021].
Y. Han, X. Li, B. Wang, and L. Wang, “Boundary Loss-Based 2.5D Fully Convolutional Neural Networks Approach for Segmentation: A Case Study of the Liver and Tumor on Computed Tomography,” Algorithms, vol. 14, no. 5, 2021.
P. Bilic et al., “The Liver Tumor Segmentation Benchmark (LiTS).” arXiv:1901.04056, 2019.
CodaLab, “LiTS - Liver Tumor Segmentation Challenge.” https://competitions.codalab.org/competitions/17094. [Online; accessed December 20, 2021].
A. Kalra, “Chapter 9 - Developing FE Human Models From Medical Images,” in Basic Finite Element Method as Applied to Injury Biomechanics (K.-H. Yang, ed.), pp. 389–415, Academic Press, 2018.
J. Broder and R. Preston, “Chapter 1 - Imaging the Head and Brain,” in Diagnostic Imaging for the Emergency Physician (J. Broder, ed.), pp. 1–45, Saint Louis: W.B. Saunders, 2011.
K. Cai, R. Yang, H. Chen, L. Li, J. Zhou, S. Ou, and F. Liu, “A Framework Combining Window Width-Level Adjustment and Gaussian Filter-Based Multi-Resolution for Automatic Whole Heart Segmentation,” Neurocomputing, vol. 220, pp. 138–150, 2017.
X. Han, “Automatic Liver Lesion Segmentation Using A Deep Convolutional Neural Network Method.” arXiv:1704.07239, 2017.
H. Jiang, T. Shi, Z. Bai, and L. Huang, “AHCNet: An Application of Attention Mechanism and Hybrid Connection for Liver Tumor Segmentation in CT Volumes,” IEEE Access, vol. 7, pp. 24898–24909, 2019.
S.-T. Tran, C.-H. Cheng, and D.-G. Liu, “A Multiple Layer U-Net, Un-Net, for Liver and Liver Tumor Segmentation in CT,” IEEE Access, vol. 9, pp. 3752–3764, 2021.
X.-F. Xi, L. Wang, V. S. Sheng, Z. Cui, B. Fu, and F. Hu, “Cascade U-ResNets for Simultaneous Liver and Lesion Segmentation,” IEEE Access, vol. 8, pp. 68944–68952, 2020.
L. Soler, H. Delingette, G. Malandain, J. Montagnat, N. Ayache, C. Koehl, O. Dourthe, B. Malassagne, M. Smith, D. Mutter, and J. Marescaux, “Fully automatic anatomical, pathological, and functional segmentation from CT scans for hepatic surgery,” Comput. Aided Surg., vol. 6, no. 3, pp. 131–142, 2001.
M. Ciecholewski and M. R. Ogiela, “Automatic Segmentation of Single and Multiple Neoplastic Hepatic Lesions in CT Images,” in IWINAC, 2007.
J. Moltz, L. Bornemann, V. Dicken, and H. Peitgen, “Segmentation of Liver Metastases in CT Scans by Adaptive Thresholding and Morphological Processing,” The MIDAS Journal, 2008.
L. Ruskó, G. Bekes, G. Németh, and M. Fidrich, “Fully Automatic Liver Segmentation for Contrast-Enhanced CT Images,” in Proc. MICCAI Workshop 3D Segmentation in the Clinic: A Grand Challenge, pp. 143–150, 2007.
D. Wong, J. Liu, F. Yin, Q. Tian, W. Xiong, J. Zhou, Q. Yingyi, T. Han, S. Venkatesh, and S. Wang, “A Semi-Automated Method for Liver Tumor Segmentation Based on 2D Region Growing with,” The MIDAS Journal, 2008.
L. Massoptier and S. Casciaro, “A New Fully Automatic and Robust Algorithm for Fast Segmentation of Liver Tissue and Tumors from CT Scans,” Eur. Radiol., vol. 18, no. 8, pp. 1658–1665, 2008.
Y. Häme, “Liver Tumor Segmentation Using Implicit Surface Evolution,” The MIDAS Journal, 2008.
M. Freiman, O. Cooper, D. Lischinski, and L. Joskowicz, “Liver Tumors Segmentation from CTA Images Using Voxels Classification and Affinity Constraint Propagation,” Int. J. Comput. Assist. Radiol. Surg., vol. 6, no. 2, pp. 247–255, 2011.
Y. Li, S. Hara, and K. Shimura, “A Machine Learning Approach for Locating Boundaries of Liver Tumors in CT Images,” in 18th International Conference on Pattern Recognition (ICPR’06), vol. 1, pp. 400–403, 2006.
P.-H. Conze, V. Noblet, F. Rousseau, F. Heitz, V. de Blasi, R. Memeo, and P. Pessaux, “Scale-Adaptive Supervoxel-Based Random Forests for Liver Tumor Segmentation in Dynamic Contrast-Enhanced CT Scans,” Int. J. Comput. Assist. Radiol. Surg., vol. 12, no. 2, pp. 223–233, 2017.
Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-Based Learning Applied to Document Recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
J. Long, E. Shelhamer, and T. Darrell, “Fully Convolutional Networks for Semantic Segmentation,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3431–3440, 2015.
A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Advances in Neural Information Processing Systems (F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, eds.), vol. 25, Curran Associates, Inc., 2012.
K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition.” arXiv:1409.1556, 2015.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going Deeper with Convolutions,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9, 2015.
O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pp. 234–241, Springer International Publishing, 2015.
“KiTS19 Results.” http://results.kits-challenge.org/miccai2019/. [Online; accessed September 7, 2022].
M. H. Vu, G. Grimbergen, T. Nyholm, and T. Löfstedt, “Evaluation of Multislice Inputs to Convolutional Neural Networks for Medical Image Segmentation,” Medical Physics, vol. 47, no. 12, pp. 6216–6231, 2020.
F. Zhuang, Z. Qi, K. Duan, D. Xi, Y. Zhu, H. Zhu, H. Xiong, and Q. He, “A Comprehensive Survey on Transfer Learning.” arXiv:1911.02685, 2020.
N. Tajbakhsh, J. Y. Shin, S. R. Gurudu, R. T. Hurst, C. B. Kendall, M. B. Gotway, and J. Liang, “Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning?,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1299–1312, 2016.
K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.
G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely Connected Convolutional Networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269, 2017.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A Large-Scale Hierarchical Image Database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255, 2009.
J. Ma, J. Chen, M. Ng, R. Huang, Y. Li, C. Li, X. Yang, and A. L. Martel, “Loss Odyssey in Medical Image Segmentation,” Medical Image Analysis, vol. 71, p. 102035, 2021.
T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal Loss for Dense Object Detection.” arXiv:1708.02002, 2018.
M. Drozdzal, E. Vorontsov, G. Chartrand, S. Kadoury, and C. Pal, “The Importance of Skip Connections in Biomedical Image Segmentation.” arXiv:1608.04117, 2016.
F. Milletari, N. Navab, and S.-A. Ahmadi, “V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation.” arXiv:1606.04797, 2016.
S. S. M. Salehi, D. Erdogmus, and A. Gholipour, “Tversky Loss Function for Image Segmentation Using 3D Fully Convolutional Deep Networks.” arXiv:1706.05721, 2017.
D. Karimi and S. E. Salcudean, “Reducing the Hausdorff Distance in Medical Image Segmentation With Convolutional Neural Networks,” IEEE Transactions on Medical Imaging, vol. 39, no. 2, pp. 499–513, 2020.
S. A. Taghanaki, Y. Zheng, S. Kevin Zhou, B. Georgescu, P. Sharma, D. Xu, D. Comaniciu, and G. Hamarneh, “Combo Loss: Handling Input and Output Imbalance in Multi-Organ Segmentation,” Computerized Medical Imaging and Graphics, vol. 75, pp. 24–33, 2019.
F. Isensee and K. H. Maier-Hein, “An Attempt at Beating the 3D U-Net.” arXiv:1908.02182, 2019.
A. Prasoon, K. Petersen, C. Igel, F. Lauze, E. Dam, and M. Nielsen, “Deep Feature Learning for Knee Cartilage Segmentation Using a Triplanar Convolutional Neural Network,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2013 (K. Mori, I. Sakuma, Y. Sato, C. Barillot, and N. Navab, eds.), (Berlin, Heidelberg), pp. 246–253, Springer Berlin Heidelberg, 2013.
V. Sundaresan, G. Zamboni, P. M. Rothwell, M. Jenkinson, and L. Griffanti, “Triplanar Ensemble U-Net Model for White Matter Hyperintensities Segmentation on MR Images,” Medical Image Analysis, vol. 73, p. 102184, 2021.
“Normalization | Machine Learning - Google Developers.” https://developers.google.com/machine-learning/data-prep/transform/normalization. [Online; accessed September 2, 2022].
Y. Tang, Y. Tang, Y. Zhu, J. Xiao, and R. M. Summers, “E2Net: An Edge Enhanced Network for Accurate Liver and Tumor Segmentation on CT Scans.” arXiv:2007.09791, 2020.
“Keras: the Python Deep Learning API.” https://keras.io/. [Online; accessed September 2, 2022].
“PyTorch.” https://pytorch.org/. [Online; accessed December 22, 2021].
M. Tan and Q. V. Le, “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.” arXiv:1905.11946, 2020.
“Morphological Transformations - OpenCV.” https://docs.opencv.org/4.x/d9/d61/tutorial_py_morphological_ops.html. [Online; accessed September 2, 2022].
“Matplotlib — Visualization with Python.” https://matplotlib.org/. [Online; accessed September 3, 2022].
“Welcome to Python.org.” https://www.python.org/. [Online; accessed September 3, 2022].
E. Vorontsov, G. Chartrand, A. Tang, C. Pal, and S. Kadoury, “Liver Lesion Segmentation Informed by Joint Liver Segmentation,” 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 1332–1335, 2018.
Y. Fu, “Image Classification via Fine-Tuning with EfficientNet - Keras.” https://keras.io/examples/vision/image_classification_efficientnet_fine_tuning/, 2020. [Online; accessed September 2, 2022].
Q. Xie, M.-T. Luong, E. Hovy, and Q. V. Le, “Self-Training With Noisy Student Improves ImageNet Classification,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10684–10695, 2020.
S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.” arXiv:1502.03167, 2015.
F. Yu and V. Koltun, “Multi-Scale Context Aggregation by Dilated Convolutions.” arXiv:1511.07122, 2015.
P. Wang, P. Chen, Y. Yuan, D. Liu, Z. Huang, X. Hou, and G. Cottrell, “Understanding Convolution for Semantic Segmentation.” arXiv:1702.08502, 2017.
“Multidimensional Image Processing (scipy.ndimage).” https://docs.scipy.org/doc/scipy/reference/ndimage.html. [Online; accessed September 3, 2022].
“SciPy.” https://scipy.org/. [Online; accessed September 3, 2022].
Z. Bai, H. Jiang, S. Li, and Y.-D. Yao, “Liver Tumor Segmentation Based on Multi-Scale Candidate Generation and Fractal Residual Network,” IEEE Access, vol. 7, pp. 82122–82133, 2019.
“tf.keras.preprocessing.image.ImageDataGenerator - TensorFlow.” https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator. [Online; accessed September 3, 2022].
“TensorFlow.” https://www.tensorflow.org/. [Online; accessed September 3, 2022].
B. G. do Amaral, “Elastic Transform for Data Augmentation - Kaggle.” https://www.kaggle.com/code/bguberfain/elastic-transform-for-data-augmentation/notebook, 2016. [Online; accessed September 2, 2022].
“Shearing Transformation - JavaFX - Tutorialspoint.” https://www.tutorialspoint.com/javafx/shearing_transformation.htm. [Online; accessed September 2, 2022].
J. Canny, “A Computational Approach to Edge Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679–698, 1986.
“Canny Edge Detection - OpenCV.” https://docs.opencv.org/4.x/da/d22/tutorial_py_canny.html. [Online; accessed September 3, 2022].
“OpenCV: Home.” https://opencv.org/. [Online; accessed September 3, 2022].
“scipy.ndimage.gaussian_filter — SciPy v1.9.1 Manual.” https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.gaussian_filter.html. [Online; accessed September 3, 2022].
“cv::normalize - Operations on arrays - OpenCV.” https://docs.opencv.org/3.4/d2/de8/group__core__array.html#ga87eef7ee3970f86906d69a92cbf064bd. [Online; accessed September 3, 2022].
D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization.” arXiv:1412.6980, 2014.
“NumPy.” https://numpy.org/. [Online; accessed September 3, 2022].
“NIfTI: — Neuroimaging Informatics Technology Initiative.” https://nifti.nimh.nih.gov/. [Online; accessed September 3, 2022].
N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang, “On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima.” arXiv:1609.04836, 2016.
“skimage.util.view_as_windows - Module: util — skimage v0.19.2 docs.” https://scikit-image.org/docs/stable/api/skimage.util.html#skimage.util.view_as_windows. [Online; accessed September 3, 2022].
“scikit-image: Image Processing in Python — scikit-image.” https://scikit-image.org/. [Online; accessed September 3, 2022].
J. Zhang, Y. Xie, P. Zhang, H. Chen, Y. Xia, and C. Shen, “Light-Weight Hybrid Convolutional Network for Liver Tumor Segmentation,” in Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pp. 4271–4277, International Joint Conferences on Artificial Intelligence Organization, 2019.
N. Alalwan, A. Abozeid, A. A. ElHabshy, and A. Alzahrani, “Efficient 3D Deep Learning Model for Medical Image Semantic Segmentation,” Alexandria Engineering Journal, vol. 60, no. 1, pp. 1231–1239, 2021.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/87665
dc.description.abstract (zh_TW): 電腦斷層是目前肝臟腫瘤診斷中應用最廣泛的醫學成像方式。傳統上,放射科醫師透過肉眼從電腦斷層圖像中識別出肝臟及肝臟腫瘤的區域後,再以逐一切片勾畫肝臟和腫瘤的方式,作為後續治療的依據。然而,該工作相當耗費時間與勞力,而且缺乏一個客觀且明確定義的判斷方法。因此,在臨床實踐中,肝臟和腫瘤分割過程的自動化是有價值的。
近年來,深度學習的技術促使醫療影像語義分割的結果有了顯著的進步,其中 3D 分割網路在許多的相關任務當中取得優異的表現。然而,3D 卷積本身有計算資源上的限制,2D 卷積則難以學習到相鄰 CT 切片之間的上下文資訊。因此,本論文提出了一個基於 2D 的深度學習框架用於電腦斷層圖像中肝臟及腫瘤的自動分割,並應用一些相關文獻提出的可以提升 2D 分割網路表現的方式,包含輸入多張相鄰的二維切片,兩階段級聯式的模型建置方式,使用 EfficientNet 做為分割網路的編碼器,以及三平面集成。如此,所提出的方法能夠在相對較低的計算資源需求之下達到良好的肝臟和腫瘤分割表現。
除此之外,多數相關研究只注重在整個模型建置流程其中一個層面上的改良,例如網路架構或是損失函數的設計。故本論文將以整個深度學習框架當中的各項實作細節作為研究主體,從一開始的資料前處理、網路架構、損失函數、多模型分割結果集成,再到最後的後處理,試圖從相關實驗中找出一個最佳的方法組合以更進一步地提升所提出方法肝臟和腫瘤的分割準確度。
本篇論文的研究貢獻如下。第一,本論文提出了一個基於 2D 的深度學習框架用於電腦斷層圖像中肝臟及腫瘤的自動分割,以解決相關 3D 方法於計算資源上的限制,其中輸入多張相鄰的二維切片,兩階段級聯式的模型建置方式,使用 EfficientNet 做為分割網路的編碼器,以及三平面集成的作法使其能夠在相對較低的計算資源需求之下達到良好的肝臟和腫瘤分割表現。第二,多數相關研究只注重在整個模型建置流程其中一個層面上的改良,故本論文以整個深度學習框架當中的各項實作細節作為研究主體,從一開始的資料前處理到最後的後處理。基於從實驗結果得到的最佳方法組合,所提出方法於肝臟和腫瘤分割的表現有更進一步的提升,在 LiTS 測試集肝臟分割的 Dice per case 達到 0.9660 的水準,腫瘤分割的 Dice per case 則達到 0.7180 的水準。同時,這些實驗結果將會形成一套分割模型表現的改善方案,可供從事相關工作的研究人員參考。
dc.description.abstract (en): Computed tomography is currently the most widely used medical imaging modality for liver tumor diagnosis. Traditionally, radiologists identify the liver and tumor regions in CT images with the naked eye and then delineate those regions slice by slice as the basis for subsequent treatment. However, this work is time-consuming and labor-intensive, and it lacks an objective, clearly defined measurement method. Therefore, automating the liver and tumor segmentation process is valuable in clinical practice.
In recent years, deep learning techniques have driven significant progress in semantic segmentation of medical images, with 3D segmentation networks achieving excellent performance on many related tasks. However, 3D convolution is computationally demanding, while 2D convolution struggles to capture contextual information across adjacent CT slices. Therefore, this thesis proposes a 2D-based deep learning framework for automatic liver and tumor segmentation on computed tomography images and applies several approaches from the related literature that improve the performance of 2D segmentation networks: inputting multiple adjacent 2D slices, a two-stage cascaded model-building approach, using EfficientNet as the segmentation network's encoder, and a triplanar ensemble. In this way, the proposed method achieves strong liver and tumor segmentation performance with relatively low computational resource requirements.
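As a rough illustration of the multi-slice input idea above, each 2D slice can be stacked with its immediate neighbors along the channel axis so that a 2D network still sees local through-plane context. The sketch below is illustrative only, not the thesis's actual implementation; the function name and the `num_adjacent` parameter are invented for this example:

```python
import numpy as np

def make_25d_input(volume: np.ndarray, index: int, num_adjacent: int = 1) -> np.ndarray:
    """Stack a CT slice with its neighbors along the channel axis.

    volume: (depth, height, width) CT volume.
    Returns (height, width, 2 * num_adjacent + 1); indices past the
    volume boundary are clamped to the first/last slice.
    """
    depth = volume.shape[0]
    neighbors = [
        volume[min(max(index + offset, 0), depth - 1)]
        for offset in range(-num_adjacent, num_adjacent + 1)
    ]
    return np.stack(neighbors, axis=-1)
```

A 2D encoder such as EfficientNet can then consume these stacks as ordinary multi-channel images.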
Moreover, most related research focuses on improving only one aspect of the entire model-building process, such as the network architecture or the loss function. This thesis therefore takes the implementation details of the whole deep learning framework as its research subject, from the initial data preprocessing through the network architecture, loss function, and multi-model ensemble of segmentation results, to the final post-processing, seeking the combination of methods that further improves the proposed method's liver and tumor segmentation accuracy.
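Among the preprocessing steps mentioned above, CT windowing (clipping Hounsfield units to a diagnostically relevant range, then normalizing) is a common first step in liver segmentation pipelines. A minimal sketch follows, with window bounds chosen only for illustration rather than taken from the thesis:

```python
import numpy as np

def window_and_normalize(ct: np.ndarray, low: float = -200.0, high: float = 250.0) -> np.ndarray:
    """Clip Hounsfield units to [low, high] and rescale linearly to [0, 1]."""
    windowed = np.clip(ct, low, high)
    return (windowed - low) / (high - low)
```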
The research contributions of this thesis are as follows. First, it proposes a 2D-based deep learning framework for automatic liver and tumor segmentation on computed tomography images that addresses the computational resource constraints of related 3D methods; inputting multiple adjacent 2D slices, a two-stage cascaded model-building approach, using EfficientNet as the segmentation network's encoder, and a triplanar ensemble enable it to achieve strong segmentation performance with relatively low computational resource requirements. Second, whereas most related research focuses on improving only one aspect of the model-building process, this thesis studies the implementation details of the whole framework, from the initial data preprocessing to the final post-processing. With the best combination of methods found in the experiments, the proposed method further improves its segmentation performance, achieving a Dice per case of 0.9660 for liver segmentation and 0.7180 for tumor segmentation on the LiTS test set. These experimental results also form a set of improvement schemes for segmentation model performance that can serve as a reference for researchers engaged in related work.
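The Dice per case figures quoted above are per-patient Dice coefficients averaged over all test cases. A minimal sketch of that metric for binary masks (illustrative only, not the official LiTS evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for a pair of binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps))

def dice_per_case(preds, truths) -> float:
    """Average the per-patient Dice over all cases."""
    return float(np.mean([dice_coefficient(p, t) for p, t in zip(preds, truths)]))
```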
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-07-11T16:12:32Z. No. of bitstreams: 0 (en)
dc.description.provenance: Made available in DSpace on 2023-07-11T16:12:32Z (GMT). No. of bitstreams: 0 (en)
dc.description.tableofcontents:
口試委員審定書 (Oral Defense Committee Certification) i
誌謝 (Acknowledgements) ii
摘要 (Chinese Abstract) iv
Abstract vi
Contents ix
List of Figures xiii
List of Tables xvii
Chapter 1 Introduction 1
1.1 Background 1
1.2 Research Question Definition 2
1.3 Motivation and Objective 3
1.4 Research Contribution 5
1.5 Thesis Organization 6
Chapter 2 Literature Review 8
2.1 Overview 8
2.2 Data Preprocessing Windowing 8
2.3 Liver and Tumor Segmentation Methods 11
2.3.1 Statistical-Based and Traditional Machine Learning Methods 11
2.3.2 Deep Learning Methods 13
2.3.2.1 FCN 13
2.3.2.2 U-Net 14
2.3.2.3 2D, 3D and 2.5D U-Net 16
2.3.2.4 Cascaded U-Net 17
2.3.2.5 Transfer Learning 18
2.4 Loss Functions 19
2.5 Multi-Model Segmentation Results Ensemble 21
2.6 Summary 23
Chapter 3 Method 25
3.1 Overview 25
3.2 Data Preprocessing 26
3.3 2.5D U-Net Architecture Design 30
3.4 Loss Functions 32
3.5 Triplanar Ensemble 33
3.6 Post-Processing 34
Chapter 4 Experiments 40
4.1 Overview 40
4.2 Dataset 40
4.3 Implementation Details 42
4.3.1 Data Preprocessing 42
4.3.1.1 Reorientation 42
4.3.1.2 Volumetric Size Unification 43
4.3.1.3 Windowing and Normalization 47
4.3.1.4 Other Implementation Details in Data Preprocessing 48
4.3.2 2.5D U-Net Architecture Design 49
4.3.3 Loss Functions 56
4.3.4 Triplanar Ensemble 57
4.3.5 Post-Processing 59
4.3.6 Stabilize the Experimental Results 61
4.3.7 Training Strategy 62
4.3.8 Data Augmentation 62
4.3.9 Edge Enhancement of the Ground Truth 64
4.4 Experimental Settings 69
4.5 Evaluation Metrics 72
4.6 Programming Details 78
4.7 Subsequent Experiments and the Experiment Running Process 81
Chapter 5 Results and Discussion 84
5.1 Effectiveness of the Triplanar Ensemble 84
5.2 Whether to Implement the “E2H” Training Strategy 85
5.3 Comparison of Different Volumetric Size Unification Methods 86
5.4 Whether to Unfreeze (Train) the Model Parameters of the Encoder and Whether to Increase the Batch Size 88
5.5 Comparison of Different Probability Value Thresholds for Being the Foreground Class 90
5.6 Whether and How to Implement Binary Mathematical Morphological Operations on the Segmentation Results after the Triplanar Ensemble 92
5.7 Comparison of Different Loss Functions 97
5.8 Comparison of Different Decoder Architectures 100
5.9 Whether and How to Implement Data Augmentation 102
5.10 Whether and How to Implement Edge Enhancement on the Ground Truth 104
5.11 Additional Experiments 106
5.11.1 Whether to Increase the Batch Size - Use a Larger Batch Size 108
5.11.2 Whether to Use Other Versions of EfficientNet as the 2.5D U-Net Encoder 110
5.11.3 Improve the Overall Recall of the Model Through Loss Function Design 113
5.11.4 Whether to Use the “Cropped and Resized Patches” as the Input of the Second-Stage Model 118
5.11.5 Transverse Plane Model - Using Patient Data of Its Original Volumetric Size 127
5.11.6 Improvement of the Ensemble Weights Calculation of the Triplanar Model Segmentation Results 131
5.11.7 Whether to Exclude the Patient Data of the Specific Types in the Training of the Second-Stage Model 137
5.12 Generation of the Final Submission Data for the LiTS Competition 141
5.13 Comparison with Other Related Methods on the LiTS Test Set 146
5.14 Segmentation Result Visualization 153
5.15 Summary 157
Chapter 6 Conclusion 164
References 174
dc.language.iso: en
dc.subject: 醫療圖像分析 (zh_TW)
dc.subject: 深度學習 (zh_TW)
dc.subject: 2.5D U-Net (zh_TW)
dc.subject: 三平面集成 (zh_TW)
dc.subject: 電腦斷層 (zh_TW)
dc.subject: 肝臟及腫瘤分割 (zh_TW)
dc.subject: 2.5D U-Net (en)
dc.subject: Deep Learning (en)
dc.subject: Liver and Tumor Segmentation (en)
dc.subject: Medical Image Analysis (en)
dc.subject: Computed Tomography (en)
dc.subject: Triplanar Ensemble (en)
dc.title: 2.5D U-Net 級聯式深度學習框架應用於電腦斷層圖像中肝臟及腫瘤自動分割 (zh_TW)
dc.title: U-LTSF, A 2.5D U-Net Cascaded Deep Learning Framework for Automatic Liver and Tumor Segmentation on Computed Tomography Images (en)
dc.type: Thesis
dc.date.schoolyear: 110-2
dc.description.degree: 碩士
dc.contributor.oralexamcommittee: 孔令傑 (zh_TW)
dc.contributor.oralexamcommittee: Chun-Hsien Lu;Shun-Ping Chung;Chia-Yen Lee;Ling-Chieh Kung (en)
dc.subject.keyword: 電腦斷層,醫療圖像分析,肝臟及腫瘤分割,深度學習,2.5D U-Net,三平面集成 (zh_TW)
dc.subject.keyword: Computed Tomography,Medical Image Analysis,Liver and Tumor Segmentation,Deep Learning,2.5D U-Net,Triplanar Ensemble (en)
dc.relation.page: 184
dc.identifier.doi: 10.6342/NTU202203827
dc.rights.note: 未授權
dc.date.accepted: 2022-09-27
dc.contributor.author-college: 管理學院
dc.contributor.author-dept: 資訊管理學系
Appears in Collections: 資訊管理學系 (Department of Information Management)

Files in this item:
File: ntu-110-2.pdf | Size: 4.12 MB | Format: Adobe PDF (未授權公開取用 / restricted access)