Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94504

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 林永松 | zh_TW |
| dc.contributor.advisor | Frank Yeong-Sung Lin | en |
| dc.contributor.author | 吳紅沛 | zh_TW |
| dc.contributor.author | Hung-Pei Wu | en |
| dc.date.accessioned | 2024-08-16T16:25:00Z | - |
| dc.date.available | 2024-08-17 | - |
| dc.date.copyright | 2024-08-16 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-08-08 | - |
| dc.identifier.citation | S. R. Abdel-Misih and M. Bloomston, “Liver anatomy,” The Surgical Clinics of North America, vol. 90, p. 643, Aug. 2010.<br>J. D. L. Araújo, L. B. da Cruz, J. O. B. Diniz, J. L. Ferreira, A. C. Silva, A. C. de Paiva, and M. Gattass, “Liver segmentation from computed tomography images using cascade deep learning,” Computers in Biology and Medicine, vol. 140, p. 105095, Jan. 2022.<br>F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal, “Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA: A Cancer Journal for Clinicians, vol. 68, pp. 394–424, Sept. 2018.<br>H. Rumgay, M. Arnold, J. Ferlay, O. Lesi, C. J. Cabasag, J. Vignat, M. Laversanne, K. A. McGlynn, and I. Soerjomataram, “Global burden of primary liver cancer in 2020 and predictions to 2040,” Journal of Hepatology, vol. 77, pp. 1598–1606, Dec. 2022.<br>O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Proceedings of Medical Image Computing and Computer-Assisted Intervention, pp. 234–241, Springer, Oct. 2015.<br>Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang, “UNet++: A nested U-Net architecture for medical image segmentation,” in Proceedings of Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 3–11, Springer, Sept. 2018.<br>Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-Net: Learning dense volumetric segmentation from sparse annotation,” in Proceedings of Medical Image Computing and Computer-Assisted Intervention, pp. 424–432, Springer, Oct. 2016.<br>F. Milletari, N. Navab, and S.-A. Ahmadi, “V-Net: Fully convolutional neural networks for volumetric medical image segmentation,” in 2016 Fourth International Conference on 3D Vision (3DV), pp. 565–571, Oct. 2016.<br>S. Kumar, R. Moni, and J. Rajeesh, “Automatic liver and lesion segmentation: a primary step in diagnosis of liver diseases,” Signal, Image and Video Processing, vol. 7, pp. 163–172, Jan. 2013.<br>S. Basar, A. Adnan, N. H. Khan, and S. Haider, “Color image segmentation using k-means classification on RGB histogram,” Recent Advances in Telecommunications, Informatics and Educational Technologies, pp. 257–262, Sept. 2014.<br>M. Maška, O. Daněk, S. Garasa, A. Rouzaut, A. Munoz-Barrutia, and C. Ortiz-de-Solorzano, “Segmentation and shape tracking of whole fluorescent cells based on the Chan–Vese model,” IEEE Transactions on Medical Imaging, vol. 32, pp. 995–1006, Jan. 2013.<br>D. Gupta and R. S. Anand, “A hybrid edge-based segmentation approach for ultrasound medical images,” Biomedical Signal Processing and Control, vol. 31, pp. 116–126, Jan. 2017.<br>N. Senthilkumaran and S. Vaithegi, “Image segmentation by using thresholding techniques for medical images,” Computer Science & Engineering: An International Journal, vol. 6, pp. 1–13, Feb. 2016.<br>D. Ciresan, A. Giusti, L. Gambardella, and J. Schmidhuber, “Deep neural networks segment neuronal membranes in electron microscopy images,” vol. 2, pp. 2843–2851, Dec. 2012.<br>J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440, Jun. 2015.<br>K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.<br>Z. Zhang, Q. Liu, and Y. Wang, “Road extraction by deep residual U-Net,” IEEE Geoscience and Remote Sensing Letters, vol. 15, pp. 749–753, Mar. 2018.<br>Y. Liu, N. Qi, Q. Zhu, and W. Li, “CR-U-Net: Cascaded U-Net with residual mapping for liver segmentation in CT images,” in 2019 IEEE Visual Communications and Image Processing (VCIP), pp. 1–4, Jan. 2019.<br>M. Baldeon-Calisto and S. K. Lai-Yuen, “AdaResU-Net: Multiobjective adaptive convolutional neural network for medical image segmentation,” Neurocomputing, vol. 392, pp. 325–340, Jun. 2020.<br>X. Han, “Automatic liver lesion segmentation using a deep convolutional neural network method,” arXiv, vol. abs/1704.07239, Apr. 2017.<br>S. J. C. Soerensen, R. E. Fan, A. Seetharaman, L. Chen, W. Shao, I. Bhattacharya, Y.-H. Kim, R. Sood, M. Borre, B. I. Chung, K. J. To’o, M. Rusu, and G. A. Sonn, “Deep learning improves speed and accuracy of prostate gland segmentations on magnetic resonance imaging for targeted biopsy,” The Journal of Urology, vol. 206, pp. 604–612, Sept. 2021.<br>Y. Ou, Y. Yuan, X. Huang, K. Wong, J. Volpi, J. Z. Wang, and S. T. Wong, “LambdaUNet: 2.5D stroke lesion segmentation of diffusion-weighted MR images,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2021, pp. 731–741, Springer, Oct. 2021.<br>S. Motamed, P. Rogalla, and F. Khalvati, “Data augmentation using generative adversarial networks (GANs) for GAN-based detection of pneumonia and COVID-19 in chest X-ray images,” Informatics in Medicine Unlocked, vol. 27, p. 100779, Jan. 2021.<br>D. Mahapatra, B. Bozorgtabar, and R. Garnavi, “Image super-resolution using progressive generative adversarial networks for medical image analysis,” Computerized Medical Imaging and Graphics, vol. 71, pp. 30–39, Jan. 2019.<br>“Liver tumor segmentation challenge.” https://competitions.codalab.org/competitions/17094, 2017. Accessed: 2024-08-06. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94504 | - |
| dc.description.abstract | 肝癌是全球第二致命的癌症,治療方式包括手術切除和血管灌流治療等。肝臟檢測通常依賴電腦斷層掃描(CT)影像,但手動標記影像耗時且受臨床醫師經驗影響。另外,臨床上許多重要的治療方法都需要精準地計算肝葉體積。因此,本研究首先使用兩個二維模型:ResUNet 以及 ResNet50 + FCN16 進行自動化左右肝分割以及體積計算,以提高醫療影像分割效率、減輕臨床醫師負擔。<br>此外,三維分割模型相較二維模型表現較佳,但耗費大量運算資源。本研究提出三切面 ResUNet 融合架構來達到自動化肝臟腫瘤分割。該架構藉由融合三切面的二維模型預測結果,克服過去二維模型在單一切面分割病灶的劣勢。但因為醫療影像切片之間的距離通常比切片內的解析度來得大,冠狀面以及矢狀面的影像解析度會因此較低。為了解決三維影像資料在 z 軸解析度不足的問題,本研究提出一資料擴增做法,採用生成式人工智慧(GAI)策略,利用神經網路模型從相鄰切片生成合成切片,從而提高矢狀面和冠狀面的影像解析度。與其他基於幾何轉換的擴增做法不同,該擴增做法除了有效增加訓練樣本切片,也提高單一醫療立體影像內的資訊量。最後在全肝體積估算上,使用 ResUNet 達到了 0.6% 的差異比;使用 ResNet50 + FCN16 達到了 2.7% 的差異比。另外,透過所提出的資料擴增做法,顯著提高冠狀面以及矢狀面的分割模型成效,也有助於提高三切面 ResUNet 融合架構的分割表現,在肝腫瘤分割任務上取得了 74.4% 的 Dice 分數。除此之外,我們進一步展示了提出的融合架構在判斷邊緣腫瘤像素的精確性上有所提升。 | zh_TW |
| dc.description.abstract | Liver cancer is the second leading cause of cancer-related mortality worldwide. Standard treatments, including surgical resection and intra-arterial infusion, depend heavily on diagnostic imaging, particularly computed tomography (CT) scans. However, manual annotation of CT images is time-consuming and highly dependent on the clinician’s experience. Furthermore, many critical clinical treatments require precise volumetric calculation of the liver lobes. This study therefore first applies two two-dimensional models, ResUNet and ResNet50 combined with FCN16, to the automated segmentation and volumetric calculation of the left and right liver lobes, improving labeling efficiency and easing the burden on clinicians.<br>While three-dimensional segmentation models generally outperform their two-dimensional counterparts, they demand far greater computational resources. To address this, we propose a Tri-plane ResUNet Model Fusion architecture for automated liver tumor segmentation. By fusing the predictions of two-dimensional models trained on the axial, coronal, and sagittal planes, the architecture overcomes the weakness of earlier two-dimensional models that segmented lesions on a single plane. A further challenge in medical imaging is that the inter-slice spacing is usually coarser than the in-plane resolution, degrading the clarity of sagittal and coronal images. To mitigate this, we introduce a generative artificial intelligence (GAI) strategy that uses neural network models to generate synthetic slices from adjacent ones, thereby enhancing resolution and quality in the sagittal and coronal planes. Unlike traditional augmentation based on geometric transformations, this approach both enlarges the training dataset with realistic synthetic slices and enriches the information content within a single volumetric image. For overall liver volume estimation, ResUNet achieved a difference ratio of 0.6%, and ResNet50 combined with FCN16 achieved 2.7%. The proposed augmentation technique significantly improves the coronal- and sagittal-plane segmentation models and, in turn, the overall effectiveness of the Tri-plane ResUNet Model Fusion architecture, which achieves a Dice score of 74.4% on liver tumor segmentation. Additionally, we demonstrate that the fusion architecture notably improves the accuracy of detecting tumor boundary pixels, highlighting its potential for clinical applications. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-08-16T16:24:58Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-08-16T16:25:00Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 誌謝<br>摘要<br>Abstract<br>Contents<br>List of Figures<br>List of Tables<br>Chapter 1 Introduction<br>1.1 Background<br>1.2 Motivation<br>1.3 Objective<br>1.4 Contribution<br>Chapter 2 Literature Review<br>2.1 Medical Image Segmentation<br>2.1.1 Region-based Segmentation<br>2.1.2 Edge-detection-based Segmentation<br>2.1.3 Deep-learning-based Segmentation<br>2.2 Deep Learning Segmentation Methods<br>2.2.1 CNNs<br>2.2.2 FCNs<br>2.2.3 U-Nets<br>2.2.3.1 2D U-Nets<br>2.2.3.2 3D U-Nets<br>2.2.3.3 2.5D U-Nets<br>2.3 Data Augmentation<br>2.4 Summary<br>Chapter 3 Proposed Methods<br>3.1 Data Pre-processing<br>3.1.1 CT Coordination<br>3.1.2 Windowing<br>3.1.3 Data Augmentation<br>3.1.3.1 Geometric Transformations and Contrast Adjustments<br>3.1.3.2 Inter-slice Data Augmentation<br>3.2 Liver Lobe Segmentation and Volume Calculation Method<br>3.2.1 Training Process<br>3.2.2 Volume Calculation<br>3.3 Tumor Segmentation Method<br>3.3.1 Training Process<br>3.3.2 Tri-plane ResUNet Model Fusion<br>3.3.3 Fusion Methods<br>3.3.3.1 Method 1: Sequential fusion<br>3.3.3.2 Method 2: Mixed sequential-parallel fusion<br>3.3.3.3 Method 3: Axial-dominant fusion<br>3.3.3.4 Method 4: Tumor Size Filtering fusion<br>3.4 Evaluation Metrics<br>3.4.1 Precision and Recall<br>3.4.2 Dice<br>3.4.3 Intersection over Union<br>3.4.4 MAE and MSE<br>3.5 Loss Function<br>3.5.1 F-score Loss<br>Chapter 4 Experiments and Results<br>4.1 Datasets<br>4.1.1 Liver Tumor Segmentation Dataset<br>4.1.2 National Taiwan University Hospital Dataset<br>4.2 Implementation Details<br>4.2.1 Data Preprocessing<br>4.2.2 Left and Right Liver Segmentation and Volume Calculation<br>4.2.3 Inter-slice Data Augmentation<br>4.2.4 Tumor Segmentation<br>4.3 Results<br>Chapter 5 Discussion<br>Chapter 6 Conclusion<br>References | - |
| dc.language.iso | en | - |
| dc.subject | 影像分割 | zh_TW |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | 資料擴增 | zh_TW |
| dc.subject | 肝臟腫瘤分割 | zh_TW |
| dc.subject | 肝臟分割 | zh_TW |
| dc.subject | Liver segmentation | en |
| dc.subject | Liver tumor segmentation | en |
| dc.subject | Data augmentation | en |
| dc.subject | Image segmentation | en |
| dc.subject | Deep learning | en |
| dc.title | 基於三切面ResUNet融合架構與生成式人工智慧的肝臟腫瘤自動分割 | zh_TW |
| dc.title | Automated Segmentation of Liver Tumor Using Tri-Plane ResUNet Model Fusion and Generative Artificial Intelligence | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | Master | - |
| dc.contributor.oralexamcommittee | 呂東武;楊瀅臻;蕭邱漢 | zh_TW |
| dc.contributor.oralexamcommittee | Tung-Wu Lu;Ying-Chen Yang;Chiu-Han Hsiao | en |
| dc.subject.keyword | 深度學習,影像分割,肝臟分割,肝臟腫瘤分割,資料擴增 | zh_TW |
| dc.subject.keyword | Deep learning, Image segmentation, Liver segmentation, Liver tumor segmentation, Data augmentation | en |
| dc.relation.page | 63 | - |
| dc.identifier.doi | 10.6342/NTU202403965 | - |
| dc.rights.note | Authorized (on-campus access only) | - |
| dc.date.accepted | 2024-08-12 | - |
| dc.contributor.author-college | College of Management | - |
| dc.contributor.author-dept | Department of Information Management | - |
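The abstract reports liver-volume difference ratios of 0.6% (ResUNet) and 2.7% (ResNet50 + FCN16). A volume estimate behind such a ratio can be derived directly from a binary segmentation mask and the CT voxel spacing; the sketch below is a minimal illustration under that assumption (the function names are ours, not from the thesis):

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask in millilitres.

    spacing_mm is the (dz, dy, dx) voxel spacing in millimetres; each
    foreground voxel occupies dz*dy*dx mm^3, and 1 mL = 1000 mm^3.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

def difference_ratio(predicted_ml, reference_ml):
    """Absolute volume difference relative to the reference volume."""
    return abs(predicted_ml - reference_ml) / reference_ml
```

For example, a predicted volume of 99.4 mL against a 100 mL reference yields the 0.6% difference ratio quoted for ResUNet.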
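The inter-slice augmentation described in the abstract uses a neural network to generate synthetic slices between adjacent real ones, raising the effective z-axis resolution that coronal and sagittal views depend on. As a stand-in for that learned generative model, the sketch below doubles z-resolution by plain linear interpolation; it shares only the goal and interface, not the generative quality:

```python
import numpy as np

def insert_interpolated_slices(volume):
    """Insert one synthetic slice between every adjacent pair of real
    slices along the z axis, giving 2*z - 1 slices in total.

    The linear average here is only a simple baseline for the neural
    slice-generation model described in the abstract.
    """
    z = volume.shape[0]
    out = np.empty((2 * z - 1,) + volume.shape[1:], dtype=volume.dtype)
    out[0::2] = volume  # real slices keep their positions
    # synthetic slices: mean of each adjacent pair, computed in float
    mid = (volume[:-1].astype(np.float64) + volume[1:].astype(np.float64)) / 2.0
    out[1::2] = mid.astype(volume.dtype)
    return out
```

In the thesis's setting the new slices both enlarge the training set and densify each volume before the coronal/sagittal models are trained.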
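The Tri-plane ResUNet Model Fusion combines predictions from the axial, coronal, and sagittal planes; per the table of contents, the thesis evaluates four specific fusion methods (sequential, mixed sequential-parallel, axial-dominant, and tumor-size filtering). The sketch below illustrates only the general principle with a simple majority vote over three binary masks, together with the Dice score used for evaluation; it is not one of the thesis's fusion methods:

```python
import numpy as np

def fuse_triplane(axial, coronal, sagittal):
    """Majority-vote fusion of three binary masks of identical shape.

    A voxel is foreground when at least two of the three plane-wise
    predictions mark it as foreground.
    """
    votes = (axial.astype(np.int32) + coronal.astype(np.int32)
             + sagittal.astype(np.int32))
    return (votes >= 2).astype(np.uint8)

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

The per-plane masks would come from resampling each 2D model's stacked slice predictions back into a common volume grid before voting.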
| Appears in Collections: | Department of Information Management |
Files in This Item:

| File | Size | Format |
|---|---|---|
| ntu-112-2.pdf (access restricted to NTU campus IPs; use the VPN service from off campus) | 20.54 MB | Adobe PDF |

All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
