Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98527

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 林永松 | zh_TW |
| dc.contributor.advisor | Yeong-Sung Lin | en |
| dc.contributor.author | 吳品萱 | zh_TW |
| dc.contributor.author | Pin-Hsuan Wu | en |
| dc.date.accessioned | 2025-08-14T16:27:44Z | - |
| dc.date.available | 2025-08-15 | - |
| dc.date.copyright | 2025-08-14 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-08-03 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98527 | - |
| dc.description.abstract | 本研究旨在針對多相位電腦斷層(CT)影像中標註不全(incomplete labeling)所帶來的實務挑戰,提出並驗證一套結合影像對齊與模型融合的腫瘤分割策略。多相位影像具備豐富的對比資訊,被視為提升腫瘤分割效能的重要方向;然而臨床上常僅對部分相位進行標註,使得此類資料無法直接應用於監督式訓練與有效融合。
為此,本研究以提升肝臟腫瘤切割準確度為目標,設計一系列跨相位融合實驗。首先,針對動脈相與靜脈相進行 Z 軸對齊與切片內配準,結合 Greedy Local 與 Bipartite Matching 等策略,搭配 SIFT 與 SURF 特徵進行配對,並引入肝臟區域限制與網格濾波以提升配準品質。透過此流程,我們將動脈相標註轉移至靜脈相,建構偽標註以進行弱監督訓練。
實驗結果顯示,動脈相模型在完整標註下可達 0.4762 之 DICE 分數;靜脈相模型雖受限於偽標註品質,表現較弱(DICE 約為 0.2),但透過決策層融合(decision-level fusion)結合兩相位模型後,整體效能提升至 0.5627,展現弱信號融合之潛力。此外,輸入層融合(input-level fusion)進一步驗證配準的實質效果:未配準的輸入導致顯著效能下降(DICE 0.1807),而配準後之雙相位輸入則可達 0.5453,顯示幾何對齊為跨相位資訊整合的重要前提。
綜合而言,本研究不僅證實多相位融合可在標註不全下提升腫瘤分割效能,也系統性分析不同配準與融合策略之成效,提供日後發展弱監督式多相位醫學影像模型之實證基礎與實作建議。 | zh_TW |
| dc.description.abstract | This study addresses a practical challenge in multi-phase computed tomography (CT) imaging: incomplete labeling across phases. Although multi-phase imaging provides rich temporal contrast information that is potentially beneficial for tumor segmentation, clinical datasets often include annotations for only a subset of phases. This limitation restricts the feasibility of supervised training and reduces the effectiveness of cross-phase integration.
To overcome this limitation, we designed a series of experiments aimed at improving liver tumor segmentation by integrating image registration and model fusion techniques. Specifically, we performed Z-axis alignment and in-plane registration between the arterial phase (C+A) and the venous phase (C+P), employing pairing strategies such as Greedy Local Matching and Bipartite Matching, along with SIFT and SURF features for keypoint matching. Additional constraints such as liver-region masking and grid-based filtering were applied to enhance registration quality. Based on the resulting transformations, we transferred annotations from the C+A phase to the C+P phase to generate pseudo-labels for weakly supervised training. (Illustrative code sketches of the registration and fusion steps follow the metadata table below.)
Experimental results showed that the C+A model trained with complete annotations achieved a DICE score of 0.4762. Although the C+P model trained on pseudo-labels performed poorly, with a DICE score of around 0.2, decision-level fusion of the C+A and C+P models improved segmentation performance to 0.5627. Input-level fusion experiments further demonstrated the importance of spatial alignment: without registration, fused inputs achieved a DICE score of only 0.1807, whereas alignment raised the score to 0.5453. These results confirm that while alignment alone does not resolve semantic differences between phases, it is essential for effective multi-phase integration.
In summary, this study demonstrates the feasibility of multi-phase fusion under incomplete labeling and provides a systematic evaluation of registration and fusion strategies. The proposed framework offers empirical insights and methodological guidance for advancing weakly supervised tumor segmentation in real-world clinical imaging scenarios. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-08-14T16:27:44Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-08-14T16:27:44Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Acknowledgements i
Abstract (Chinese) iii
Abstract v
Contents vii
List of Figures x
List of Tables xii
Chapter 1 Introduction 1
1.1 Background Overview 1
1.2 Motivation 3
1.3 Objective 4
1.4 Thesis Organization 5
Chapter 2 Related Works 6
2.1 Image Registration 6
2.1.1 Z-Axis Alignment 7
2.1.2 In-Plane (XY) Registration 9
2.1.2.1 Traditional Techniques for Image Registration 9
2.1.2.2 Feature-Based Registration: SIFT and SURF 11
2.2 Liver Tumor Segmentation 13
2.2.1 Machine Learning Approaches 13
2.2.2 Deep Learning Techniques 13
2.2.3 Multi-phase CT Image 14
2.3 Data Fusion 15
2.4 Summary 16
Chapter 3 Method 17
3.1 Dataset 17
3.2 Framework Overview 18
3.3 Data Preprocessing 20
3.4 Liver Segmentation Model 21
3.4.1 Model Architecture 21
3.5 Image Registration 22
3.5.1 Z-Axis Alignment 22
3.5.2 In-Plane (XY) Registration 27
3.5.3 Tumor Segmentation 32
3.6 Evaluation Metrics 34
3.7 Loss Function 35
3.7.1 Dice Loss (F-score Loss) 35
3.7.2 Binary Cross-Entropy Loss 35
Chapter 4 Experiments 37
4.1 Liver Segmentation 37
4.2 Image Registration 39
4.2.1 Z-Axis Alignment 39
4.2.2 In-Plane (XY) Registration 40
4.2.2.1 Quantitative Evaluation 40
4.2.2.2 Qualitative Analysis of Bipartite Matching 40
4.3 Tumor Segmentation 48
4.3.1 Single-Phase Tumor Segmentation 48
4.3.1.1 C+A Tumor Segmentation with Full Annotations 49
4.3.1.2 C+P Tumor Segmentation via Label Transfer: Geometric Alignment is Not Sufficient 50
4.3.1.3 C+P Tumor Segmentation with Domain-Adapted Training 51
4.3.2 Dual-Phase Decision-Level Fusion 51
4.3.3 Dual-Phase Input-Level Fusion 52
4.3.4 Comparison of Input-Level and Decision-Level Fusion 53
Chapter 5 Discussion 55
Chapter 6 Conclusion and Future Work 60
6.1 Conclusion 60
6.2 Future Work 62
References 64 | - |
| dc.language.iso | en | - |
| dc.subject | 腫瘤分割 | zh_TW |
| dc.subject | 影像配準 | zh_TW |
| dc.subject | 模型融合 | zh_TW |
| dc.subject | 多相位電腦斷層 | zh_TW |
| dc.subject | 弱監督學習 | zh_TW |
| dc.subject | Multi-phase Computed Tomography | en |
| dc.subject | Weakly Supervised Learning | en |
| dc.subject | Tumor Segmentation | en |
| dc.subject | Image Registration | en |
| dc.subject | Model Fusion | en |
| dc.title | 融合多相位 CT 影像特徵及增強對齊技術之肝臟與腫瘤分割 | zh_TW |
| dc.title | Liver and Tumor Segmentation through Fusion of Multi-phase CT Image Features and Enhanced Alignment Techniques | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-2 | - |
| dc.description.degree | Master's | - |
| dc.contributor.oralexamcommittee | 呂東武;楊瀅臻 | zh_TW |
| dc.contributor.oralexamcommittee | Tung-Wu Lu;Ying-Chen Yang | en |
| dc.subject.keyword | 多相位電腦斷層,腫瘤分割,弱監督學習,影像配準,模型融合 | zh_TW |
| dc.subject.keyword | Multi-phase Computed Tomography, Tumor Segmentation, Weakly Supervised Learning, Image Registration, Model Fusion | en |
| dc.relation.page | 72 | - |
| dc.identifier.doi | 10.6342/NTU202503102 | - |
| dc.rights.note | Not authorized | - |
| dc.date.accepted | 2025-08-06 | - |
| dc.contributor.author-college | College of Management | - |
| dc.contributor.author-dept | Department of Information Management | - |
| dc.date.embargo-lift | N/A | - |
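The abstracts above outline a feature-based registration pipeline: SIFT/SURF keypoints detected under a liver-region constraint, cross-phase matching, outlier filtering, and transfer of C+A annotations onto C+P slices as pseudo-labels. The following is a minimal illustrative sketch of that idea, assuming OpenCV and 8-bit windowed CT slices; the function name `transfer_label`, the Lowe-ratio threshold, and the use of brute-force matching with RANSAC in place of the thesis's Greedy Local / Bipartite Matching and grid-based filtering are simplifications for illustration, not the thesis's actual implementation.

```python
# Hedged sketch: warp a C+A tumor annotation onto a C+P slice via SIFT
# keypoints restricted to the liver region. Assumes uint8 grayscale slices
# and a uint8 liver mask that is nonzero inside the liver.
import cv2
import numpy as np

def transfer_label(ca_slice, cp_slice, ca_tumor_mask, liver_mask):
    """Return a pseudo-label for the C+P slice, or None if matching fails."""
    sift = cv2.SIFT_create()
    # Liver-region constraint: detect keypoints only inside the liver mask.
    kp1, des1 = sift.detectAndCompute(ca_slice, liver_mask)
    kp2, des2 = sift.detectAndCompute(cp_slice, liver_mask)
    if des1 is None or des2 is None:
        return None

    # Brute-force matching with Lowe's ratio test (a stand-in for the
    # thesis's Greedy Local / Bipartite Matching strategies).
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < 4:
        return None

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects outlier pairs, playing the role of grid-based filtering.
    M, _inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                              ransacReprojThreshold=3.0)
    if M is None:
        return None

    # Warp the C+A annotation into C+P space; nearest-neighbour
    # interpolation keeps the transferred mask binary.
    h, w = cp_slice.shape
    return cv2.warpAffine(ca_tumor_mask, M, (w, h), flags=cv2.INTER_NEAREST)
```

Where an OpenCV contrib build with non-free modules is available, `cv2.xfeatures2d.SURF_create()` could be substituted for SIFT, corresponding to the SURF variant evaluated in the thesis.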
| Appears in Collections: | Department of Information Management |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-113-2.pdf (restricted access) | 16.8 MB | Adobe PDF |
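The abstract reports that decision-level fusion lifted the DICE score from 0.4762 (C+A alone) and roughly 0.2 (C+P alone) to 0.5627. As this record does not specify the fusion rule, the sketch below illustrates one common choice, per-voxel probability averaging, together with the Dice coefficient used for evaluation; the weight `w_ca` and the 0.5 threshold are assumed parameters, not values from the thesis.

```python
# Hedged sketch: decision-level fusion of C+A and C+P model outputs by
# probability averaging (assumed rule), plus the Dice overlap metric.
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def fuse_decisions(prob_ca, prob_cp, w_ca=0.5, threshold=0.5):
    """Average per-voxel tumor probabilities from the two phase models."""
    fused = w_ca * prob_ca + (1.0 - w_ca) * prob_cp
    return fused > threshold

# Toy usage with random maps standing in for real model outputs.
rng = np.random.default_rng(0)
prob_ca, prob_cp = rng.random((64, 64)), rng.random((64, 64))
ground_truth = rng.random((64, 64)) > 0.7
print(dice_score(fuse_decisions(prob_ca, prob_cp), ground_truth))
```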
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
