  1. NTU Theses and Dissertations Repository
  2. College of Medicine
  3. Institute of Medical Device and Imaging
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96688
Full metadata record
DC Field | Value | Language
dc.contributor.advisor蕭輔仁zh_TW
dc.contributor.advisorFuren Xiaoen
dc.contributor.author連薇瑛zh_TW
dc.contributor.authorWei-Ying Lienen
dc.date.accessioned2025-02-20T16:32:31Z-
dc.date.available2025-02-21-
dc.date.copyright2025-02-20-
dc.date.issued2025-
dc.date.submitted2025-01-23-
dc.identifier.citation[1]K. Park, “The mixed era of stereotactic radiosurgery and radiotherapy,” J. Korean Soc. Stereotact. Funct. Neurosurg., vol. 17, no. 1, pp. 6–13, Jun. 2021.
[2]J. P. Sheehan, C.-P. Yen, C.-C. Lee, and J. S. Loeffler, “Cranial stereotactic radiosurgery: current status of the initial paradigm shifter,” J. Clin. Oncol. Off. J. Am. Soc. Clin. Oncol., vol. 32, no. 26, pp. 2836–2846, Sep. 2014.
[3]D. Kondziolka, S. M. Shin, A. Brunswick, I. Kim, and J. S. Silverman, “The biology of radiosurgery and its clinical applications for brain tumors,” Neuro-Oncol., vol. 17, no. 1, pp. 29–44, Jan. 2015.
[4]M. K. Semwal, “Khan’s The Physics of Radiation Therapy,” J. Med. Phys., vol. 45, no. 2, pp. 134–135, 2020.
[5]A. Rojas-Villabona, K. Miszkiel, N. Kitchen, R. Jäger, and I. Paddick, “Evaluation of the stability of the stereotactic Leksell Frame G in Gamma Knife radiosurgery,” J. Appl. Clin. Med. Phys., vol. 17, no. 3, pp. 75–89, May 2016.
[6]B. Sun, J. Chang, and Y. Rong, “The more IGRT systems, the merrier?,” J. Appl. Clin. Med. Phys., vol. 18, no. 4, pp. 7–11, Jun. 2017.
[7]J. Gao et al., “Stereotactic Body Radiotherapy Boost with the CyberKnife for Locally Advanced Cervical Cancer: Dosimetric Analysis and Potential Clinical Benefits,” Cancers, vol. 14, no. 20, p. 5166, Oct. 2022.
[8]Y. Niu et al., “Comparative analysis of plan quality and delivery efficiency: ZAP-X vs. CyberKnife for brain metastases treatment,” Front. Oncol., vol. 14, p. 1333642, Jun. 2024.
[9]Y. Cheng et al., “Is the CyberKnife© radiosurgery system effective and safe for patients? An umbrella review of the evidence,” Future Oncol. Lond. Engl., vol. 18, no. 14, pp. 1777–1791, May 2022.
[10]V. Kearney, J. P. Cheung, C. McGuinness, and T. D. Solberg, “CyberArc: a non-coplanar-arc optimization algorithm for CyberKnife,” Phys. Med. Biol., vol. 62, no. 14, pp. 5777–5789, Jun. 2017.
[11]J. A. Charters, P. Bertram, and J. M. Lamb, “Offline generator for digitally reconstructed radiographs of a commercial stereoscopic radiotherapy image-guidance system,” J. Appl. Clin. Med. Phys., vol. 23, no. 3, p. e13492, Mar. 2022.
[12]H. Wu, Y. Zhang, Q. Zhao, and B. Lord, “SU‐FF‐I‐122: Assessment of Lung Tumors Treatment Accuracy Using CyberKnife Synchrony Model,” Med. Phys., vol. 36, no. 6Part4, pp. 2463–2463, Jun. 2009.
[13]“Effect of density heterogeneity on absorbed dose with CyberKnife Synchrony Respiratory Tracking System | Semantic Scholar.” Accessed: Jan. 04, 2025.
[14]L. B. E. Shields, C. Bond, A. Odom, D. A. Sun, and A. C. Spalding, “Heterogeneity correction for intensity-modulated frameless SRS in pituitary and cavernous sinus tumors: a retrospective study,” Radiat. Oncol. Lond. Engl., vol. 10, p. 193, Sep. 2015.
[15]S. Dayawansa, D. Schlesinger, G. Mantziaris, C. Dumot, J. H. Donahue, and J. P. Sheehan, “Incorporation of Brain Connectomics for Stereotactic Radiosurgery Treatment Planning,” Oper. Neurosurg. Hagerstown Md, vol. 25, no. 4, pp. e211–e215, Oct. 2023.
[16]R. Culcasi, G. Baran, M. Dominello, and J. Burmeister, “Stereotactic radiosurgery commissioning and QA test cases-A TG-119 approach for Stereotactic radiosurgery,” Med. Phys., vol. 48, no. 12, pp. 7568–7579, Dec. 2021.
[17]G. Prentou et al., “Dosimetric impact of rotational errors on the quality of VMAT‐SRS for multiple brain metastases: Comparison between single‐ and two‐isocenter treatment planning techniques,” J. Appl. Clin. Med. Phys., vol. 21, no. 3, pp. 32–44, Feb. 2020.
[18]E. Ippolito et al., “Radiotherapy for HER 2 Positive Brain Metastases: Urgent Need for a Paradigm Shift,” Cancers, vol. 14, no. 6, Art. no. 6, Jan. 2022.
[19]K. C. Cuneo et al., “Stereotactic radiotherapy for malignancies involving the trigeminal and facial nerves,” Technol. Cancer Res. Treat., vol. 11, no. 3, pp. 221–228, Jun. 2012.
[20]F. Isensee, P. F. Jaeger, S. A. A. Kohl, J. Petersen, and K. H. Maier-Hein, “nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation,” Nat. Methods, vol. 18, no. 2, pp. 203–211, Feb. 2021.
[21]V. S. Fonov, P. Coupé, S. Eskildsen, J. Manjón, and D. L. Collins, “Multi-atlas labeling with population-specific template and non-local patch-based label fusion,” Oct. 2012.
[22]H. Lee et al., “Clinical Evaluation of Commercial Atlas-Based Auto-Segmentation in the Head and Neck Region,” Front. Oncol., vol. 9, p. 239, 2019.
[23]J. Bertels, D. Robben, R. Lemmens, and D. Vandermeulen, “Convolutional neural networks for medical image segmentation,” Nov. 17, 2022, arXiv: arXiv:2211.09562.
[24]D. Ciresan, A. Giusti, L. Gambardella, and J. Schmidhuber, “Deep Neural Networks Segment Neuronal Membranes in Electron Microscopy Images,” in Advances in Neural Information Processing Systems, Curran Associates, Inc., 2012. Accessed: Jan. 09, 2025.
[25]J. Long, E. Shelhamer, and T. Darrell, “Fully Convolutional Networks for Semantic Segmentation,” Mar. 08, 2015, arXiv: arXiv:1411.4038.
[26]O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” May 18, 2015, arXiv: arXiv:1505.04597.
[27]N. Siddique, P. Sidike, C. Elkin, and V. Devabhaktuni, “U-Net and its variants for medical image segmentation: theory and applications,” IEEE Access, vol. 9, pp. 82031–82057, 2021.
[28]F. Isensee, P. Kickingereder, W. Wick, M. Bendszus, and K. H. Maier-Hein, “Brain Tumor Segmentation and Radiomics Survival Prediction: Contribution to the BRATS 2017 Challenge,” Feb. 28, 2018, arXiv: arXiv:1802.10508.
[29]Z. Zhou, M. M. R. Siddiquee, N. Tajbakhsh, and J. Liang, “UNet++: A Nested U-Net Architecture for Medical Image Segmentation,” Jul. 18, 2018, arXiv: arXiv:1807.10165.
[30]F. Isensee et al., “nnU-Net: Self-adapting Framework for U-Net-Based Medical Image Segmentation,” Sep. 27, 2018, arXiv: arXiv:1809.10486.
[31]G. Podobnik, P. Strojan, P. Peterlin, B. Ibragimov, and T. Vrtovec, “HaN-Seg: The head and neck organ-at-risk CT and MR segmentation dataset,” Med. Phys., vol. 50, no. 3, pp. 1917–1927, 2023.
[32]S. Quetin, A. Heschl, M. Murillo, R. Murali, S. A. Enger, and F. Maleki, “Automatic segmentation of Organs at Risk in Head and Neck cancer patients from CT and MRI scans,” May 23, 2024, arXiv: arXiv:2405.10833.
[33]L. J. Isaksson et al., “Automatic Segmentation with Deep Learning in Radiotherapy,” Cancers, vol. 15, no. 17, p. 4389, Sep. 2023.
[34]J. Jiang et al., “Self-derived organ attention for unpaired CT-MRI deep domain adaptation based MRI segmentation,” Phys. Med. Biol., vol. 65, no. 20, p. 205001, Oct. 2020.
[35]K. Li, L. Yu, S. Wang, and P.-A. Heng, “Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation,” Oct. 04, 2020, arXiv: arXiv:2010.
[36]Y. Liu et al., “Head and neck multi-organ auto-segmentation on CT images aided by synthetic MRI,” Med. Phys., vol. 47, no. 9, pp. 4294–4302, Sep. 2020.
[37]G. Podobnik, P. Strojan, P. Peterlin, B. Ibragimov, and T. Vrtovec, “Multimodal CT and MR Segmentation of Head and Neck Organs-at-Risk,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2023, H. Greenspan, A. Madabhushi, P. Mousavi, S. Salcudean, J. Duncan, T. Syeda-Mahmood, and R. Taylor, Eds., Cham: Springer Nature Switzerland, 2023, pp. 745–755.
[38]M. Meng, M. Fulham, D. Feng, L. Bi, and J. Kim, “AutoFuse: Automatic Fusion Networks for Deformable Medical Image Registration,” Sep. 11, 2023, arXiv: arXiv:2309.05271.
[39]S. Agarwal, S. O.P, and D. Nagaria, “Implementation of Image Registration Techniques and its Applications in Medical Image Analysis,” Int. J. Eng. Technol., vol. 9, no. 2, pp. 759–765, Apr. 2017.
[40]N. J. Tustison et al., “The ANTsX ecosystem for quantitative biological and medical imaging,” Oct. 2020.
[41]J. E. Iglesias, “A ready-to-use machine learning tool for symmetric multi-modality registration of brain MRI,” Sci. Rep., vol. 13, p. 6657, Apr. 2023.
[42]C. P. Lee et al., “Evaluation of Five Image Registration Tools for Abdominal CT: Pitfalls and Opportunities with Soft Anatomy,” Proc. SPIE-- Int. Soc. Opt. Eng., vol. 9413, p. 94131N, Mar. 2015.
[43]S. Baba and T. Kamiya, “An Image Registration Technique for Brain MR Images Using Linear Transform by 3DCNN,” 2023 23rd Int. Conf. Control Autom. Syst. ICCAS, pp. 1762–1765, Oct. 2023.
[44]H. S. Prajapati, K. Merchant-Borna, J. J. Bazarian, C. A. Linte, and N. D. Cahill, “Transitive Inverse Consistent Rigid Longitudinal Registration of Diffusion Weighted Magnetic Resonance Imaging: A Case Study in Athletes With Repetitive Non-Concussive Head Injuries,” Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. IEEE Eng. Med. Biol. Soc. Annu. Int. Conf., vol. 2021, pp. 3906–3911, Nov. 2021.
[45]T. Vrtovec, D. Močnik, P. Strojan, F. Pernuš, and B. Ibragimov, “Auto-segmentation of organs at risk for head and neck radiotherapy planning: From atlas-based to deep learning methods,” Med. Phys., vol. 47, no. 9, pp. e929–e950, Sep. 2020.
[46]K. Mackay, D. Bernstein, B. Glocker, K. Kamnitsas, and A. Taylor, “A Review of the Metrics Used to Assess Auto-Contouring Systems in Radiotherapy,” Clin. Oncol., vol. 35, no. 6, pp. 354–369, Jun. 2023.
[47]“Metrics reloaded: recommendations for image analysis validation | Nature Methods.” Accessed: Dec. 08, 2024.
[48]S. Margaj, “Image Pre-Processing of USG Images by Adjusting Varying Luminance Trend and Applying Morphological Operations on Segmented Image using Thresholding,” Int. J. Sci. Res. IJSR, vol. 12, no. 2, pp. 1080–1084, Feb. 2023.
[49]F. Milletari, N. Navab, and S.-A. Ahmadi, “V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation,” Jun. 15, 2016, arXiv: arXiv:1606.04797.
[50]“Organs at risk in the brain and their dose-constraints in adults and in children: a radiation oncologist’s guide for delineation in everyday practice - PubMed.” Accessed: Jan. 13, 2025.
[51]F. C. F. Restini et al., “Vestibulocochlear Delineation for Vestibular Schwannoma Treated With Radiation Therapy,” Adv. Radiat. Oncol., vol. 8, no. 4, p. 101171, Jan. 2023.
-
dc.identifier.urihttp://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96688-
dc.description.abstract本研究評估具自動決定超參數的深度學習框架 nnU-Net 在顱內放射手術中對危及器官自動分割的表現。基於先前腫瘤分割的研究成果,我們將研究重點擴展至危及器官的分割,以彌補現有文獻中發現的不足之處。根據最新研究進展,我們利用多模態影像開發了用於危及器官自動分割的模型。本研究使用台大醫院電腦刀中心的大規模數據集進行實驗,著重於六個重要器官:腦幹、雙側眼球、視交叉和雙側視神經。實驗方法包含三個部分:評估不同影像模態的分割準確度、探討聯合危及器官和靶體積分割的可行性,以及分析危及器官和腫瘤之間的距離關係。研究結果顯示,整合多模態的三維低解析度模型達到最佳表現。然而,視交叉因其體積小,在自動及手動描繪上都面臨重大挑戰,因此呈現相對較低的分割準確度。儘管如此,我們的危及器官模型在腦幹分割上展現了與專家手動描繪相當的準確度,並成功驗證了聯合的危及器官-目標體積分割模型的可行性,為臨床自動化應用提供了新的方向。zh_TW
dc.description.abstractThis study evaluates the performance of nnU-Net, a deep learning framework that automatically configures its hyperparameters, in the automatic segmentation of organs at risk (OARs) for intracranial radiosurgery. Building on previous research on tumor segmentation, we expanded our focus to OAR segmentation to address gaps identified in the existing literature. Drawing on recent advances, we developed an automatic OAR segmentation model using multimodal imaging. The study uses a large-scale dataset from the CyberKnife Center at National Taiwan University Hospital, focusing on six critical organs: the brainstem, bilateral eyes, optic chiasm, and bilateral optic nerves. Our experimental methodology comprises three components: evaluating segmentation accuracy across imaging modalities, examining the feasibility of joint OAR and target volume (TV) segmentation, and analyzing the spatial distances between OARs and tumors. The results show that the 3D low-resolution model with multimodal integration achieves the best performance. However, the optic chiasm exhibits relatively lower segmentation accuracy, as its small volume poses significant challenges for both automatic and manual delineation. Nevertheless, our OAR model achieves accuracy comparable to expert manual delineation for brainstem segmentation and validates the feasibility of joint OAR-TV segmentation, offering new directions for clinical automation.en
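The segmentation-accuracy evaluation described in the abstract presumably relies on a voxel-overlap metric such as the Dice similarity coefficient, the standard measure for comparing an automatic OAR contour against an expert delineation. As a minimal illustrative sketch (the function name and toy masks below are assumptions for demonstration, not taken from the thesis):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement by convention
    return 2.0 * intersection / denom

# Toy 3D masks standing in for an OAR contour pair.
truth = np.zeros((10, 10, 10), dtype=np.uint8)
truth[2:6, 2:6, 2:6] = 1   # "expert" delineation, 64 voxels
pred = np.zeros_like(truth)
pred[3:7, 2:6, 2:6] = 1    # "automatic" contour shifted by one voxel
print(round(dice_coefficient(pred, truth), 3))  # → 0.75
```

A one-voxel shift of a small structure already drops the Dice score noticeably, which illustrates why tiny organs such as the optic chiasm score lower than the brainstem even for reasonable contours.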
dc.description.provenanceSubmitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-02-20T16:32:31Z
No. of bitstreams: 0
en
dc.description.provenanceMade available in DSpace on 2025-02-20T16:32:31Z (GMT). No. of bitstreams: 0en
dc.description.tableofcontentsAcknowledgements I
Chinese Abstract II
ABSTRACT III
CONTENTS V
LIST OF FIGURES VII
LIST OF TABLES IX
CHAPTER 1 INTRODUCTION 1
1.1 Stereotactic radiosurgery (SRS) 1
1.2 CyberKnife robotic radiosurgery 2
1.3 Importance of OAR segmentation in SRS 4
1.4 Research objectives 6
CHAPTER 2 RELATED WORK 8
2.1 Progress of image segmentation 8
2.2 Challenges in automatic segmentation of OARs 10
CHAPTER 3 METHODOLOGY 13
3.1 Dataset 13
3.2 Preprocessing 17
3.3 nnU-Net 18
3.4 Performance metrics 21
3.5 Post-processing 22
CHAPTER 4 RESULTS 25
4.1 Experiment 1: OAR segmentation across modalities 25
4.2 Experiment 2: joint OAR-TV segmentation 33
4.3 Experiment 3: OAR-TV distance analysis 40
CHAPTER 5 DISCUSSION 50
CHAPTER 6 LIMITATIONS 55
CHAPTER 7 CONCLUSION 62
REFERENCES 63
APPENDIX 68
-
dc.language.isoen-
dc.subject電腦刀zh_TW
dc.subject危及器官zh_TW
dc.subject放射手術zh_TW
dc.subject自動分割zh_TW
dc.subject深度學習zh_TW
dc.subjectAutomatic Segmentationen
dc.subjectCyberKnifeen
dc.subjectRadiosurgeryen
dc.subjectOrgans at Risken
dc.subjectDeep Learningen
dc.title顱內放射手術之危及器官分割zh_TW
dc.titleSegmentation of Organs at Risk for Intracranial Radiosurgeryen
dc.typeThesis-
dc.date.schoolyear113-1-
dc.description.degree碩士-
dc.contributor.oralexamcommittee廖俊智;蔡巧琳zh_TW
dc.contributor.oralexamcommitteeChun-Chih Liao;Chiao-Ling Tsaien
dc.subject.keyword深度學習,自動分割,放射手術,危及器官,電腦刀,zh_TW
dc.subject.keywordDeep Learning,Automatic Segmentation,Radiosurgery,Organs at Risk,CyberKnife,en
dc.relation.page72-
dc.identifier.doi10.6342/NTU202500256-
dc.rights.note同意授權(限校園內公開)-
dc.date.accepted2025-01-23-
dc.contributor.author-college醫學院-
dc.contributor.author-dept醫療器材與醫學影像研究所-
dc.date.embargo-lift2030-01-22-
Appears in Collections: Institute of Medical Device and Imaging

Files in This Item:
File | Size | Format
ntu-113-1.pdf (restricted access; not publicly available) | 4.16 MB | Adobe PDF


Unless otherwise indicated, all items in this system are protected by copyright, with all rights reserved.
