Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101181

| Title: | Multimodal AI for Radiotherapy: From Clinical Context to Auto-Contouring |
|---|---|
| Author: | 張漢庭 JOSEPH CHANG |
| Advisor: | 陳中明 Chung-Ming Chen |
| Keywords: | Radiotherapy Planning, Auto-Contouring, Large Language Models, Multimodal Deep Learning, Clinical Context Integration |
| Publication Year: | 2025 |
| Degree: | Doctoral |
| Abstract: | Radiotherapy treatment planning relies on accurate delineation of target volumes and organs-at-risk, a time-intensive process that creates bottlenecks in cancer care delivery. While deep learning-based auto-contouring systems have demonstrated promising performance on standard anatomical presentations, their accuracy often degrades on post-surgical anatomy, cases with prior radiotherapy, or complex medical histories, scenarios where image information alone may be insufficient. This thesis investigates a multimodal artificial intelligence system that integrates large language model (LLM)-based clinical context extraction with deep learning image segmentation to enable context-aware radiotherapy planning. An open-source LLM was fine-tuned on radiation oncology clinical text to extract structured anatomical information, such as organ presence and anatomical relationships, from operative notes, discharge summaries, and other clinical documentation. This clinical context was integrated into the segmentation system through an uncertainty-weighted adaptive loss function that adapts dynamically to patient-level anatomical complexity and inter-organ spatial relationships. The system was validated on a retrospective dataset that included anatomically complex presentations such as post-surgical cases. The LLM-guided system improved segmentation accuracy across multiple anatomical structures compared with image-only baselines. Improvements scaled with anatomical complexity: modest on standard anatomy and more pronounced in anatomically challenging cases. False-positive predictions for surgically absent structures were reduced. Ablation studies indicated that multi-task relationship learning and the uncertainty-weighted loss function each contributed to overall performance. The findings suggest that smaller open-source language models, fine-tuned on expert-validated data, can reach clinically useful performance while retaining the deployment advantages of open-source software, and that incorporating clinical context helps address the limitations of image-only systems on atypical anatomy. Despite limitations of sample size and single-institution validation, the results indicate that a multimodal approach combining clinical text and imaging offers a feasible route to improved organ contouring in complex cases; multi-center prospective validation would help establish its value in real-world radiotherapy workflows. (Minimal illustrative sketches of the LLM extraction step and the uncertainty-weighted loss follow the metadata table below.) |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101181 |
| DOI: | 10.6342/NTU202504749 |
| Full-Text License: | Not authorized |
| Electronic Full-Text Release Date: | N/A |
| Appears in Collections: | Graduate Institute of Biomedical Engineering |
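
The abstract describes the first component as a fine-tuned open-source LLM that converts free-text clinical documents into structured organ-level facts. The thesis's code is not included on this page, so the following is a minimal Python sketch under stated assumptions: the prompt wording, the JSON schema (`organ`, `present`, `note`), and the `llm_generate` callable are hypothetical stand-ins for whatever the actual system uses.

```python
"""Hypothetical sketch of LLM-based clinical context extraction.

All identifiers (extract_organ_context, llm_generate, the prompt, and the
JSON schema) are illustrative assumptions, not the thesis's actual code.
"""
import json
from typing import Callable

# Prompt asking the model to report organ surgical status as JSON.
PROMPT_TEMPLATE = (
    "You are a radiation oncology assistant. From the clinical note below, "
    "report each organ's surgical status as a JSON list of objects with keys "
    '"organ", "present" (true/false), and "note".\n\nNote:\n{note}\n\nJSON:'
)

def extract_organ_context(note: str, llm_generate: Callable[[str], str]) -> list[dict]:
    """Run the fine-tuned LLM on a clinical note and parse its JSON output.

    `llm_generate` stands in for the deployment's inference call (e.g., a
    locally served open-source model with a text-in/text-out interface).
    """
    raw = llm_generate(PROMPT_TEMPLATE.format(note=note))
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return []  # malformed output: fall back to image-only contouring
    if not isinstance(records, list):
        return []
    # Keep only well-formed records so downstream code can rely on the schema.
    return [r for r in records
            if isinstance(r, dict) and {"organ", "present"} <= r.keys()]

if __name__ == "__main__":
    # Stub LLM mimicking a response for a post-nephrectomy operative note.
    fake_llm = lambda _prompt: (
        '[{"organ": "left kidney", "present": false, '
        '"note": "radical nephrectomy"}]'
    )
    print(extract_organ_context("Status post left radical nephrectomy.", fake_llm))
```

Returning an empty list on malformed output lets the pipeline degrade gracefully to image-only contouring rather than fail the plan.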
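The second component is an uncertainty-weighted adaptive loss modulated by patient-level anatomical complexity. Its exact form is not given on this page, so this sketch borrows the learned homoscedastic-uncertainty weighting of Kendall et al. (2018) as a plausible stand-in; `UncertaintyWeightedLoss`, the `complexity` scalar, and the per-organ Dice losses are all assumptions rather than the thesis's implementation.

```python
"""Hypothetical sketch of an uncertainty-weighted multi-organ loss."""
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Weights each organ's loss by a learned uncertainty (Kendall-style)."""

    def __init__(self, num_organs: int):
        super().__init__()
        # One learnable log-variance per organ; exp(-s) down-weights organs
        # whose predictions the model is uncertain about (e.g., post-surgical
        # anatomy), while the +s term penalizes unbounded uncertainty.
        self.log_vars = nn.Parameter(torch.zeros(num_organs))

    def forward(self, per_organ_losses: torch.Tensor,
                complexity: float = 1.0) -> torch.Tensor:
        # per_organ_losses: shape (num_organs,), e.g., per-organ Dice losses.
        weights = torch.exp(-self.log_vars)
        # The complexity scalar is assumed to come from the LLM-extracted
        # clinical context (higher for anatomically complex patients).
        return (complexity * weights * per_organ_losses + self.log_vars).sum()

if __name__ == "__main__":
    loss_fn = UncertaintyWeightedLoss(num_organs=3)
    dice_losses = torch.tensor([0.2, 0.5, 0.9])  # dummy per-organ losses
    print(loss_fn(dice_losses, complexity=1.5))
```
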
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-114-1.pdf (not authorized for public access) | 2.82 MB | Adobe PDF |
Items in this system are protected by copyright, with all rights reserved, unless otherwise indicated.
