Please use this Handle URI to cite this document:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90172

| Title: | Prediction of PD-L1 Expression in Non-Small Cell Lung Cancer on Chest CT Scans: A Masked Image Model Approach Combined with a GAN Method |
| Author: | Pei-Yu Chou (周姵妤) |
| Advisor: | Chung-Ming Chen (陳中明) |
| Keywords: | Self-Supervised Learning, Masked Image Modeling, Generative Adversarial Network, PD-L1 Expression Levels, Immunotherapy, Non-Small Cell Lung Cancer, Deep Learning |
| Publication Year: | 2023 |
| Degree: | Master's |
| Abstract: | Lung cancer is a common and highly fatal malignancy worldwide. Despite significant advances in medical science, the five-year survival rate for advanced-stage lung cancer remains low. Recent therapeutic strategies, such as targeted therapy and immunotherapy, have brought hope to patients with advanced lung cancer. Immunotherapy with PD-1/PD-L1 inhibitors, the most widely used approach in non-small cell lung cancer, has shown promising results in late-stage treatment by counteracting the tumor's immune-evasion mechanisms and harnessing the body's own immune response to eliminate the tumor. However, accurately identifying the patients who are likely to benefit from this therapy remains challenging: the current assays of PD-L1 expression levels, which determine the suitability of immunotherapy, suffer from issues such as tumor heterogeneity and inconsistent staining standards, leading to suboptimal accuracy.

To address these limitations, this study builds a computer-aided diagnosis (CAD) system that uses non-invasive CT imaging to analyze the tumor as a whole. CAD approaches fall broadly into machine learning and deep learning. Deep learning models extract features automatically during training, but they require large amounts of annotated data, which is scarce in medical imaging. In recent years, self-supervised learning has introduced a training framework that leverages unlabeled data, reducing the dependency on annotated samples. However, existing self-supervised masked image models, despite strong results on natural images, face limitations on medical images, where the target objects are less clearly defined.

To overcome the limited data and unclear targets of medical imaging, and to improve the prediction accuracy of PD-L1 expression levels, this study proposes a Multi-task Masked Autoencoder (MTMAE) tailored to the characteristics of medical images. MTMAE has three key features: (1) a self-supervised masked image model, which transfers well to downstream tasks; (2) a segmentation task added to the multi-task learning framework, so that the extracted features distinguish foreground from background and better capture tumor characteristics; and (3) a generative adversarial network (GAN) that synthesizes additional images, exposing the model to a larger and more diverse set of features and easing the learning constraints imposed by limited data.

In experiments on a dataset of 188 PD-L1 samples, the proposed model classified PD-L1 expression at the 50% threshold with an AUC of 0.735 and an accuracy of 0.724, outperforming both conventional supervised pretraining (AUC 0.695) and a single-task reconstruction MAE (AUC 0.712). By combining self-supervised learning, multi-task learning, and GAN-generated images, this study addresses the characteristics of medical images and their dependence on data volume, and applies the result to assist the classification of PD-L1 expression levels. |
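To make the multi-task objective in the abstract concrete, the following is a minimal NumPy sketch of an MTMAE-style loss: mean-squared reconstruction error computed only on masked patches (the MAE objective), a soft Dice term for the auxiliary tumor-segmentation task, and a placeholder adversarial term for the GAN branch. The loss weights, mask ratio, patch counts, and the Dice formulation are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(num_patches, mask_ratio=0.75):
    """Randomly split patch indices into masked and visible sets,
    as in a masked autoencoder. The 0.75 ratio is an assumption."""
    n_masked = int(num_patches * mask_ratio)
    idx = rng.permutation(num_patches)
    return idx[:n_masked], idx[n_masked:]  # masked, visible

def reconstruction_loss(pred, target, masked_idx):
    """MSE evaluated only on the masked patches."""
    diff = pred[masked_idx] - target[masked_idx]
    return float(np.mean(diff ** 2))

def dice_loss(pred_mask, true_mask, eps=1e-6):
    """Soft Dice loss for the auxiliary segmentation task."""
    inter = np.sum(pred_mask * true_mask)
    denom = np.sum(pred_mask) + np.sum(true_mask)
    return float(1.0 - (2.0 * inter + eps) / (denom + eps))

def mtmae_objective(rec, seg, adv, w_rec=1.0, w_seg=0.5, w_adv=0.1):
    """Weighted sum of the three task losses (weights are assumptions)."""
    return w_rec * rec + w_seg * seg + w_adv * adv

# Toy example: 196 patches of 16x16 pixels, flattened to 256 values each.
patches = rng.standard_normal((196, 256))
masked_idx, _ = random_mask(196)

rec = reconstruction_loss(patches + 0.1, patches, masked_idx)  # constant offset -> MSE of 0.1**2
seg = dice_loss(np.full((32, 32), 0.9), np.ones((32, 32)))
total = mtmae_objective(rec, seg, adv=0.0)  # adversarial term omitted in this sketch
```

In an actual training loop, `pred` would come from the decoder of a vision-transformer autoencoder, `pred_mask` from a segmentation head sharing the same encoder, and the adversarial term from a GAN discriminator; here plain arrays stand in so the loss arithmetic is visible on its own.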
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90172 |
| DOI: | 10.6342/NTU202301777 |
| Full-Text Access: | Not authorized |
| Appears in Collections: | Graduate Institute of Biomedical Engineering |
Files in this item:
| File | Size | Format |
|---|---|---|
| ntu-111-2.pdf (restricted, not publicly available) | 2.6 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
