Please use this Handle URI to cite this document:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/78916
| Title: | Nodule Detection in Pulmonary CT Images: Multiple Window Convolutional Neural Network |
| Author: | Chia-Chen Li (李佳真) |
| Advisor: | Chung-Ming Chen (陳中明) |
| Keywords: | Lung nodule, Computed tomography, Computer-aided detection (CADe), Multiple window, Deep learning, Convolutional neural network |
| Publication Year: | 2018 |
| Degree: | Master |
| Abstract: | For decades, lung cancer has consistently ranked first among the leading causes of death in Taiwan, and thoracic computed tomography (CT) has become a major screening tool for achieving early diagnosis. A CT scan, however, is a three-dimensional image consisting of many slices, and reading them slice by slice is demanding for radiologists: nodules may be missed because of small size, low image contrast, or inconspicuous location. Moreover, screening results can vary between readers; in a National Lung Screening Trial (NLST) study, the average pairwise agreement on the screening result per case among 16 radiologists was only 82% [1].
To reduce potential human error and improve reading efficiency, many researchers have developed computer-aided detection (CADe) systems. Recent CADe work falls mainly into two families: machine learning-based methods and deep learning-based methods. The difficulty of machine learning-based algorithms lies in the reproducibility and correctness of lesion segmentation prior to feature extraction; in addition, extracting discriminative features from images requires substantial experience and careful feature engineering. Deep learning, by contrast, learns image features directly from labeled training data and can operate on raw images without prior segmentation, avoiding the variability introduced by different segmentation methods, and thus has the potential to overcome these challenges. Fully exploiting deep learning, however, requires a large amount of labeled data to train its many parameters; whereas natural-image datasets are abundant, annotated lung nodule images are comparatively scarce, since expert annotation is expensive and lesions are rare in medical datasets.
To address these problems, this study proposes the Multiple Window Convolutional Neural Network (Multi-Window CNN), a three-dimensional deep learning model for nodule detection that incorporates radiologists' clinical prior knowledge through multiple CT observation window settings. Nodule candidates are first generated with a morphology-based method. Each candidate volume of interest (VOI) is then rendered under four window settings (the full dynamic range of the raw HU values, the lung window, the abdomen window, and the bone window), and the four windowed versions are fed into a three-layer multi-channel CNN for automatic feature extraction. Incorporating this clinical prior knowledge supplies the network with richer gray-level information and also enlarges the training set, mitigating the difficulty of acquiring medical image data. Evaluated on two test sets, the LIDC-IDRI and NTUH databases, the proposed Multi-Window CNN achieved detection sensitivities of 93.8% and 76.1%, with 3.6 and 3.7 false positives per scan, respectively. |
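As an illustration of the candidate-generation step described in the abstract, the following is a minimal sketch of a morphology-based candidate generator, assuming NumPy and SciPy; the HU threshold and structuring element are illustrative assumptions, not the parameters used in the thesis.

```python
# Minimal sketch (not the thesis's actual pipeline): morphology-based nodule candidate
# generation. The HU threshold and structuring element are illustrative assumptions.
import numpy as np
from scipy import ndimage

def generate_candidates(hu_volume: np.ndarray, threshold_hu: float = -400.0):
    """Return voxel centroids of blob-like components as nodule candidate locations."""
    mask = hu_volume > threshold_hu                                     # keep soft-tissue-density voxels
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3, 3)))   # suppress thin vessels and noise
    labels, n_components = ndimage.label(mask)                          # connected-component analysis
    centroids = ndimage.center_of_mass(mask, labels, range(1, n_components + 1))
    return [tuple(int(round(c)) for c in centroid) for centroid in centroids]
```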
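Similarly, the sketch below illustrates the multi-window preprocessing and a small three-layer multi-channel 3D CNN, assuming PyTorch; the window levels and widths, the VOI size, and the layer widths are illustrative assumptions, since the abstract does not specify them.

```python
# Minimal sketch (not the author's original code): render a candidate VOI under four
# CT window settings and feed the stacked channels to a small three-layer 3D CNN.
import numpy as np
import torch
import torch.nn as nn

# Illustrative (level, width) pairs in Hounsfield units; "full" keeps the raw dynamic range.
WINDOWS = {
    "full":    None,
    "lung":    (-600, 1500),
    "abdomen": (40, 400),
    "bone":    (300, 1500),
}

def apply_window(hu_volume: np.ndarray, level_width) -> np.ndarray:
    """Clip a HU volume to one window setting and rescale to [0, 1]."""
    if level_width is None:                          # full dynamic range of the raw HU values
        lo, hi = hu_volume.min(), hu_volume.max()
    else:
        level, width = level_width
        lo, hi = level - width / 2, level + width / 2
    vol = np.clip(hu_volume, lo, hi)
    return (vol - lo) / max(hi - lo, 1e-6)

def to_multi_window_tensor(candidate_voi: np.ndarray) -> torch.Tensor:
    """Stack the four windowed versions of one candidate VOI as input channels."""
    channels = [apply_window(candidate_voi, lw) for lw in WINDOWS.values()]
    return torch.from_numpy(np.stack(channels)).float()   # shape (4, D, H, W)

class MultiWindowCNN(nn.Module):
    """Toy three-layer 3D CNN over the 4 window channels (nodule vs. non-nodule)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(4, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example: classify one hypothetical 32x32x32 candidate VOI.
voi = np.random.randint(-1000, 400, size=(32, 32, 32)).astype(np.float32)
logits = MultiWindowCNN()(to_multi_window_tensor(voi).unsqueeze(0))  # shape (1, 2)
```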
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/78916 |
| DOI: | 10.6342/NTU201803938 |
| Full-text Authorization: | Paid authorization |
| Electronic Full-text Release Date: | 2028-12-31 |
| Appears in Collections: | Institute of Biomedical Engineering |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| U0001-1708201811160800.pdf (Restricted access) | 6.21 MB | Adobe PDF | View/Open |
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
