Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/8315

| Title: | Rethinking Pre-training in Medical Imaging (預訓練對於醫療影像的探討) |
|---|---|
| Author: | Yu-Cheng Chang (張友誠) |
| Advisor: | Winston Hsu (徐宏民) |
| Keywords: | Deep Learning, Medical Imaging, Pre-training |
| Publication Year: | 2020 |
| Degree: | Master's |
| Abstract: | Pre-training is a well-developed technique for extracting general feature representations from abundant data. However, the factors that affect how pre-training works in medical imaging are rarely studied. In this work, we fully explore the essence of pre-training in medical imaging and provide a comprehensive analysis. We conclude that both the target task complexity and the pre-training data modality have a considerable impact on the effectiveness of pre-training in medical imaging. In addition, we analyze the trainable scaling parameter γ in batch normalization (BatchNorm) and establish an original standard for efficiently assessing the effectiveness of pre-trained weights. We further propose Network Alchemy to stimulate the considerable potential of the network and fully utilize its parameters during the fine-tuning stage. Extensive experimental results demonstrate the robustness and generalization ability of the proposed methodology across various experimental scenarios. |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/8315 |
| DOI: | 10.6342/NTU202002487 |
| Full-text License: | Consent granted (open access worldwide) |
| Appears in Collections: | Department of Computer Science and Information Engineering |
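
The abstract highlights the BatchNorm scaling term γ as the basis for an efficient assessment of pre-trained weights. As an illustration only, the following Python/PyTorch sketch collects γ from every BatchNorm layer of a pre-trained backbone and reports simple statistics; the backbone choice (`resnet50` with ImageNet weights), the 0.1 near-zero threshold, and the reported summary statistics are assumptions made for this sketch, not the thesis's actual criterion.

```python
# Minimal sketch (illustrative; not the thesis's actual standard) of
# inspecting the BatchNorm scaling term γ in a pre-trained network.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

# ImageNet-pre-trained backbone; the thesis studies how the pre-training
# data modality affects transfer to medical imaging.
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)

# In PyTorch, BatchNorm's learnable scaling term γ is stored as `weight`
# (the shift term β is stored as `bias`).
gammas = [
    m.weight.detach().flatten()
    for m in model.modules()
    if isinstance(m, nn.BatchNorm2d)
]
all_gammas = torch.cat(gammas)

# Channels with near-zero |γ| are effectively silenced, so the spread of
# γ hints at how many channels the pre-trained weights actually use;
# one plausible reading of "efficiently assessing" pre-trained weights.
print(f"BatchNorm layers: {len(gammas)}")
print(f"γ mean: {all_gammas.mean():.4f}, std: {all_gammas.std():.4f}")
near_zero = (all_gammas.abs() < 0.1).float().mean()  # 0.1: assumed threshold
print(f"near-silent channels (|γ| < 0.1): {near_zero:.2%}")
```

Running the same summary on backbones pre-trained on different modalities would then allow the kind of side-by-side comparison the abstract describes, without any fine-tuning runs.
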
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| U0001-0508202017013100.pdf | 2.93 MB | Adobe PDF | View/Open |
Except where otherwise noted, all items in this repository are protected by copyright, with all rights reserved.
