Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92996

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 林致廷 | zh_TW |
| dc.contributor.advisor | Chih-Ting Lin | en |
| dc.contributor.author | 陳冠穎 | zh_TW |
| dc.contributor.author | Guan-Ying Chen | en |
| dc.date.accessioned | 2024-07-12T16:11:37Z | - |
| dc.date.available | 2024-07-13 | - |
| dc.date.copyright | 2024-07-12 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-07-10 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92996 | - |
| dc.description.abstract | 全球大流行病對於創建可靠的胸部X光(CXR)診斷模型帶來了很大的挑戰。2019年新型冠狀病毒(COVID-19)爆發以來,電腦視覺領域紛紛將傳統機器學習與深度學習模型適應到COVID-19檢測任務上,帶來出色的表現。然而,由於疫情突然爆發,加上收集和標註COVID-19 CXR數據需要大量的人力和時間資源,這些模型通常只使用有限的數據進行開發,導致大多數模型傾向於識別與疾病無關而與數據集相關的特徵,這使得這些模型難以廣泛應用。
處在後疫情時代,我們重新蒐集資料量相對足夠的COVID-19訓練資料集與兩個外部測試資料集,審視影響模型泛化的因素。我們將重點放在於尋找圖像空間和頻率域中可能的資料偏差來源。在空間域中,我們使用兩種肺部分割模型(傳統肺分割模型與生成式肺分割模型),比較不同的肺部分割方式去除背景雜訊對模型泛化的幫助;在頻域,我們則是使用離散小波轉換(DWT)或二分複小波轉換(DTCWT),產生不同的頻域組合,藉此排除具有噪聲的頻段。實驗結果顯示,使用兩種肺部分割模型共同協作,並使用小波轉換去除高頻訊號,深度學習模型和傳統機器學習模型在兩個外部資料集測試的平均準確率相較於用原始圖片訓練的模型,分別約提升7%與16%。但我們也指出,這樣的架構並非完美無缺,肺部分割模型也會受其訓練資料集的偏差影響,錯誤的肺部分割結果會導致後續特徵萃取與分類效果不佳。為了使模型能夠學習到目標(COVID-19)數據集的穩健特徵,我們提出了一種新的訓練方法,稱為多任務監督對比學習(MTSCL)。利用大型胸部X光公開數據集的多樣性,MTSCL可藉著對比來自大型數據集和目標數據集的圖像表示,幫助模型專注於目標數據集中與疾病相關的特徵。我們使用MTSCL訓練肺部異常分類模型,並將其結合樹模型形成一個兩階段分層分類框架用於COVID-19分類任務。研究結果顯示,兩階段分層分類框架在兩個外部資料集的平均準確率達到了75.19%,優於最先進的深度學習模型(74.08%)和傳統機器學習模型(68.67%)。並且經測試,第一階段的MTSCL模型檢測肺部異常的性能優於其他學習框架訓練的模型,平均AUC提升率為1.03%-5.33%。而第二階段,通過MTSCL產生的肺部病理區域分割遮罩,並配合小波濾波進行圖像處理,提高了樹模型區分COVID-19和典型肺炎的泛化性能,平均AUC分數提升了12.38%。結果顯示,MTSCL架構可以使模型學習到資料有限的目標資料庫的關鍵特徵,幫助提升模型泛化到其他資料的能力。 | zh_TW |
| dc.description.abstract | The global pandemic has presented challenges in creating reliable chest X-ray (CXR) diagnostic models. Since the outbreak of COVID-19 in 2019, the computer vision community has adapted traditional machine learning and deep learning models for COVID-19 detection, yielding impressive performance. However, these models were often developed with the limited data available in the early stage of the pandemic. As a result, most models tend to learn features that are specific to the datasets rather than related to the disease, making them difficult to apply broadly.
In the post-pandemic era, we re-collected a sufficiently large COVID-19 training dataset and two external test datasets to examine the factors that influence model generalization, focusing on potential sources of data bias in both the spatial and frequency domains of the images. In the spatial domain, we employ two lung segmentation models (a traditional and a generative lung segmentation model) and compare how their removal of background noise aids generalization. In the frequency domain, we apply the discrete wavelet transform (DWT) or the dual-tree complex wavelet transform (DTCWT) to generate different combinations of frequency sub-bands and thereby discard noisy bands. The experimental results show that, by combining the two lung segmentation models and using wavelet transforms for image processing, the deep learning and classical machine learning models improve their average accuracy on the two external test sets by approximately 7% and 16%, respectively, compared with models trained on raw images. However, we also note that this architecture is not flawless: the lung segmentation models can themselves be biased by their training datasets, and incorrect lung masks degrade subsequent feature extraction and classification. To enable the model to learn robust features from the target (COVID-19) dataset, we propose a novel training method called Multi-Task Supervised Contrastive Learning (MTSCL). Leveraging the diversity of large-scale public CXR datasets, MTSCL helps the model focus on disease-related features in the target dataset by contrasting image representations drawn from the large-scale and target datasets. We use MTSCL to train a lung abnormality classification model and combine it with tree models to form a two-stage hierarchical classification framework for the COVID-19 classification task. The results show that this framework achieves an average accuracy of 75.19% on the two external datasets, outperforming state-of-the-art deep learning models (74.08%) and traditional machine learning models (68.67%). In the first stage, the MTSCL model detects lung abnormalities better than models trained with other learning frameworks, with an average AUC improvement of 1.03%-5.33%. In the second stage, the lung lesion masks generated by MTSCL, combined with wavelet filtering for image processing, improve the generalization of the tree model in distinguishing COVID-19 from typical pneumonia, yielding an average AUC improvement of 12.38%. These results demonstrate that the MTSCL architecture enables a model to learn critical features from a limited target dataset, thereby enhancing its ability to generalize to other data. (Illustrative sketches of the wavelet sub-band filtering and the supervised contrastive loss described here follow the metadata table below.) | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-07-12T16:11:37Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-07-12T16:11:37Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Certification by the Oral Examination Committee i
Acknowledgements ii Chinese Abstract iii ABSTRACT v CONTENTS vii LIST OF FIGURES x LIST OF TABLES xii Chapter 1 Research background 1 1.1 Introduction to the novel coronavirus (COVID-19) 1 1.2 COVID-19 diagnosis methods 2 1.3 Development and challenges of computer-aided diagnosis using chest X-ray imaging 5 1.4 Research motivation 7 1.5 The organization of the dissertation 8 Chapter 2 Application of chest X-ray images for COVID-19: Model principles and literature review 11 2.1 Deep learning models and COVID-19 detection applications 11 2.1.1 Deep learning model architectures for image tasks 11 2.1.2 Applications of deep models in COVID-19 detection 17 2.2 Traditional machine learning models and COVID-19 detection applications 20 2.2.1 Traditional machine learning models for image tasks 20 2.2.2 Applications of traditional machine learning models in COVID-19 detection 23 2.2.3 Combining traditional machine learning models with deep learning models for COVID-19 detection 24 2.3 Generalization issues of COVID-19 detection models 25 Chapter 3 COVID-19 chest X-ray image dataset 28 3.1 Potential risks of COVID-19 chest X-ray image datasets 28 3.2 Introduction to collected primary datasets 30 3.3 COVID-19 training and external test datasets 33 3.3.1 Dataset composition 33 3.3.2 Data preprocessing 35 Chapter 4 Exploration of factors influencing COVID-19 model generalization 37 4.1 Experimental framework 38 4.1.1 Lung segmentation module 39 4.1.2 Wavelet transform 41 4.1.3 Classification models 43 4.2 Datasets 45 4.3 Implementation details 46 4.3.1 Training parameters and mask post-processing for lung segmentation module 46 4.3.2 Parameter settings for wavelet transform 47 4.3.3 Training of deep learning models and tree models 47 4.4 Experimental results 48 4.4.1 Testing the generalization of the model with raw data 48 4.4.2 Lung segmentation and model generalization 51 4.4.3 Investigation of image frequency domain signals and model generalization based on lung-segmented images 56 4.5 Discussion and conclusion 58 Chapter 5 Exploring model generalization based on multi-task contrastive learning 63 5.1 Experimental framework 65 5.1.1 Multi-Task Contrastive Learning (MTCL) 65 5.1.2 Lung segmentation module 71 5.1.3 Training steps for the tree models 71 5.2 Datasets 72 5.3 Implementation details 73 5.4 Experimental results 74 5.4.1 Multi-task contrastive learning for lung abnormality classification 75 5.4.2 Impact of lung lesion segmentation and wavelet transform on the generalization of typical pneumonia and COVID-19 classification models 81 5.5 Discussion and conclusion 87 Chapter 6 Conclusion and future work 92 REFERENCES 97 | - |
| dc.language.iso | en | - |
| dc.subject | 新型冠狀病毒 | zh_TW |
| dc.subject | 胸部X光 | zh_TW |
| dc.subject | 模型泛化 | zh_TW |
| dc.subject | 對比損失 | zh_TW |
| dc.subject | 對比學習 | zh_TW |
| dc.subject | COVID-19 | en |
| dc.subject | contrastive loss | en |
| dc.subject | contrastive learning | en |
| dc.subject | chest X-ray | en |
| dc.subject | model generalization | en |
| dc.title | 胸部X光影像模型的泛化研究:使用COVID-19案例探討 | zh_TW |
| dc.title | A model-generalization study with chest X-ray images: a case study on COVID-19 | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | Doctoral | - |
| dc.contributor.oralexamcommittee | 郭柏齡;張瑞峰;劉浩澧;傅楸善;劉遠楨 | zh_TW |
| dc.contributor.oralexamcommittee | Po-Ling Kuo;Ruey-Feng Chang;Hao-Li Liu;Chiou-Shann Fuh;Yuan-Chen Liu | en |
| dc.subject.keyword | 胸部X光,新型冠狀病毒,對比學習,對比損失,模型泛化 | zh_TW |
| dc.subject.keyword | chest X-ray, COVID-19, contrastive learning, contrastive loss, model generalization | en |
| dc.relation.page | 102 | - |
| dc.identifier.doi | 10.6342/NTU202401567 | - |
| dc.rights.note | Not authorized | - |
| dc.date.accepted | 2024-07-10 | - |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
| dc.contributor.author-dept | Graduate Institute of Biomedical Electronics and Bioinformatics | - |
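The abstract above describes two techniques only in prose: wavelet sub-band filtering of CXR images and the supervised contrastive loss underlying MTSCL. The sketches below illustrate the general ideas; they are not the author's code, and the function names, parameters, and library choices (PyWavelets, PyTorch) are assumptions made for illustration.

A minimal sketch of DWT-based sub-band filtering, assuming PyWavelets and a grayscale CXR array; the thesis may use a different wavelet family, decomposition level, or the DTCWT instead:

```python
import numpy as np
import pywt  # PyWavelets


def suppress_high_frequency(image: np.ndarray, wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Decompose a grayscale CXR with a 2-D DWT, zero the finest-scale
    (highest-frequency) detail sub-bands, and reconstruct the image."""
    coeffs = pywt.wavedec2(image.astype(np.float32), wavelet, level=level)
    # coeffs layout: [cA_L, (cH_L, cV_L, cD_L), ..., (cH_1, cV_1, cD_1)]
    cH1, cV1, cD1 = coeffs[-1]
    coeffs[-1] = (np.zeros_like(cH1), np.zeros_like(cV1), np.zeros_like(cD1))
    filtered = pywt.waverec2(coeffs, wavelet)
    return filtered[: image.shape[0], : image.shape[1]]  # crop any padding added by the transform
```

A minimal sketch of a supervised contrastive loss of the kind MTSCL builds on (Khosla et al.), assuming PyTorch; the batch construction, the multi-task pairing of large-scale and target-dataset images, and the temperature value are illustrative assumptions:

```python
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(features: torch.Tensor, labels: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """features: (N, D) embeddings; labels: (N,). Pulls same-label samples
    together and pushes different-label samples apart in embedding space."""
    z = F.normalize(features, dim=1)
    logits = z @ z.T / temperature                          # pairwise similarity logits
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(self_mask, float("-inf"))   # exclude self-comparisons
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_per_anchor = pos_mask.sum(dim=1)
    sum_pos_log_prob = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob)).sum(dim=1)
    loss = -sum_pos_log_prob / pos_per_anchor.clamp(min=1)
    return loss[pos_per_anchor > 0].mean()                  # average over anchors that have positives
```

In a two-stage framework like the one described in the abstract, a deep model trained with such a loss would first flag lung abnormalities, and a tree model operating on lesion-masked, wavelet-filtered images would then separate COVID-19 from typical pneumonia; the exact wiring is specific to the thesis and is not reproduced here.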
Appears in Collections: Graduate Institute of Biomedical Electronics and Bioinformatics
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-112-2.pdf (not authorized for public access) | 5.92 MB | Adobe PDF |
