Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84632
Full metadata record
DC field / value / language
dc.contributor.advisor張瑞峰zh_TW
dc.contributor.advisorRuey-Feng Changen
dc.contributor.author陳華嚴zh_TW
dc.contributor.authorHua-Yan Chenen
dc.date.accessioned2023-03-19T22:18:21Z-
dc.date.available2023-12-26-
dc.date.copyright2022-10-07-
dc.date.issued2022-
dc.date.submitted2002-01-01-
dc.identifier.citationR. L. Siegel, K. D. Miller, H. E. Fuchs, and A. Jemal, "Cancer statistics, 2022," CA: A Cancer Journal for Clinicians, vol. 72, no. 1, pp. 7-33, 2022, doi: https://doi.org/10.3322/caac.21708.
N. S. Weiss, "Breast cancer mortality in relation to clinical breast examination and breast self‐examination," The Breast Journal, vol. 9, pp. S86-S89, 2003.
L. Wang, "Early diagnosis of breast cancer," Sensors, vol. 17, no. 7, p. 1572, 2017.
L. S. Caplan, "Patient delay in seeking help for potential breast cancer," Public Health Reviews, vol. 23, no. 3, pp. 263-274, 1995.
L. S. Caplan and K. J. Helzlsouer, "Delay in breast cancer: a review of the literature," Public Health Reviews, vol. 20, no. 3-4, pp. 187-214, 1992.
I. Mittra, "Breast screening: the case for physical examination without mammography," The Lancet, vol. 343, no. 8893, pp. 342-344, 1994.
R. A. Smith et al., "American Cancer Society guidelines for breast cancer screening: update 2003," CA: A Cancer Journal for Clinicians, vol. 53, no. 3, pp. 141-169, 2003.
R. J. Hooley, L. Andrejeva, and L. M. Scoutt, "Breast cancer screening and problem solving using mammography, ultrasound, and magnetic resonance imaging," Ultrasound quarterly, vol. 27, no. 1, pp. 23-47, 2011.
D. Thigpen, A. Kappler, and R. Brem, "The role of ultrasound in screening dense breasts—A review of the literature and practical solutions for implementation," Diagnostics, vol. 8, no. 1, p. 20, 2018.
W. A. Berg and A. Vourtsis, "Screening breast ultrasound using handheld or automated technique in women with dense breasts," Journal of Breast Imaging, vol. 1, no. 4, pp. 283-296, 2019.
W. A. Berg, "Tailored supplemental screening for breast cancer: what now and what next?," American Journal of Roentgenology, vol. 192, no. 2, pp. 390-399, 2009.
H. K. McIsaac, D. S. Thordarson, R. Shafran, S. Rachman, and G. Poole, "Claustrophobia and the magnetic resonance imaging procedure," Journal of behavioral medicine, vol. 21, no. 3, pp. 255-268, 1998.
W. A. Berg et al., "Combined screening with ultrasound and mammography vs mammography alone in women at elevated risk of breast cancer," JAMA, vol. 299, no. 18, pp. 2151-2163, 2008.
C. M. Sehgal, S. P. Weinstein, P. H. Arger, and E. F. Conant, "A review of breast ultrasound," Journal of mammary gland biology and neoplasia, vol. 11, no. 2, pp. 113-123, 2006.
M. Golatta et al., "Interobserver reliability of automated breast volume scanner (ABVS) interpretation and agreement of ABVS findings with hand held breast ultrasound (HHUS), mammography and pathology results," European journal of radiology, vol. 82, no. 8, pp. e332-e336, 2013.
J. Wild and D. Neal, "Use of high-frequency ultrasonic waves for detecting changes of texture in living tissues," The Lancet, vol. 257, no. 6656, pp. 655-657, 1951.
A. Evans et al., "Breast ultrasound: recommendations for information to women and referring physicians by the European Society of Breast Imaging," Insights into imaging, vol. 9, no. 4, pp. 449-461, 2018.
G. Rizzatto, "Towards a more sophisticated use of breast ultrasound," European radiology, vol. 11, no. 12, pp. 2425-2435, 2001.
R. J. Hooley, L. M. Scoutt, and L. E. Philpotts, "Breast ultrasonography: state of the art," Radiology, vol. 268, no. 3, pp. 642-659, 2013.
H.-Y. Wang et al., "Differentiation of benign and malignant breast lesions: a comparison between automatically generated breast volume scans and handheld ultrasound examinations," European journal of radiology, vol. 81, no. 11, pp. 3190-3200, 2012.
J.-H. Choi, B. J. Kang, J. E. Baek, H. S. Lee, and S. H. Kim, "Application of computer-aided diagnosis in breast ultrasound interpretation: improvements in diagnostic performance according to reader experience," Ultrasonography, vol. 37, no. 3, p. 217, 2018.
Y. Shen et al., "Artificial intelligence system reduces false-positive findings in the interpretation of breast ultrasound exams," Nature communications, vol. 12, no. 1, pp. 1-13, 2021.
D.-R. Chen and Y.-H. Hsiao, "Computer-aided diagnosis in breast ultrasound," Journal of Medical Ultrasound, vol. 16, no. 1, pp. 46-56, 2008.
K. Doi, "Current status and future potential of computer-aided diagnosis in medical imaging," The British journal of radiology, vol. 78, no. suppl_1, pp. s3-s19, 2005.
A. Jalalian, S. Mashohor, R. Mahmud, B. Karasfi, M. I. B. Saripan, and A. R. B. Ramli, "Foundation and methodologies in computer-aided diagnosis systems for breast cancer detection," EXCLI journal, vol. 16, p. 113, 2017.
L. Nanni, S. Ghidoni, and S. Brahnam, "Handcrafted vs. non-handcrafted features for computer vision classification," Pattern Recognition, vol. 71, pp. 158-172, 2017.
V. Kumar et al., "Radiomics: the process and the challenges," Magnetic resonance imaging, vol. 30, no. 9, pp. 1234-1248, 2012.
Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436-444, 2015.
A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," Advances in neural information processing systems, vol. 25, 2012.
R. Yamashita, M. Nishio, R. K. G. Do, and K. Togashi, "Convolutional neural networks: an overview and application in radiology," Insights into imaging, vol. 9, no. 4, pp. 611-629, 2018.
J.-G. Lee et al., "Deep learning in medical imaging: general overview," Korean journal of radiology, vol. 18, no. 4, pp. 570-584, 2017.
P. W. Battaglia et al., "Relational inductive biases, deep learning, and graph networks," arXiv preprint arXiv:1806.01261, 2018.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
A. Dosovitskiy et al., "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv preprint arXiv:2010.11929, 2020.
X. Zhu, H. Hu, H. Wang, J. Yao, D. Ou, and D. Xu, "Region aware transformer for automatic breast ultrasound tumor segmentation," in Medical Imaging with Deep Learning, 2021.
F. Shamshad et al., "Transformers in medical imaging: A survey," arXiv preprint arXiv:2201.09873, 2022.
L. Liu, X. Liu, J. Gao, W. Chen, and J. Han, "Understanding the difficulty of training transformers," arXiv preprint arXiv:2004.08249, 2020.
A. Arnab, M. Dehghani, G. Heigold, C. Sun, M. Lučić, and C. Schmid, "Vivit: A video vision transformer," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 6836-6846.
Y.-C. Chang et al., "Automatic selection of representative slice from cine-loops of real-time sonoelastography for classifying solid breast masses," Ultrasound in medicine & biology, vol. 37, no. 5, pp. 709-718, 2011.
H. Zheng, Y. Zhang, L. Yang, C. Wang, and D. Z. Chen, "An annotation sparsification strategy for 3D medical image segmentation via representative selection and self-training," in Proceedings of the AAAI Conference on Artificial Intelligence, 2020, vol. 34, no. 04, pp. 6925-6932.
S. Bharati, P. Podder, M. Mondal, and V. Prasath, "CO-ResNet: Optimized ResNet model for COVID-19 diagnosis from X-ray images," International Journal of Hybrid Intelligent Systems, no. Preprint, pp. 1-15, 2021.
Q. Guan, Y. Huang, Z. Zhong, Z. Zheng, L. Zheng, and Y. Yang, "Diagnose like a radiologist: Attention guided convolutional neural network for thorax disease classification," arXiv preprint arXiv:1801.09927, 2018.
H. Tanaka, S.-W. Chiu, T. Watanabe, S. Kaoku, and T. Yamaguchi, "Computer-aided diagnosis system for breast ultrasound images using deep learning," Physics in Medicine & Biology, vol. 64, no. 23, p. 235013, 2019.
A. Vaswani et al., "Attention is all you need," Advances in neural information processing systems, vol. 30, 2017.
Z. Liu et al., "Swin transformer: Hierarchical vision transformer using shifted windows," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 10012-10022.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "Imagenet: A large-scale hierarchical image database," in 2009 IEEE conference on computer vision and pattern recognition, 2009: IEEE, pp. 248-255.
K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778.
O. Russakovsky et al., "Imagenet large scale visual recognition challenge," International journal of computer vision, vol. 115, no. 3, pp. 211-252, 2015.
K. He, X. Zhang, S. Ren, and J. Sun, "Identity mappings in deep residual networks," in European conference on computer vision, 2016: Springer, pp. 630-645.
G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4700-4708.
X. Wang, R. Girshick, A. Gupta, and K. He, "Non-local neural networks," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 7794-7803.
T. He, Z. Zhang, H. Zhang, Z. Zhang, J. Xie, and M. Li, "Bag of tricks for image classification with convolutional neural networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 558-567.
S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, "Aggregated residual transformations for deep neural networks," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 1492-1500.
H. Zhang et al., "Resnest: Split-attention networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 2736-2746.
X. Li, W. Wang, X. Hu, and J. Yang, "Selective kernel networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 510-519.
P. Li, J. Xie, Q. Wang, and W. Zuo, "Is second-order information helpful for large-scale visual recognition?," in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2070-2078.
Z. Gao, J. Xie, Q. Wang, and P. Li, "Global second-order pooling convolutional networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 3024-3033.
C. Qi and F. Su, "Contrastive-center loss for deep neural networks," in 2017 IEEE international conference on image processing (ICIP), 2017: IEEE, pp. 2851-2855.
K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification," in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1026-1034.
R. Kohavi, "A study of cross-validation and bootstrap for accuracy estimation and model selection," in IJCAI, 1995, vol. 14, no. 2: Montreal, Canada, pp. 1137-1145.
I. Loshchilov and F. Hutter, "Decoupled weight decay regularization," arXiv preprint arXiv:1711.05101, 2017.
L. Perez and J. Wang, "The effectiveness of data augmentation in image classification using deep learning," arXiv preprint arXiv:1712.04621, 2017.
S.-A. Rebuffi, S. Gowal, D. A. Calian, F. Stimberg, O. Wiles, and T. A. Mann, "Data Augmentation Can Improve Robustness," Advances in Neural Information Processing Systems, vol. 34, 2021.
Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang, "Random erasing data augmentation," in Proceedings of the AAAI conference on artificial intelligence, 2020, vol. 34, no. 07, pp. 13001-13008.
A.-M. Šimundić, "Measures of diagnostic accuracy: basic definitions," EJIFCC, vol. 19, no. 4, p. 203, 2009.
J. N. Mandrekar, "Receiver operating characteristic curve in diagnostic test assessment," Journal of Thoracic Oncology, vol. 5, no. 9, pp. 1315-1316, 2010.
Q. McNemar, "Note on the sampling error of the difference between correlated proportions or percentages," Psychometrika, vol. 12, no. 2, pp. 153-157, 1947.
Z. Liu, H. Mao, C.-Y. Wu, C. Feichtenhofer, T. Darrell, and S. Xie, "A convnet for the 2020s," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 11976-11986.
J. Yang et al., "Focal self-attention for local-global interactions in vision transformers," arXiv preprint arXiv:2107.00641, 2021.
M. Tan and Q. Le, "Efficientnet: Rethinking model scaling for convolutional neural networks," in International conference on machine learning, 2019: PMLR, pp. 6105-6114.
A. Steiner, A. Kolesnikov, X. Zhai, R. Wightman, J. Uszkoreit, and L. Beyer, "How to train your vit? data, augmentation, and regularization in vision transformers," arXiv preprint arXiv:2106.10270, 2021.
J. Frankle and M. Carbin, "The lottery ticket hypothesis: Finding sparse, trainable neural networks," arXiv preprint arXiv:1803.03635, 2018.
-
dc.identifier.urihttp://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84632-
dc.description.abstract乳癌為十大癌症之一,也是女性癌症致死率第二高的主因,若及早對其進行偵測、診斷及治療,就可以將病程延緩甚至治癒。在臨床篩查(Clinical screening)上,超音波檢測有助於評估乳房腫瘤的良惡性,其中手持型超音波(Handheld ultrasound, HHUS)是目前評估乳癌的主流工具。醫師在檢測有疑慮的區域時會鉅細靡遺地記錄掃描過程中可能有異常的部分,並且針對高度可疑的腫瘤做活體組織檢查(Biopsy),然而詳細地篩查會使得成像數量繁多,因而造成後續醫師在檢視時有不小的負擔。因此,我們提出一個自動挑片暨診斷系統以減輕醫護人員的重擔。本系統分為二階段:在第一階段中,我們使用基於ImageNet的預訓練Transformer模型,自每位患者的影像序列中挑出較有疑慮的影像;而在第二階段則使用改良過的卷積神經網路(Convolutional neural network, CNN)模型對這些有疑慮的影像做良惡性的診斷,該模型以預訓練的ResNeSt-50為基底,結合了全域二階池化模組(Global Second-order Pooling, GSoP),為了更進一步地獲得更泛化的結果,我們也於該模型中引入了對比中心損失函數(Contrastive center loss)。
在本研究中,我們使用了807位病人來評估我們所提出的系統,醫生對這些病人拍攝了至少5張以上的影像序列(平均影像數為44張),並且特別挑選出有疑慮的影像。在第一階段實驗,依照模型給予的評分,取用前一及前五高分的影像能涵蓋醫生所標註有疑慮的影像的正確率分別為74.35% (600/807)和97.27% (785/807);而在第二階段則使用每位病人前五高分的所有影像來評估其良惡性,總計有4035張影像,這些影像涵蓋兩種類別:分別是良性影像3,421張(包含僅有乳房正常組織而沒有腫瘤的影像)及具有惡性腫瘤之影像614張。實驗結果表明,提出的方法的診斷準確度、靈敏度、特異性和ROC曲線下面積(AUC)分別為79.85%、80.13%、79.80%和0.8641。兩個階段的實驗結果顯示了提出的自動挑片暨診斷系統可以減少人員挑片的負擔,並在臨床上提供診斷的第二意見。
zh_TW
dc.description.abstractBreast cancer is one of the ten most common cancers and the second leading cause of cancer death in women worldwide. Disease progression can be slowed, or the disease even cured, if it is detected, diagnosed, and treated at an early stage. In clinical screening, ultrasound helps assess whether a breast lesion is benign or malignant, and handheld ultrasound (HHUS) remains the mainstream tool for breast cancer assessment. When investigating a suspicious region of the breast, operators meticulously record abnormalities during scanning and suggest a biopsy for highly suspicious lesions. However, detailed screening produces many slices, which increases the review burden on clinicians. Hence, we propose a computer-aided automatic slice selection and diagnosis system to relieve this burden on medical staff. The system consists of two stages. In the first stage, a Transformer-based model pre-trained on ImageNet selects suspicious slices from each patient's slice sequence. In the second stage, a modified convolutional neural network (CNN) classifies the suspicious slices as benign or malignant; it is built on a pre-trained ResNeSt-50 backbone embedded with Global Second-order Pooling (GSoP) blocks. In addition, we introduce the contrastive center loss to improve the model's generalizability.
In this study, we evaluated the proposed methods on 807 patients; experienced clinicians recorded the screening slice sequences (range: 5 to 117 slices, average: 44 slices) and marked the slices of interest. In the first stage, the selection accuracy based on the top-1 and top-5 scores was 74.35% (600/807) and 97.27% (785/807), respectively. In the second stage, we used the top-5 slices of all 807 patients, 4,035 slices in total, to assess benignity and malignancy: 3,421 benign slices (including images of normal breast tissue without a tumor) and 614 slices containing malignant tumors. The experimental results show a diagnostic accuracy, sensitivity, specificity, and area under the ROC curve (AUC) of 79.85%, 80.13%, 79.80%, and 0.8641, respectively. Together, the two experiments show that the proposed system can reduce the slice-review workload and provide a second diagnostic opinion in clinical settings.
en
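The abstract's two evaluation steps — the stage-1 top-k coverage criterion (does the set of k highest-scoring slices contain a clinician-flagged slice?) and the stage-2 diagnostic metrics — plus the contrastive center loss can be sketched in plain Python. This is a minimal illustration with made-up scores, labels, and class centers, not the thesis implementation (which uses a pre-trained Transformer for scoring and a modified ResNeSt-50 for classification):

```python
# Minimal, self-contained sketch of the evaluation logic described in the
# abstract. All scores, labels, and centers below are hypothetical examples.

def topk_covers_flagged(scores, flagged, k):
    """True if any clinician-flagged slice index is among the k
    highest-scoring slices of one patient's sequence (the stage-1
    top-1/top-5 coverage criterion)."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return any(i in flagged for i in ranked[:k])

def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on malignant = 1), and specificity."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return (tp + tn) / len(y_true), tp / (tp + fn), tn / (tn + fp)

def contrastive_center_loss(features, labels, centers, delta=1e-6):
    """Contrastive center loss (Qi & Su, 2017): pull each feature toward
    its class center while pushing it away from the other class centers."""
    total = 0.0
    for x, y in zip(features, labels):
        intra = sum((a - b) ** 2 for a, b in zip(x, centers[y]))
        inter = sum(sum((a - b) ** 2 for a, b in zip(x, c))
                    for j, c in enumerate(centers) if j != y)
        total += intra / (inter + delta)
    return 0.5 * total

# One patient: per-slice suspicion scores; the clinician flagged slice 1.
scores = [0.10, 0.85, 0.40, 0.92, 0.30]
print(topk_covers_flagged(scores, {1}, k=1))  # slice 3 ranks first -> False
print(topk_covers_flagged(scores, {1}, k=2))  # slice 1 ranks second -> True

# Stage-2-style metrics on toy benign(0)/malignant(1) predictions.
print(diagnostic_metrics([1, 1, 0, 0, 0, 1, 0, 0],
                         [1, 0, 0, 0, 1, 1, 0, 0]))  # (0.75, 0.666..., 0.8)
```

In the thesis the per-slice scores would come from the Swin-Transformer's softmax output and the features from the CNN's penultimate layer; here they are toy values chosen only to exercise the formulas.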
dc.description.provenanceMade available in DSpace on 2023-03-19T22:18:21Z (GMT). No. of bitstreams: 1
U0001-3008202223181200.pdf: 1801661 bytes, checksum: 098563487accc472472d2e681a7c8d39 (MD5)
Previous issue date: 2022
en
dc.description.tableofcontents口試委員會審定書 i
致謝 ii
摘要 iii
Abstract v
Table of Contents vii
List of Figures ix
List of Tables xi
Chapter 1 Introduction 1
Chapter 2 Materials 6
Chapter 3 Methods 12
3.1. Automatic Slice Selection Stage 13
3.1.1. Shifted Windows Transformer 14
3.1.2. Slice Selection Method 17
3.2. Diagnosis Stage 19
3.2.1. Deep Residual Network Backbone 19
3.2.2. Global Second-order Pooling (GSoP) 21
3.2.3. The Proposed Model (ResNeSt-50 with GSoP blocks) 23
Chapter 4 Experimental Results 25
4.1. Experimental Setting 25
4.2. Training Strategies 27
4.3. Evaluation and Statistics 28
4.4. Comparisons of Different Models for Slice Selection 29
4.5. Comparisons of Different CNN Models for Diagnosis 31
4.6. Ablation Study of the Proposed Model for Diagnosis 34
Chapter 5 Discussion and Conclusion 38
References 43
-
dc.language.isoen-
dc.subject乳房篩查zh_TW
dc.subject自注意力zh_TW
dc.subject全局注意力zh_TW
dc.subject卷積神經網絡zh_TW
dc.subject電腦輔助診斷zh_TW
dc.subject手持型超音波影像zh_TW
dc.subject乳癌zh_TW
dc.subjectSelf-attentionen
dc.subjectBreast canceren
dc.subjectBreast screeningen
dc.subjectHandheld ultrasound imagingen
dc.subjectComputer-aided diagnosisen
dc.subjectConvolutional neural networken
dc.subjectGlobal attentionen
dc.title使用深度學習自動選取超音波影像及診斷zh_TW
dc.titleAutomatic Slice Selection and Diagnosis of Breast Ultrasound Image Using Deep Learningen
dc.typeThesis-
dc.date.schoolyear110-2-
dc.description.degree碩士-
dc.contributor.oralexamcommittee陳啟禎;羅崇銘zh_TW
dc.contributor.oralexamcommittee;;en
dc.subject.keyword乳癌,乳房篩查,手持型超音波影像,電腦輔助診斷,卷積神經網絡,全局注意力,自注意力,zh_TW
dc.subject.keywordBreast cancer,Breast screening,Handheld ultrasound imaging,Computer-aided diagnosis,Convolutional neural network,Global attention,Self-attention,en
dc.relation.page48-
dc.identifier.doi10.6342/NTU202202993-
dc.rights.note同意授權(限校園內公開)-
dc.date.accepted2022-09-19-
dc.contributor.author-college電機資訊學院-
dc.contributor.author-dept資訊工程學系-
dc.date.embargo-lift2024-09-16-
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File: ntu-110-2.pdf (1.76 MB, Adobe PDF)
Access restricted to NTU campus IPs (use the VPN service from off campus)


All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
