NTU Theses and Dissertations Repository › College of Engineering › Department of Engineering Science and Ocean Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/8297
Full metadata record (DC field, value, language):
dc.contributor.advisor: 張恆華 (Herng-Hua Chang)
dc.contributor.author: Shih-Hsin Ho [en]
dc.contributor.author: 何世馨 [zh_TW]
dc.date.accessioned: 2021-05-20T00:51:35Z
dc.date.available: 2020-08-21
dc.date.available: 2021-05-20T00:51:35Z
dc.date.copyright: 2020-08-21
dc.date.issued: 2020
dc.date.submitted: 2020-08-07
dc.identifier.citation:
[1] World Health Organization, "Top 10 causes of death." https://www.who.int/gho/mortality_burden_disease/causes_death/top_10/en/ (accessed).
[2] J. A. Chalela et al., "Magnetic resonance imaging and computed tomography in emergency assessment of patients with suspected acute stroke: a prospective comparison," The Lancet, vol. 369, no. 9558, pp. 293-298, 2007.
[3] R. J. Mural et al., "A comparison of whole-genome shotgun-derived mouse chromosome 16 and the human genome," Science, vol. 296, no. 5573, pp. 1661-1671, 2002.
[4] T. M. Woodruff, J. Thundyil, S.-C. Tang, C. G. Sobey, S. M. Taylor, and T. V. Arumugam, "Pathophysiology, treatment, and animal and cellular models of human ischemic stroke," Molecular Neurodegeneration, vol. 6, no. 1, p. 11, 2011.
[5] A. Khan, A. Sohail, U. Zahoora, and A. S. Qureshi, "A survey of the recent architectures of deep convolutional neural networks," arXiv preprint arXiv:1901.06032, 2019.
[6] S. Dodge and L. Karam, "A study and comparison of human and deep learning recognition performance under visual distortions," IEEE, 2017, pp. 1-7.
[7] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[8] F. Rosenblatt, "The perceptron: a probabilistic model for information storage and organization in the brain," Psychological Review, vol. 65, no. 6, p. 386, 1958.
[9] X. Glorot, A. Bordes, and Y. Bengio, "Deep sparse rectifier neural networks," 2011, pp. 315-323.
[10] C. Nwankpa, W. Ijomah, A. Gachagan, and S. Marshall, "Activation functions: Comparison of trends in practice and research for deep learning," arXiv preprint arXiv:1811.03378, 2018.
[11] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533-536, 1986.
[12] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[13] S. Ruder, "An overview of gradient descent optimization algorithms," arXiv preprint arXiv:1609.04747, 2016.
[14] N. Qian, "On the momentum term in gradient descent learning algorithms," Neural Networks, vol. 12, no. 1, pp. 145-151, 1999.
[15] G. Hinton, N. Srivastava, and K. Swersky, "Neural Networks for Machine Learning, Lecture 6a: Overview of mini-batch gradient descent." http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf (accessed).
[16] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," 2015, pp. 3431-3440.
[17] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," 2012, pp. 1097-1105.
[18] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," Springer, 2015, pp. 234-241.
[19] C. Szegedy et al., "Going deeper with convolutions," 2015, pp. 1-9.
[20] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," 2016, pp. 770-778.
[21] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," arXiv preprint arXiv:1502.03167, 2015.
[22] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," 2017, pp. 4700-4708.
[23] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," 2016, pp. 2818-2826.
[24] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, "Inception-v4, Inception-ResNet and the impact of residual connections on learning," 2017.
[25] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks," 2018, pp. 4510-4520.
[26] A. G. Howard et al., "MobileNets: Efficient convolutional neural networks for mobile vision applications," arXiv preprint arXiv:1704.04861, 2017.
[27] J. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," 2018, pp. 7132-7141.
[28] M. Tan and Q. V. Le, "EfficientNet: Rethinking model scaling for convolutional neural networks," arXiv preprint arXiv:1905.11946, 2019.
[29] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, "Aggregated residual transformations for deep neural networks," 2017, pp. 1492-1500.
[30] E. Kreyszig, Advanced Engineering Mathematics, 10th Edition. Wiley, 2009.
[31] K. He, X. Zhang, S. Ren, and J. Sun, "Identity mappings in deep residual networks," Springer, 2016, pp. 630-645.
[32] S. Gao, M.-M. Cheng, K. Zhao, X.-Y. Zhang, M.-H. Yang, and P. H. S. Torr, "Res2Net: A new multi-scale backbone architecture," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
[33] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," 2010, pp. 249-256.
[34] K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification," 2015, pp. 1026-1034.
[35] Python Software Foundation, "Python 3.5.7." https://www.python.org/downloads/release/python-357/ (accessed).
[36] keras-team, "Keras 2.2.4." https://github.com/keras-team/keras (accessed).
[37] Google, "tensorflow-gpu 1.15.2." https://github.com/tensorflow/tensorflow (accessed).
[38] L. R. Dice, "Measures of the amount of ecologic association between species," Ecology, vol. 26, no. 3, pp. 297-302, 1945.
[39] H.-H. Chang, A. H. Zhuang, D. J. Valentino, and W.-C. Chu, "Performance measure characterization for evaluating neuroimage segmentation algorithms," NeuroImage, vol. 47, no. 1, pp. 122-135, 2009.
[40] P. Jaccard, "The distribution of the flora in the alpine zone. 1," New Phytologist, vol. 11, no. 2, pp. 37-50, 1912.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/8297
dc.description.abstract: Stroke has long been one of the leading causes of human death. By type it falls into two broad categories, ischemic and hemorrhagic, and magnetic resonance imaging (MRI) can serve as a preliminary basis for stroke interpretation. In related experiments, rodents (rats) are often chosen as experimental animals, with their stroke brain images forming the basis of the research. Selecting and interpreting the stroke region in these images requires handling by neurology experts, a tedious and time-consuming task that also suffers from inconsistent standards. Based on deep learning, this study develops a model that automatically segments the stroke region in T2-weighted and diffusion-weighted imaging (DWI) MR images. A fully convolutional network serves as the base architecture, into which the mixed residual block proposed in this study is incorporated; 3528 augmented T2 images and 3024 augmented DWI images are used for training, respectively. The experimental results show that the models trained separately on the two image types with this architecture achieve excellent segmentation on test data outside the training set, reaching 86.84% accuracy in segmenting the stroke region of T2 images and 87.36% on DWI images. [zh_TW]
dc.description.abstract: Stroke has long been one of the main causes of human death. It can be divided into two types, ischemic and hemorrhagic, and magnetic resonance imaging (MRI) can be used as a preliminary interpretation basis for stroke. In related experiments, rats are often selected as experimental animals, and their brain images are used for research. The determination and segmentation of stroke areas in rat brain stroke images usually requires manual processing by neurology experts, a process that is complicated, time-consuming, and prone to inconsistent standards. This thesis develops a deep-learning-based model for automatically segmenting stroke regions in T2-weighted and diffusion-weighted (DWI) images. A fully convolutional network serves as the basic structure, into which the mix block proposed by this thesis is incorporated. The network is trained on 3528 augmented T2 images and 3024 augmented DWI images. The experimental results show that the two models trained separately for the two types of rat images with this architecture achieve excellent stroke segmentation on held-out test data: the T2 segmentation reaches 86.84% accuracy, and the DWI segmentation reaches 87.36% accuracy. [en]
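The abstract reports segmentation quality as overlap percentages, and the table of contents lists the Dice coefficient and Jaccard index among the evaluation metrics. As a minimal illustrative sketch (not the thesis's actual evaluation code; function names and the toy masks are hypothetical), these two overlap measures between a predicted lesion mask and a manual delineation can be computed as:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity of two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

def jaccard_index(pred, truth):
    """Jaccard index of two binary masks: |A∩B| / |A∪B|."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union if union else 1.0

# Toy example: 3 predicted lesion pixels vs. 3 ground-truth pixels,
# 2 of which overlap.
pred = np.array([[0, 1, 1], [0, 1, 0]])
truth = np.array([[0, 1, 0], [0, 1, 1]])
print(dice_coefficient(pred, truth))  # 2*2/(3+3) ≈ 0.667
print(jaccard_index(pred, truth))     # 2/4 = 0.5
```

Note that Dice is always at least as large as Jaccard for the same pair of masks (D = 2J / (1 + J)), so the two metrics rank segmentations identically but on different scales.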
dc.description.provenance: Made available in DSpace on 2021-05-20T00:51:35Z (GMT). No. of bitstreams: 1. U0001-0708202014502200.pdf: 7082962 bytes, checksum: 95dba5b27de00e7cf0a8c96cb251e1d3 (MD5). Previous issue date: 2020 [en]
dc.description.tableofcontents:
Acknowledgments i
Chinese Abstract ii
Abstract iii
Table of Contents iv
List of Figures vii
List of Tables x
Chapter 1 Introduction 1
1.1 Research Background 1
1.2 Research Motivation 2
1.3 Research Objectives 2
1.4 Thesis Organization 2
Chapter 2 Literature Review 3
2.1 Convolutional Neural Networks 3
2.1.1 Neurons 3
2.1.2 Convolutional Layers 5
2.1.3 Pooling Layers 6
2.2 Neural Network Training 6
2.2.1 Feedforward 7
2.2.2 Backpropagation 7
2.2.3 Adaptive Moment Estimation (Adam) Optimizer 8
2.3 Fully Convolutional Networks 9
2.3.1 Encoder-Decoder Model 9
2.3.2 U-Net 11
2.4 Residual Learning 12
2.5 Batch Normalization 13
2.6 Cores of Different Network Architectures 14
2.6.1 Dense block 14
2.6.2 Inception module 14
2.6.3 Inverted residual block 15
2.6.4 Squeeze-and-Excitation block 16
2.6.5 ResNeXt block 17
Chapter 3 Research Design and Methods 18
3.1 Research Workflow 18
3.2 Data Preprocessing 19
3.3 Data Augmentation 19
3.4 Convolutional Neural Network Architecture 21
3.4.1 Residual Block 21
3.4.2 Mixed Residual Block 22
3.4.3 Convolutional Layer Architecture 24
3.4.4 Kernel Initialization 24
3.4.5 Other Network Architectures 24
Chapter 4 Experimental Results 25
4.1 Rat Brain MR Image Dataset 25
4.2 Neural Network Training Models 26
4.2.1 Experimental Environment 26
4.2.2 Training Parameters 26
4.3 Evaluation Metrics 27
4.3.1 Conformity 28
4.3.2 Sensitivity 29
4.3.3 Specificity 29
4.3.4 Dice Coefficient 29
4.3.5 Jaccard Index 29
4.4 T2 Image Experimental Results 30
4.4.1 Model Prediction Results 30
4.4.2 Comparison with Other Methods 41
4.5 DWI Image Experimental Results 49
4.5.1 Model Prediction Results 49
4.5.2 Comparison with Other Methods 60
Chapter 5 Conclusions and Future Work 68
5.1 Conclusions 68
5.2 Future Work 69
Appendix 70
References 72
dc.language.iso: zh-TW
dc.title: 基於深度學習之老鼠腦部磁振影像中風區域分割研究 [zh_TW]
dc.title: Stroke Lesion Segmentation in Rat Brain MR Images Based on Deep Learning [en]
dc.type: Thesis
dc.date.schoolyear: 108-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 丁肇隆 (Chao-Lung Ting), 江明彰 (Ming-Chang Chiang), 張瑞益 (Ray-I Chang)
dc.subject.keyword: 深度學習, 卷積神經網路, 磁共振成像, 缺血型腦中風, 影像分割 [zh_TW]
dc.subject.keyword: deep learning, convolutional neural network (CNN), magnetic resonance imaging (MRI), ischemic stroke, image segmentation [en]
dc.relation.page: 74
dc.identifier.doi: 10.6342/NTU202002636
dc.rights.note: License granted (publicly available worldwide)
dc.date.accepted: 2020-08-07
dc.contributor.author-college: 工學院 (College of Engineering) [zh_TW]
dc.contributor.author-dept: 工程科學及海洋工程學研究所 (Graduate Institute of Engineering Science and Ocean Engineering) [zh_TW]
Appears in Collections: 工程科學及海洋工程學系 (Department of Engineering Science and Ocean Engineering)

Files in This Item:
U0001-0708202014502200.pdf (6.92 MB, Adobe PDF)
Items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.
