DSpace / NTU Theses and Dissertations Repository
管理學院 (College of Management) > 資訊管理學系 (Department of Information Management)
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83356
Full metadata record

dc.contributor.advisor: 林永松 (Yeong-Sung Lin)
dc.contributor.author: Chieh-Yun Cheng [en]
dc.contributor.author: 鄭捷云 [zh_TW]
dc.date.accessioned: 2023-03-19T21:05:29Z
dc.date.copyright: 2022-10-14
dc.date.issued: 2022
dc.date.submitted: 2022-09-21
dc.identifier.citation:
[1] M. Shehata, A. Alksas, R. T. Abouelkheir, A. Elmahdy, A. Shaffie, A. Soliman, M. Ghazal, H. A. Khalifeh, A. A. Razek, and A. El-Baz, "A new computer-aided diagnostic (CAD) system for precise identification of renal tumors," in 2021 IEEE 18th International Symposium on Biomedical Imaging, pp. 1378–1381, 2021.
[2] K. H. Park, K. S. Ryu, and K. H. Ryu, "Determining minimum feature number of classification on clear cell renal cell carcinoma clinical dataset," in 2016 International Conference on Machine Learning and Cybernetics, vol. 2, pp. 894–898, 2016.
[3] S. H. Raza, Y. Sharma, Q. Chaudry, A. N. Young, and M. D. Wang, "Automated classification of renal cell carcinoma subtypes using scale invariant feature transform," in 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 6687–6690, 2009.
[4] F. Han, S. Liao, S. Yuan, R. Wu, Y. Zhao, and Y. Xie, "Explainable prediction of renal cell carcinoma from contrast-enhanced CT images using deep convolutional transfer learning and the Shapley additive explanations approach," in 2021 IEEE International Conference on Image Processing, pp. 3802–3806, 2021.
[5] A. M. Osowska-Kurczab, T. Markiewicz, M. Dziekiewicz, and M. Lorent, "Combining texture analysis and deep learning in renal tumour classification task," in 2020 IEEE 21st International Conference on Computational Problems of Electrical Engineering, pp. 1–4, 2020.
[6] H. S. Lee, H. Hong, and J. Kim, "Detection and segmentation of small renal masses in contrast-enhanced CT images using texture and context feature classification," in 2017 IEEE 14th International Symposium on Biomedical Imaging, pp. 583–586, 2017.
[7] S. Ahmed, K. M. Iftekharuddin, and A. Vossough, "Efficacy of texture, shape, and intensity feature fusion for posterior-fossa tumor segmentation in MRI," IEEE Transactions on Information Technology in Biomedicine, vol. 15, no. 2, pp. 206–213, 2011.
[8] W. S. H. M. W. Ahmad and M. F. Ahmad Fauzi, "Comparison of different feature extraction techniques in content-based image retrieval for CT brain images," in 2008 IEEE 10th Workshop on Multimedia Signal Processing, pp. 503–508, 2008.
[9] A. B. Mathews and M. Jeyakumar, "Analysis of lung tumor detection using various segmentation techniques," in 2020 International Conference on Inventive Computation Technologies, pp. 454–458, 2020.
[10] E. Hadjidemetriou, M. Grossberg, and S. Nayar, "Multiresolution histograms and their use for recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 7, pp. 831–847, 2004.
[11] M. P. Arakeri and G. R. M. Reddy, "Medical image retrieval system for diagnosis of brain tumor based on classification and content similarity," in 2012 Annual IEEE India Conference, pp. 416–421, 2012.
[12] C. Li, X. Wang, J. Li, S. Eberl, M. Fulham, Y. Yin, and D. D. Feng, "Joint probabilistic model of shape and intensity for multiple abdominal organ segmentation from volumetric CT images," IEEE Journal of Biomedical and Health Informatics, vol. 17, no. 1, pp. 92–102, 2013.
[13] A. Oliveira, S. Pereira, and C. A. Silva, "Augmenting data when training a CNN for retinal vessel segmentation: How to warp?," in 2017 IEEE 5th Portuguese Meeting on Bioengineering, pp. 1–4, 2017.
[14] H. Jiang, T. Shi, Z. Bai, and L. Huang, "AHCNet: An application of attention mechanism and hybrid connection for liver tumor segmentation in CT volumes," IEEE Access, vol. 7, pp. 24898–24909, 2019.
[15] L. B. da Cruz, J. D. L. Araújo, J. L. Ferreira, J. O. B. Diniz, A. C. Silva, J. D. S. de Almeida, A. C. de Paiva, and M. Gattass, "Kidney segmentation from computed tomography images using deep neural network," Computers in Biology and Medicine, vol. 123, pp. 103906–103918, Aug. 2020.
[16] S. Sandabad, Y. S. Tahri, A. Benba, and A. Hammouch, "New tumor detection method using NL-means filter and histogram study," in 2015 Third World Conference on Complex Systems, pp. 1–5, 2015.
[17] S. Sara, B. Achraf, S. T. Yassine, and H. Ahmed, "New method of tumor extraction using a histogram study," in 2015 SAI Intelligent Systems Conference, pp. 813–817, 2015.
[18] M. Ahmad, D. Ai, G. Xie, S. F. Qadri, H. Song, Y. Huang, Y. Wang, and J. Yang, "Deep belief network modeling for automatic liver segmentation," IEEE Access, vol. 7, pp. 20585–20595, 2019.
[19] M. Islam, K. N. Khan, and M. S. Khan, "Evaluation of preprocessing techniques for U-Net based automated liver segmentation," in 2021 International Conference on Artificial Intelligence, pp. 187–192, 2021.
[20] M. K. Akter, S. M. Khan, S. Azad, and S. A. Fattah, "Automated brain tumor segmentation from MRI data based on exploration of histogram characteristics of the cancerous hemisphere," in 2017 IEEE Region 10 Humanitarian Technology Conference, pp. 815–818, 2017.
[21] D.-T. Lin, C.-C. Lei, and S.-W. Hung, "Computer-aided kidney segmentation on abdominal CT images," IEEE Transactions on Information Technology in Biomedicine, vol. 10, no. 1, pp. 59–65, 2006.
[22] A. Skalski, J. Jakubowski, and T. Drewniak, "Kidney tumor segmentation and detection on computed tomography data," in 2016 IEEE International Conference on Imaging Systems and Techniques, pp. 238–242, 2016.
[23] M. Syed Abdaheer and E. Khan, "Shape based classification of breast tumors using fractal analysis," in 2009 International Multimedia, Signal Processing and Communication Technologies, pp. 272–275, 2009.
[24] B. H. Asodekar, S. A. Gore, and A. D. Thakare, "Brain tumor analysis based on shape features of MRI using machine learning," in 2019 5th International Conference On Computing, Communication, Control And Automation, pp. 1–5, 2019.
[25] T. Qin, Z. Wang, K. He, Y. Shi, Y. Gao, and D. Shen, "Automatic data augmentation via deep reinforcement learning for effective kidney tumor segmentation," in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1419–1423, 2020.
[26] S. Pereira, A. Pinto, V. Alves, and C. A. Silva, "Brain tumor segmentation using convolutional neural networks in MRI images," IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1240–1251, 2016.
[27] N. Goceri and E. Goceri, "A neural network based kidney segmentation from MR images," in 2015 IEEE 14th International Conference on Machine Learning and Applications, pp. 1195–1198, 2015.
[28] Q. Yu, Y. Shi, J. Sun, Y. Gao, J. Zhu, and Y. Dai, "Crossbar-Net: A novel convolutional neural network for kidney tumor segmentation in CT images," IEEE Transactions on Image Processing, vol. 28, no. 8, pp. 4060–4074, 2019.
[29] C.-C. Lee, P.-C. Chung, and H.-M. Tsai, "Identifying multiple abdominal organs from CT image series using a multimodule contextual neural network and spatial fuzzy rules," IEEE Transactions on Information Technology in Biomedicine, vol. 7, no. 3, pp. 208–217, 2003.
[30] Z. Tang, X. Peng, K. Li, and D. N. Metaxas, "Towards efficient U-Nets: A coupled and quantized approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 8, pp. 2038–2050, 2020.
[31] J. Wu, Y. Zhang, K. Wang, and X. Tang, "Skip connection U-Net for white matter hyperintensities segmentation from MRI," IEEE Access, vol. 7, pp. 155194–155202, 2019.
[32] Q. Huang, Y. Zhou, and L. Tao, "Dual-term loss function for shape-aware medical image segmentation," in 2021 IEEE 18th International Symposium on Biomedical Imaging, pp. 1798–1802, 2021.
[33] S. Dangoury, S. Abouzahir, A. Alali, and M. Sadik, "Impacts of losses functions on the quality of the ultrasound image by using machine learning algorithms," in 2021 IEEE International Conference on Automatic Control and Intelligent Systems, pp. 380–385, 2021.
[34] X.-F. Xi, L. Wang, V. S. Sheng, Z. Cui, B. Fu, and F. Hu, "Cascade U-ResNets for simultaneous liver and lesion segmentation," IEEE Access, vol. 8, pp. 68944–68952, 2020.
[35] X. Hou, C. Xie, F. Li, J. Wang, C. Lv, G. Xie, and Y. Nan, "A triple-stage self-guided network for kidney tumor segmentation," in 2020 IEEE 17th International Symposium on Biomedical Imaging, pp. 341–344, 2020.
[36] A. M. Osowska-Kurczab, T. Markiewicz, M. Dziekiewicz, and M. Lorent, "Textural and deep learning methods in recognition of renal cancer types based on CT images," in 2020 International Joint Conference on Neural Networks, pp. 1–8, 2020.
[37] Y. Han and J. C. Ye, "Framing U-Net via deep convolutional framelets: Application to sparse-view CT," IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1418–1429, 2018.
[38] J. Guo, W. Zeng, S. Yu, and J. Xiao, "RAU-Net: U-Net model based on residual and attention for kidney and kidney tumor segmentation," in 2021 IEEE International Conference on Consumer Electronics and Computer Engineering, pp. 353–356, 2021.
[39] H. Qassim, A. Verma, and D. Feinzimer, "Compressed residual-VGG16 CNN model for big data places image recognition," in 2018 IEEE 8th Annual Computing and Communication Workshop and Conference, pp. 169–175, 2018.
[40] L. T. Thu Hong, N. Chi Thanh, and T. Q. Long, "Polyp segmentation in colonoscopy images using ensembles of U-Nets with EfficientNet and asymmetric similarity loss function," in 2020 RIVF International Conference on Computing and Communication Technologies, pp. 1–6, 2020.
[41] C.-H. Hsiao, T.-L. Sun, P.-C. Lin, T.-Y. Peng, Y.-H. Chen, C.-Y. Cheng, F.-J. Yang, S.-Y. Yang, C.-H. Wu, F. Y.-S. Lin, and Y. Huang, "A deep learning-based precision volume calculation approach for kidney and tumor segmentation on computed tomography images," Computer Methods and Programs in Biomedicine, vol. 221, p. 106861, 2022.
[42] B. Woo and M. Lee, "Comparison of tissue segmentation performance between 2D U-Net and 3D U-Net on brain MR images," in 2021 International Conference on Electronics, Information, and Communication, pp. 1–4, 2021.
[43] S. Chen, G. Hu, and J. Sun, "Medical image segmentation based on 3D U-Net," in 2020 19th International Symposium on Distributed Computing and Applications for Business Engineering and Science, pp. 130–133, 2020.
[44] N. Zettler and A. Mastmeyer, "Comparison of 2D vs 3D U-Net organ segmentation in abdominal 3D CT images," arXiv, vol. abs/2107.04062, 2021.
[45] M. Srikrishna, R. A. Heckemann, J. B. Pereira, G. Volpe, A. Zettergren, S. Kern, E. Westman, I. Skoog, and M. Schöll, "Comparison of two-dimensional- and three-dimensional-based U-Net architectures for brain tissue classification in one-dimensional brain CT," Frontiers in Computational Neuroscience, vol. 15, p. 785244, Jan. 2022.
[46] J. B. S. Carvalho, J.-M. Moreira, M. A. T. Figueiredo, and N. Papanikolaou, "Automatic detection and segmentation of lung lesions using deep residual CNNs," in 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering, pp. 977–983, 2019.
[47] K. Hu, C. Liu, X. Yu, J. Zhang, Y. He, and H. Zhu, "A 2.5D cancer segmentation for MRI images based on U-Net," in 2018 5th International Conference on Information Science and Control Engineering, pp. 6–10, 2018.
[48] J. Chen, L. Yang, Y. Zhang, M. Alber, and D. Z. Chen, "Combining fully convolutional and recurrent neural networks for 3D biomedical image segmentation," in Advances in Neural Information Processing Systems (D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, eds.), vol. 29, Curran Associates, Inc., 2016.
[49] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, "CNN features off-the-shelf: An astounding baseline for recognition," in 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 512–519, 2014.
[50] L. Shao, F. Zhu, and X. Li, "Transfer learning for visual categorization: A survey," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 5, pp. 1019–1034, 2015.
[51] N. Tajbakhsh, J. Y. Shin, S. R. Gurudu, R. T. Hurst, C. B. Kendall, M. B. Gotway, and J. Liang, "Convolutional neural networks for medical image analysis: Full training or fine tuning?," IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1299–1312, 2016.
[52] O. A. B. Penatti, K. Nogueira, and J. A. dos Santos, "Do deep features generalize from everyday objects to remote sensing and aerial scenes domains?," in 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 44–51, 2015.
[53] L. Li and W. Seo, "Deep learning and transfer learning for skin cancer segmentation and classification," in 2021 IEEE 21st International Conference on Bioinformatics and Bioengineering, pp. 1–5, 2021.
[54] X. Yan, K. Yuan, W. Zhao, S. Wang, Z. Li, and S. Cui, "An efficient hybrid model for kidney tumor segmentation in CT images," in 2020 IEEE 17th International Symposium on Biomedical Imaging, pp. 333–336, 2020.
[55] Q. Xie, M.-T. Luong, E. Hovy, and Q. V. Le, "Self-training with noisy student improves ImageNet classification," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684–10695, 2020.
[56] KiTS19 Challenge Homepage, https://kits19.grand-challenge.org/, 2019.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83356
dc.description.abstract: 在腎臟腫瘤的醫療診斷上,醫生往往需要透過電腦斷層掃描進行判斷。傳統上,醫療影像的判讀高度依賴醫生的臨床經驗與專業知識,再加上電腦斷層掃描的大量切片與低對比度導致醫生需要花費大量的時間與精力。隨著影像辨識技術的進步,開始有許多自動分割的技術產生從而輔助醫生提升診斷的效率。 本文提出一種五階段架構設計,引入了EfficientNet with noisy student作為U-Net編碼器。在第一階段分割腎臟區域,並以其為基礎在第二階段分別訓練可以針對不同腫瘤特色的模型。在第三階段綜合第二階段中三個模型的優點進行加權投票從而提升腫瘤分割的表現。接著在第四階段檢查預測結果,降低偽陰性的發生。最後,在第五階段補充腫瘤的端點區域。 我們使用2019 Kidney Tumor Segmentation Challenge (KiTS19) 資料集進行實驗,該資料集包含210位有標記的病患與90位未標記的病患。在KiTS19官網中,我們得到腎臟分割的dice分數為0.9660,腫瘤分割的dice分數為0.7407。在本研究中,我們主要有兩個貢獻。第一點為提升2D模型在腎臟與腫瘤分割的表現。第二點為加快預測速度和降低所需的計算資源,本架構可以使模型靈活應用於資料有限或運算資源有限的場景中。另外,我們以本論文所提出的架構建立了系統,希望通過這個系統讓醫生在臨床診斷更為便利。 [zh_TW]
dc.description.abstract: In the medical diagnosis of kidney tumors, physicians often rely on computed tomography (CT). Traditionally, interpreting medical images depends heavily on a physician's clinical experience and professional knowledge, and the large number of slices and low contrast of a CT scan further demand considerable time and effort. With advances in image segmentation technology, many automatic segmentation techniques have emerged to help physicians diagnose more efficiently. This thesis proposes a five-stage architecture that introduces EfficientNet with Noisy Student pre-training as the U-Net encoder. The first stage segments the kidney region. Building on that result, the second stage separately trains models that target different tumor characteristics. The third stage combines the strengths of the three second-stage models through weighted voting to improve tumor segmentation performance. The fourth stage then checks the predictions to reduce false negatives. Finally, the fifth stage supplements the endpoint regions of the tumor. We performed experiments on the 2019 Kidney Tumor Segmentation Challenge (KiTS19) dataset, which contains 210 labeled and 90 unlabeled patients. On the official KiTS19 website, our Dice score is 0.9660 for kidney segmentation and 0.7407 for tumor segmentation. This study makes two main contributions. First, it improves the performance of 2D models on kidney and tumor segmentation. Second, it speeds up prediction and reduces the required computing resources, making the architecture flexible enough for scenarios with limited data or limited computation. In addition, we built a system based on the proposed architecture, aiming to make clinical diagnosis more convenient for physicians. [en]
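The abstract's third stage fuses three tumor models by weighted voting, and results are reported as Dice scores. As an illustrative sketch only (the weights, threshold, and function names below are hypothetical and not taken from the thesis), these two ideas can be expressed as:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def weighted_vote(masks, weights, threshold=0.5):
    """Fuse binary masks from several models by per-pixel weighted voting.

    A pixel is labeled tumor when the normalized weighted sum of the
    models' votes exceeds `threshold`.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize so scores lie in [0, 1]
    stacked = np.stack([np.asarray(m, dtype=float) for m in masks])
    score = np.tensordot(weights, stacked, axes=1)  # weighted sum per pixel
    return (score > threshold).astype(np.uint8)
```

For example, with three 2x2 masks and weights (0.5, 0.3, 0.2), only pixels whose weighted vote strictly exceeds 0.5 survive in the fused mask, which can then be scored against a ground-truth mask with `dice_score`.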
dc.description.provenance: Made available in DSpace on 2023-03-19T21:05:29Z (GMT). No. of bitstreams: 1. U0001-2109202212195600.pdf: 6427961 bytes, checksum: dfd9a9af4a27531e97dda49c1ea7f7bf (MD5). Previous issue date: 2022 [en]
dc.description.tableofcontents:
口試委員審定書 i
致謝 ii
摘要 iii
Abstract iv
Contents vi
List of Figures ix
List of Tables xi
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation 2
1.3 Objective 3
1.4 Thesis Organization 4
Chapter 2 Literature Review 6
2.1 Dataset-features-based Method 6
2.1.1 Intensity Histogram 7
2.1.2 Shape-based Features 8
2.1.3 Image Rotating and Cropping 9
2.1.4 CT Image Content 10
2.2 Model-based Method 10
2.2.1 Process Design 11
2.2.2 Model Improvements 12
2.2.2.1 U-Net Introduction 12
2.2.2.2 2D U-Net 13
2.2.2.3 3D U-Net 14
2.2.2.4 2.5D U-Net 14
2.2.2.5 Transfer Learning 16
2.2.3 Loss Function 17
2.3 Summary 18
Chapter 3 Methodology 20
3.1 Training Process 20
3.1.1 First Stage 20
3.1.2 Second Stage 21
3.1.3 Third Stage 24
3.1.4 Fourth Stage 27
3.1.5 Fifth Stage 28
3.2 Data Preprocessing 30
3.2.1 CT Windowing 31
3.2.2 Labeling Alternative 32
3.2.3 Data Augmentation & Filtering 33
3.3 Model Architecture 35
3.4 Loss Function 36
3.4.1 First Stage Loss Function 36
3.4.2 Second Stage Loss Function 37
Chapter 4 Experiments 38
4.1 Dataset 38
4.2 Performance Evaluation 39
4.3 Training and Analysis 40
4.3.1 First Stage Training and Analysis 40
4.3.2 Second Stage Training and Analysis 42
4.3.3 Third Stage Training and Analysis 46
4.3.4 Fourth Stage Training and Analysis 48
4.3.5 Fifth Stage Training and Analysis 52
4.3.6 Lightweight Model Experiment 57
4.3.6.1 EfficientNet-B5 57
4.3.6.2 EfficientNet-B4 59
4.4 Compared Methods 60
Chapter 5 Conclusions and Future Work 65
5.1 Conclusions 65
5.2 Future Work 66
References 68
dc.language.iso: en
dc.subject: 2D U-Net [zh_TW]
dc.subject: 腫瘤 [zh_TW]
dc.subject: 影像分割 [zh_TW]
dc.subject: 醫療影像 [zh_TW]
dc.subject: 腎臟 [zh_TW]
dc.subject: Image Segmentation [en]
dc.subject: 2D U-Net [en]
dc.subject: Tumor [en]
dc.subject: Kidney [en]
dc.subject: Medical Image [en]
dc.title: 應用多階段U-Net優化模型於電腦斷層影像進行腎臟腫瘤分割 [zh_TW]
dc.title: An Optimization-based Multi-Stage U-Net Model on Computer Tomography Images for Kidney Tumor Segmentation [en]
dc.type: Thesis
dc.date.schoolyear: 110-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 李家岩 (Chia-Yen Lee), 孔令傑 (Ling-Chieh Kung), 鍾順平 (Shun-Ping Chung), 呂俊賢 (Chun-Hsien Lu)
dc.subject.keyword: 影像分割, 醫療影像, 腎臟, 腫瘤, 2D U-Net [zh_TW]
dc.subject.keyword: Image Segmentation, Medical Image, Kidney, Tumor, 2D U-Net [en]
dc.relation.page: 76
dc.identifier.doi: 10.6342/NTU202203711
dc.rights.note: 未授權 (not authorized for public access)
dc.date.accepted: 2022-09-23
dc.contributor.author-college: 管理學院 [zh_TW]
dc.contributor.author-dept: 資訊管理學研究所 [zh_TW]
Appears in Collections: 資訊管理學系 (Department of Information Management)

Files in This Item:
File | Size | Format
U0001-2109202212195600.pdf (restricted, not authorized for public access) | 6.28 MB | Adobe PDF


Except where their copyright terms are otherwise specified, all items in the system are protected by copyright, with all rights reserved.

Contact Information
No. 1, Sec. 4, Roosevelt Rd., Da'an Dist., Taipei 10617, Taiwan, R.O.C.
Tel: (02)33662353
Email: ntuetds@ntu.edu.tw
© NTU Library All Rights Reserved