Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90140

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 張瑞峰 | zh_TW |
| dc.contributor.advisor | Ruey-Feng Chang | en |
| dc.contributor.author | 王成中 | zh_TW |
| dc.contributor.author | Cheng-Zhong Wang | en |
| dc.date.accessioned | 2023-09-22T17:35:05Z | - |
| dc.date.available | 2023-11-10 | - |
| dc.date.copyright | 2023-09-22 | - |
| dc.date.issued | 2023 | - |
| dc.date.submitted | 2023-08-11 | - |
| dc.identifier.citation | [1] F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal, "Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries," CA: A Cancer Journal for Clinicians, vol. 68, no. 6, pp. 394-424, 2018.
[2] L. Wang, "Early diagnosis of breast cancer," Sensors, vol. 17, no. 7, p. 1572, 2017.
[3] R. Gerami, S. S. Joni, N. Akhondi, A. Etemadi, M. Fosouli, A. F. Eghbal, and Z. Souri, "A literature review on the imaging methods for breast cancer," International Journal of Physiology, Pathophysiology and Pharmacology, vol. 14, no. 3, p. 171, 2022.
[4] W.-Q. Shen, Y. Guo, W.-E. Ru, C. Li, G.-C. Zhang, N. Liao, and G.-Q. Du, "Using an improved residual network to identify PIK3CA mutation status in breast cancer on ultrasound image," Frontiers in Oncology, vol. 12, p. 850515, 2022.
[5] T. J. Allen et al., "Automated placement of scan and pre-scan volumes for breast MRI using a convolutional neural network," Tomography, vol. 9, no. 3, pp. 967-980, 2023.
[6] R. Tenajas, D. Miraut, C. I. Illana, R. Alonso-Gonzalez, F. Arias-Valcayo, and J. L. Herraiz, "Recent advances in artificial intelligence-assisted ultrasound scanning," Applied Sciences, vol. 13, no. 6, p. 3693, 2023.
[7] C. D'Orsi, L. Bassett, and S. Feig, "Breast imaging reporting and data system (BI-RADS)," in Breast Imaging Atlas, 4th ed. Reston, VA: American College of Radiology, 2018.
[8] U. Raghavendra, A. Gudigar, E. J. Ciaccio, K. H. Ng, W. Y. Chan, K. Rahmat, and U. R. Acharya, "2DSM vs FFDM: A computer-aided diagnosis based comparative study for the early detection of breast cancer," Expert Systems, vol. 38, no. 6, p. e12474, 2021.
[9] S. Park, D. K. Shin, and J. S. Kim, "Components of computer-aided diagnosis for breast ultrasound," Journal of IT Convergence Practice, vol. 1, no. 4, pp. 50-63, 2013.
[10] Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang, "UNet++: A nested U-Net architecture for medical image segmentation," in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, 2018: Springer, pp. 3-11.
[11] M. Z. Alom, C. Yakopcic, M. Hasan, T. M. Taha, and V. K. Asari, "Recurrent residual U-Net for medical image segmentation," Journal of Medical Imaging, vol. 6, no. 1, p. 014006, 2019.
[12] F. Isensee et al., "nnU-Net: Self-adapting framework for U-Net-based medical image segmentation," arXiv preprint arXiv:1809.10486, 2018.
[13] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[14] Z. Liu, H. Mao, C.-Y. Wu, C. Feichtenhofer, T. Darrell, and S. Xie, "A ConvNet for the 2020s," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 11976-11986.
[15] S. Woo, S. Debnath, R. Hu, X. Chen, Z. Liu, I. S. Kweon, and S. Xie, "ConvNeXt V2: Co-designing and scaling ConvNets with masked autoencoders," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 16133-16142.
[16] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580-587.
[17] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," Advances in Neural Information Processing Systems, vol. 28, 2015.
[18] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[19] Z. Liu et al., "Swin Transformer V2: Scaling up capacity and resolution," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 12009-12019.
[20] C. Qin, J. Lin, J. Zeng, Y. Zhai, L. Tian, S. Peng, and F. Li, "Joint dense residual and recurrent attention network for DCE-MRI breast tumor segmentation," Computational Intelligence and Neuroscience, vol. 2022, 2022.
[21] G. Chen, L. Li, J. Zhang, and Y. Dai, "Rethinking the unpretentious U-net for medical ultrasound image segmentation," Pattern Recognition, p. 109728, 2023.
[22] N. S. Punn and S. Agarwal, "RCA-IUnet: A residual cross-spatial attention-guided inception U-Net model for tumor segmentation in breast ultrasound imaging," Machine Vision and Applications, vol. 33, no. 2, p. 27, 2022.
[23] J. Hernández-López and W. Gómez-Flores, "Predicting the BI-RADS lexicon for mammographic masses using hybrid neural models," in 2020 17th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), 2020: IEEE, pp. 1-6.
[24] J. Hernández-López, W. Gómez-Flores, and W. C. de Albuquerque-Pereira, "Modeling of shape attributes of the BI-RADS lexicon for breast lesions based on multi-class classification," in XXVI Brazilian Congress on Biomedical Engineering: CBEB 2018, Armação de Búzios, RJ, Brazil, 21-25 October 2018 (Vol. 2), 2019: Springer, pp. 327-333.
[25] J.-Y. Chiao, K.-Y. Chen, K. Y.-K. Liao, P.-H. Hsieh, G. Zhang, and T.-C. Huang, "Detection and classification the breast tumors using mask R-CNN on sonograms," Medicine, vol. 98, no. 19, 2019.
[26] J. Chowdary, P. Yogarajah, P. Chaurasia, and V. Guruviah, "A multi-task learning framework for automated segmentation and classification of breast tumors from ultrasound images," Ultrasonic Imaging, vol. 44, no. 1, pp. 3-12, 2022.
[27] R. Rouhi, M. Jafari, S. Kasaei, and P. Keshavarzian, "Benign and malignant breast tumors classification based on region growing and CNN segmentation," Expert Systems with Applications, vol. 42, no. 3, pp. 990-1002, 2015.
[28] W.-J. Wu and W. K. Moon, "Ultrasound breast tumor image computer-aided diagnosis with texture and morphological features," Academic Radiology, vol. 15, no. 7, pp. 873-880, 2008.
[29] D.-R. Chen, R.-F. Chang, W.-J. Kuo, M.-C. Chen, and Y.-L. Huang, "Diagnosis of breast tumors with sonographic texture analysis using wavelet transform and neural networks," Ultrasound in Medicine & Biology, vol. 28, no. 10, pp. 1301-1310, 2002.
[30] E. Brempong Asiedu, S. Kornblith, T. Chen, N. Parmar, M. Minderer, and M. Norouzi, "Decoder denoising pretraining for semantic segmentation," arXiv preprint arXiv:2205.11423, 2022, doi: 10.48550/arXiv.2205.11423.
[31] Y. Yan, X.-J. Yao, S.-H. Wang, and Y.-D. Zhang, "A survey of computer-aided tumor diagnosis based on convolutional neural network," Biology, vol. 10, no. 11, p. 1084, 2021. [Online]. Available: https://www.mdpi.com/2079-7737/10/11/1084
[32] O. Oktay et al., "Attention U-Net: Learning where to look for the pancreas," arXiv preprint arXiv:1804.03999, 2018, doi: 10.48550/arXiv.1804.03999.
[33] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[34] A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, "CNN features off-the-shelf: An astounding baseline for recognition," arXiv preprint arXiv:1403.6382, 2014, doi: 10.48550/arXiv.1403.6382.
[35] A. Radford et al., "Learning transferable visual models from natural language supervision," arXiv preprint arXiv:2103.00020, 2021, doi: 10.48550/arXiv.2103.00020.
[36] A. van den Oord, Y. Li, and O. Vinyals, "Representation learning with contrastive predictive coding," arXiv preprint arXiv:1807.03748, 2018, doi: 10.48550/arXiv.1807.03748.
[37] R. Devon Hjelm, A. Fedorov, S. Lavoie-Marchildon, K. Grewal, P. Bachman, A. Trischler, and Y. Bengio, "Learning deep representations by mutual information estimation and maximization," arXiv preprint arXiv:1808.06670, 2018, doi: 10.48550/arXiv.1808.06670.
[38] A. Vaswani et al., "Attention is all you need," arXiv preprint arXiv:1706.03762, 2017, doi: 10.48550/arXiv.1706.03762.
[39] Z. Liu et al., "Swin Transformer: Hierarchical vision transformer using shifted windows," arXiv preprint arXiv:2103.14030, 2021, doi: 10.48550/arXiv.2103.14030.
[40] W. K. Moon, Y.-W. Lee, H.-H. Ke, S. H. Lee, C.-S. Huang, and R.-F. Chang, "Computer-aided diagnosis of breast ultrasound images using ensemble learning from convolutional neural networks," Computer Methods and Programs in Biomedicine, vol. 190, p. 105361, 2020.
[41] E. Lazarus, M. B. Mainiero, B. Schepps, S. L. Koelliker, and L. S. Livingston, "BI-RADS lexicon for US and mammography: Interobserver variability and positive predictive value," Radiology, vol. 239, no. 2, pp. 385-391, 2006.
[42] A. S. Hong, E. L. Rosen, M. S. Soo, and J. A. Baker, "BI-RADS for sonography: Positive and negative predictive values of sonographic features," AJR: American Journal of Roentgenology, vol. 184, no. 4, pp. 1260-1265, 2005.
[43] G. Rahbar et al., "Benign versus malignant solid breast masses: US differentiation," Radiology, vol. 213, no. 3, pp. 889-894, 1999, doi: 10.1148/radiology.213.3.r99dc20889.
[44] J. Shan, S. K. Alam, B. Garra, Y. Zhang, and T. Ahmed, "Computer-aided diagnosis for breast ultrasound using computerized BI-RADS features and machine learning methods," Ultrasound in Medicine & Biology, vol. 42, no. 4, pp. 980-988, 2016.
[45] T. M. Oliveira, T. K. Brasileiro Sant'Anna, F. M. Mauad, J. Elias Jr, and V. F. Muglia, "Breast imaging: Is the sonographic descriptor of orientation valid for magnetic resonance imaging?," Journal of Magnetic Resonance Imaging, vol. 36, no. 6, pp. 1383-1388, 2012.
[46] S. Raza, S. A. Chikarmane, S. S. Neilsen, L. M. Zorn, and R. L. Birdwell, "BI-RADS 3, 4, and 5 lesions: Value of US in management—follow-up and outcome," Radiology, vol. 248, no. 3, pp. 773-781, 2008.
[47] A. T. Stavros, D. Thickman, C. L. Rapp, M. A. Dennis, S. H. Parker, and G. A. Sisney, "Solid breast nodules: Use of sonography to distinguish between benign and malignant lesions," Radiology, vol. 196, no. 1, pp. 123-134, 1995.
[48] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal loss for dense object detection," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2980-2988.
[49] H. Zhang, K. Zu, J. Lu, Y. Zou, and D. Meng, "EPSANet: An efficient pyramid squeeze attention block on convolutional neural network," in Proceedings of the Asian Conference on Computer Vision, 2022, pp. 1161-1177.
[50] K. H. Zou et al., "Statistical validation of image segmentation quality based on a spatial overlap index: Scientific reports," Academic Radiology, vol. 11, no. 2, pp. 178-189, 2004.
[51] L. Gou, S. Wu, J. Yang, H. Yu, and X. Li, "Gaussian guided IoU: A better metric for balanced learning on object detection," IET Computer Vision, vol. 16, no. 6, pp. 556-566, 2022.
[52] D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge, "Comparing images using the Hausdorff distance," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 9, pp. 850-863, 1993.
[53] Q. McNemar, "Note on the sampling error of the difference between correlated proportions or percentages," Psychometrika, vol. 12, no. 2, pp. 153-157, 1947.
[54] E. R. DeLong, D. M. DeLong, and D. L. Clarke-Pearson, "Comparing the areas under two or more correlated receiver operating characteristic curves: A nonparametric approach," Biometrics, pp. 837-845, 1988. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90140 | - |
| dc.description.abstract | 乳癌是一種影響全世界女性的常見癌症。全球癌症統計數據表明,早期發現和診斷乳癌可以顯著提高生存機會,而超音波成像因其成本低、解析度高、非侵入式、無放射性等優點,逐漸成為常用診斷工具中主要的乳癌篩查方法。然而,超音波影像的判讀仍需要廣泛的專業知識,而且主觀評估可能導致診斷差異。
美國放射學會(ACR)提出了乳房影像報告和數據系統(BI-RADS),旨在標準化乳房影像報告,最大限度地減少主觀性並提高一致性。雖然BI-RADS為標準化報告做出了貢獻,但整體標註過程仍然複雜且耗時。
近年來,深度學習技術取得了顯著進步,並擴展到各種類型的電腦視覺任務,包括分割、分類和物件偵測。深度學習技術因具有自動特徵提取和優化技術等顯著優勢,給許多領域帶來了革命性的變化。憑藉上述優勢,將深度學習技術整合到CAD系統中可以提高工作流程效率,有助於更準確、更及時的醫療診斷。在本研究中,我們提出了一個整合的CADx系統,其中包含具有解碼器去噪預訓練的腫瘤分割方法、BI-RADS病理特徵預測和基於Transformer的分類器模型,以簡化報告過程並增強乳房超音波影像的診斷結果。
研究主要可分為四個部分。首先,我們引入一種稱為解碼器去噪預訓練的預訓練方法,來提高腫瘤分割模型在邊界上的準確性。接下來,通過將分割結果(分割的腫瘤圖像和腫瘤形狀圖像)與原始超音波影像整合到RGB通道中來生成融合圖像,以提高診斷準確性。第三,使用修改後的Swin Transformer V2並以融合圖像作為輸入來預測BI-RADS病理特徵,並為每個病理特徵設計不同的預測頭。最後,我們提出了一種Integrated PSA Transformer (IPSAT)架構,它將融合圖像和我們預測的BI-RADS病理特徵作為輸入以產生精確的診斷結果。
在這項研究中,我們使用了來自274名患者的334個腫瘤的數據集來評估我們提出的方法,其中包括147個良性腫瘤和187個惡性腫瘤。在腫瘤分割中,使用解碼器去噪預訓練的Attention U-Net實現了Dice (0.8371)、IoU (0.7424)和Hausdorff distance 95 (25.7770)的分割效能。接下來,我們使用融合圖像代替原始超音波影像,並獲得準確度(83.53%)、靈敏度(88.77%)、特異性(76.87%)、陽性預測值(83.00%)、陰性預測值(84.33%)和AUC (0.8679),在準確度、靈敏度和陰性預測值這些指標中分別比使用超音波影像提高了0.6%、3.21%和3.08%。在BI-RADS病理特徵預測中,我們也做到了準確的預測,形狀、方向、邊緣、異質性和後部特徵的準確率分別為84.13%、79.94%、76.35%、82.63%和73.35%。最後,對於腫瘤診斷,我們通過使用IPSAT並結合預測出來的病理特徵提高了腫瘤診斷的性能,達到準確性(84.13%)、靈敏度(89.30%)、特異性(77.55%)、陽性預測值(83.50%)、陰性預測值(85.07%)和AUC (0.8703)。這些實驗結果表明,我們提出的系統可以簡化標註過程,並有助於提高乳癌診斷的效率和準確性。 | zh_TW |
| dc.description.abstract | Breast cancer is a common cancer that affects women worldwide. Global cancer statistics have shown that early detection and diagnosis of breast cancer can significantly improve the chances of survival, and ultrasound (US) imaging has gradually become a primary breast cancer screening method among common diagnostic tools because of its low cost, high resolution, non-invasiveness, and absence of radiation. Nevertheless, interpreting ultrasound images requires extensive expertise, and subjective assessment can lead to diagnostic differences (reader dependency).
The Breast Imaging Reporting and Data System (BI-RADS), introduced by the American College of Radiology (ACR), aims to standardize breast imaging reporting, minimizing subjectivity and enhancing consistency. While BI-RADS has contributed to standardized reporting, the annotation process remains complex and time-consuming.
In recent years, deep learning (DL) techniques have made significant advances and have expanded to various computer vision tasks, including segmentation, classification, and object detection. DL has brought revolutionary changes to many fields, offering advantages such as automatic feature extraction and mature optimization techniques. With these advantages, integrating DL techniques into computer-aided diagnosis (CAD) systems can improve workflow efficiency and contribute to more accurate and timely medical diagnoses. In this study, we propose an integrated CADx system that combines a tumor segmentation method with decoder denoising pretraining, BI-RADS lexicon prediction, and a transformer-based classifier model to simplify the reporting process and enhance diagnostic outcomes for breast ultrasound images.
The study consists of four main parts. First, we introduce a pretraining approach, Decoder Denoising Pretraining (DDeP), to improve the boundary accuracy of the tumor segmentation model. Next, fused images are generated by integrating the segmentation results (the segmented tumor image and the tumor shape image) with the original US image into the RGB channels to enhance diagnostic accuracy. Third, a modified Swin Transformer V2 with a distinct prediction head for each lexicon takes the fused images as input to predict the BI-RADS lexicons. Lastly, we propose an Integrated PSA Transformer (IPSAT) architecture, which takes the fused images and the predicted BI-RADS lexicons as input to produce precise diagnostic outcomes.
We evaluated the proposed methods on a dataset of 334 tumors (147 benign and 187 malignant) from 274 patients. In tumor segmentation, the Attention U-Net with DDeP achieved a Dice coefficient of 0.8371, an IoU of 0.7424, and a 95th-percentile Hausdorff distance of 25.7770. Next, using fused images instead of the original US images, we achieved an accuracy of 83.53%, a sensitivity of 88.77%, a specificity of 76.87%, a positive predictive value of 83.00%, a negative predictive value of 84.33%, and an AUC of 0.8679, improving accuracy, sensitivity, and negative predictive value by 0.6%, 3.21%, and 3.08%, respectively, over the original US images. In BI-RADS lexicon prediction, we also achieved accurate results: the accuracies for the shape, orientation, margin, heterogeneity, and posterior-feature lexicons were 84.13%, 79.94%, 76.35%, 82.63%, and 73.35%, respectively. Lastly, for tumor diagnosis, using IPSAT together with the predicted lexicons improved performance to an accuracy of 84.13%, a sensitivity of 89.30%, a specificity of 77.55%, a positive predictive value of 83.50%, a negative predictive value of 85.07%, and an AUC of 0.8703. These experimental results show that the proposed system can streamline the annotation process and improve the efficiency and accuracy of breast cancer diagnosis. | en |
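The channel-fusion step described in the abstract above is concrete enough to sketch. The following is a minimal Python/NumPy illustration of stacking the original US image, the segmented tumor image, and the tumor shape (binary mask) image into the three RGB channels; the function name, normalization, and channel order are assumptions made for illustration, not the thesis's actual implementation.

```python
import numpy as np

def make_fused_image(us_image: np.ndarray, tumor_mask: np.ndarray) -> np.ndarray:
    """Sketch of a fused image: original US, segmented tumor, and tumor
    shape stacked as the R, G, and B channels (channel order assumed)."""
    us = us_image.astype(np.float32)
    us = (us - us.min()) / (us.max() - us.min() + 1e-8)  # normalize to [0, 1]
    shape = (tumor_mask > 0).astype(np.float32)          # binary tumor silhouette
    segmented = us * shape                               # gray levels inside the tumor only
    return np.stack([us, segmented, shape], axis=-1)     # H x W x 3 fused image

# Usage with dummy data: a 256x256 "scan" and a sparse random "mask".
fused = make_fused_image(np.random.rand(256, 256), np.random.rand(256, 256) > 0.9)
assert fused.shape == (256, 256, 3)
```

One plausible reading of why this fusion helps is that an RGB-pretrained backbone then sees the raw texture, the isolated tumor region, and its contour simultaneously, rather than having to re-derive the boundary from the gray-scale image alone.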
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-09-22T17:35:05Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2023-09-22T17:35:05Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Certification of the Oral Examination Committee I
Acknowledgements II
Abstract (in Chinese) III
ABSTRACT V
TABLE OF CONTENTS VIII
LIST OF FIGURES XI
LIST OF TABLES XIV
CHAPTER 1. INTRODUCTION 1
CHAPTER 2. MATERIAL 7
CHAPTER 3. METHODS 9
3.1 TUMOR SEGMENTATION 10
3.1.1 Attention U-Net with ResNet-18 11
3.1.2 Decoder Denoising Pretraining (DDeP) Method 14
3.2 TUMOR DIAGNOSIS 19
3.2.1 Swin Transformer and Swin Transformer V2 19
3.2.2 Fusion Image 24
3.3 BI-RADS LEXICONS PREDICTION 25
3.3.1 BI-RADS Lexicons 26
3.3.2 Method for BI-RADS Lexicons Prediction 30
3.4 THE PROPOSED DIAGNOSIS ARCHITECTURE (IPSAT) 31
3.4.1 PSA Module and Modified PSA Module 31
3.4.2 Integrated Pyramid Squeeze Attention Module 34
3.4.3 Integrated Pyramid Squeeze Attention Transformer (IPSAT) 35
CHAPTER 4. EXPERIMENT RESULT 37
4.1 ENVIRONMENT AND EXPERIMENTAL SETTING AND EVALUATION METRICS 37
4.2 COMPARISON OF DIFFERENT MODELS FOR SEGMENTATION 40
4.3 COMPARISON OF DIFFERENT BASE MODELS AND DIFFERENT TYPES OF IMAGES 44
4.4 BI-RADS LEXICONS PREDICTION 50
4.5 THE DIAGNOSIS OF TUMOR 51
4.5.1 The Incorporation of Predicted BI-RADS Lexicons 51
4.5.2 The Diagnosis with IPSAT 55
CHAPTER 5. DISCUSSION AND CONCLUSION 58
REFERENCES 64 | - |
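For reference, the three segmentation metrics reported in the abstract, and covered by the evaluation-metrics section listed above, can be sketched in a few lines of NumPy. This is an illustrative implementation over binary masks, not the thesis's evaluation code: it measures distances over all foreground pixels rather than extracted contours, and it assumes both masks are non-empty. Note that Dice = 2·IoU/(1+IoU) for any pair of masks, so a Dice score is always at least as large as the corresponding IoU.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over Union (Jaccard index) between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return inter / (np.logical_or(a, b).sum() + 1e-8)

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance over foreground pixels
    (published evaluations often use boundary points instead)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)  # all pairwise distances
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))

# Usage: compare a square mask with a shifted copy of itself.
m = np.zeros((64, 64), bool); m[20:40, 20:40] = True
n = np.roll(m, 4, axis=0)
print(dice(m, n), iou(m, n), hd95(m, n))  # Dice >= IoU always holds
```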
| dc.language.iso | en | - |
| dc.subject | BI-RADS病理特徵預測 | zh_TW |
| dc.subject | 腫瘤診斷 | zh_TW |
| dc.subject | 神經網絡 | zh_TW |
| dc.subject | 乳癌 | zh_TW |
| dc.subject | 電腦輔助診斷 | zh_TW |
| dc.subject | 手持型超音波影像 | zh_TW |
| dc.subject | handheld ultrasound imaging | en |
| dc.subject | computer-aided diagnosis | en |
| dc.subject | neural network | en |
| dc.subject | BI-RADS lexicons prediction | en |
| dc.subject | tumor diagnosis | en |
| dc.subject | breast cancer | en |
| dc.title | 引入PSA注意力機制強化Transformer網路於乳房超音波診斷 | zh_TW |
| dc.title | IPSAT: Integrated PSA Transformer for Breast Ultrasound Diagnosis | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 111-2 | - |
| dc.description.degree | Master | - |
| dc.contributor.oralexamcommittee | 羅崇銘;陳啓禎 | zh_TW |
| dc.contributor.oralexamcommittee | Chung-Ming Lo;Chii-Jen Chen | en |
| dc.subject.keyword | 乳癌,手持型超音波影像,電腦輔助診斷,神經網絡,BI-RADS病理特徵預測,腫瘤診斷 | zh_TW |
| dc.subject.keyword | breast cancer, handheld ultrasound imaging, computer-aided diagnosis, neural network, BI-RADS lexicons prediction, tumor diagnosis | en |
| dc.relation.page | 68 | - |
| dc.identifier.doi | 10.6342/NTU202304117 | - |
| dc.rights.note | Authorized for release (access restricted to campus) | - |
| dc.date.accepted | 2023-08-13 | - |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
| dc.contributor.author-dept | Department of Computer Science and Information Engineering | - |
| dc.date.embargo-lift | 2025-08-11 | - |
| Appears in Collections: | Department of Computer Science and Information Engineering |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-111-2.pdf (access restricted to NTU campus IPs; use the VPN service for off-campus access) | 3.11 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
