Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93030

Full metadata record
dc.contributor.advisor: 藍俊宏 (zh_TW)
dc.contributor.advisor: Jakey Blue (en)
dc.contributor.author: 林少穎 (zh_TW)
dc.contributor.author: Shao-Ying Lin (en)
dc.date.accessioned: 2024-07-12T16:21:30Z
dc.date.available: 2024-07-13
dc.date.copyright: 2024-07-12
dc.date.issued: 2024
dc.date.submitted: 2024-07-12
dc.identifier.citation:
Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., & Süsstrunk, S. (2012). SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11), 2274-2282.
Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., & Kim, B. (2018). Sanity checks for saliency maps. Advances in Neural Information Processing Systems, 31.
Ali, S., Abuhmed, T., El-Sappagh, S., Muhammad, K., Alonso-Moral, J. M., Confalonieri, R., ... & Herrera, F. (2023). Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Information Fusion, 99, 101805.
Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. (1983). Classification and Regression Trees.
Bosch, A., Zisserman, A., & Munoz, X. (2007). Image classification using random forests and ferns. In 2007 IEEE 11th International Conference on Computer Vision (pp. 1-8). IEEE.
Cox, D. R. (1958). The regression analysis of binary sequences. Journal of the Royal Statistical Society Series B: Statistical Methodology, 20(2), 215-232.
Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009, June). ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (pp. 248-255). IEEE.
Doran, J. (1967). Book review: Experiments in Induction, by Earl B. Hunt, Janet Marin, and Phillip J. Stone, 1966, 247 pp. (New York and London: Academic Press, 76s.). The Computer Journal, 10, 299.
Du, W., & Zhan, Z. (2002). Building decision tree classifier on private data.
Forgy, E. W. (1965). Cluster analysis of multivariate data: efficiency versus interpretability of classifications. Biometrics, 21, 768-769.
Ghorbani, A., Wexler, J., Zou, J. Y., & Kim, B. (2019). Towards automatic concept-based explanations. Advances in Neural Information Processing Systems, 32.
Gunning, D., & Aha, D. (2019). DARPA’s explainable artificial intelligence (XAI) program. AI Magazine, 40(2), 44-58.
Ho, T. K. (1995, August). Random decision forests. In Proceedings of 3rd International Conference on Document Analysis and Recognition (Vol. 1, pp. 278-282). IEEE.
Khaleel, M., Tavanapong, W., Wong, J., Oh, J., & De Groen, P. (2021, June). Hierarchical visual concept interpretation for medical image classification. In 2021 IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS) (pp. 25-30). IEEE.
Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J., & Viegas, F. (2018, July). Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In International Conference on Machine Learning (pp. 2668-2677). PMLR.
Kindermans, P. J., Hooker, S., Adebayo, J., Alber, M., Schütt, K. T., Dähne, S., Erhan, D., & Kim, B. (2019). The (un)reliability of saliency methods. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 267-280.
LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
Letham, B., Rudin, C., McCormick, T. H., & Madigan, D. (2015). Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model. arXiv preprint arXiv:1511.01644.
Milletari, F., Navab, N., & Ahmadi, S. A. (2016, October). V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV) (pp. 565-571). IEEE.
Ng, A., Jordan, M., & Weiss, Y. (2001). On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 14.
Petsiuk, V., Das, A., & Saenko, K. (2018). RISE: Randomized input sampling for explanation of black-box models. arXiv preprint arXiv:1806.07421.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144).
Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III (pp. 234-241). Springer International Publishing.
Rousseeuw, P. J. (1987). Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20, 53-65.
Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (pp. 618-626).
Subasi, A., & Ercelebi, E. (2005). Classification of EEG signals using neural network and logistic regression. Computer Methods and Programs in Biomedicine, 78(2), 87-99.
Smilkov, D., Thorat, N., Kim, B., Viégas, F., & Wattenberg, M. (2017). Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825.
Sundararajan, M., Taly, A., & Yan, Q. (2017, July). Axiomatic attribution for deep networks. In International Conference on Machine Learning (pp. 3319-3328). PMLR.
Turek, M. (2018). Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency. Retrieved from https://www.darpa.mil/program/explainable-artificial-intelligence
Wang, D., Cui, X., & Wang, Z. J. (2020). CHAIN: Concept-harmonized hierarchical inference interpretation of deep convolutional neural networks. arXiv preprint arXiv:2002.01660.
Wang, D., Cui, X., Chen, X., Ward, R., & Wang, Z. J. (2021). Interpreting Bottom-Up Decision-Making of CNNs via Hierarchical Inference. IEEE Transactions on Image Processing, 30, 6701-6714.
Zeiler, M. D., & Fergus, R. (2014). Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I 13 (pp. 818-833). Springer International Publishing.
Zhang, Q., Cao, R., Shi, F., Wu, Y. N., & Zhu, S. C. (2018, April). Interpreting CNN knowledge via an explanatory graph. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 32, No. 1).
Zhao, S., Wang, Y., Yang, Z., & Cai, D. (2019). Region mutual information loss for semantic segmentation. Advances in Neural Information Processing Systems, 32.
Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. (2016). Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2921-2929).
邱秉誠。(2022)。利用圖像概念分割之影像分類器可釋性萃取 [未發表碩士論文]。國立台灣大學。
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93030
dc.description.abstract (zh_TW):
隨著人工智慧的持續發展,深度學習模型在各個領域展現出驚人的應用潛力,尤其在影像辨識的相關研究中,深度神經網路能夠從資料中自動萃取具潛力的特徵,且於各式預測任務上表現優異。然而,類神經網路模型被視為黑盒子模型、缺乏可解釋性之能力。對於許多應用情境而言,了解模型的推理過程至關重要,例如在製造業中,產品品質的掃描影像儘管可快速透過深度神經網路模型辨識瑕疵,卻無法透視模型內部,瞭解其推論的焦點所在。因此即使模型於測試階段表現出色,缺乏可解釋性導致模型難以取得使用者信任,也限制其在實際場景的運用。
目前針對圖像分類器的解釋方法研究,主要聚焦於像素級別的解釋,本論文則採用基於概念級別的可解釋方法結構,概念可視為像素的集合,並且由深度神經網路模型中萃取而得,以不同尺寸的概念作為特徵,評估何種概念對模型的貢獻更為重要,同時綜合概念相似度指標,解析概念彼此之間的相互關係,進而建立屬於概念的階層架構,最終成功檢視模型在進行預測時,概念的大小與順序如何作用,本研究也藉由實務驗證,確立萃取的概念以及概念層級與人類直觀相符,提升模型的透明度,也有效探究概念在模型決策過程的貢獻度與偏好的順序性。
dc.description.abstract (en):
As AI continues to evolve, deep learning models demonstrate remarkable potential in various fields, particularly in computer vision. Deep Neural Networks (DNNs) can automatically extract promising features from data and perform impressively in prediction tasks. However, DNNs are often considered "black box" models due to their lack of interpretability. In many practical scenarios, understanding a model's reasoning process is crucial. For example, in the manufacturing domain, product quality scans can be rapidly screened for defects using DNNs, yet the inability to see inside the model and understand where its reasoning focuses hampers trust and limits practical application, despite excellent performance in the training and testing phases.
Current research on interpretability methods for image classifiers focuses primarily on pixel-level explanations. This study instead adopts a concept-level explanatory framework, where concepts are viewed as collections of pixels extracted from the layers inside a DNN. It evaluates which concepts contribute most to the model by using concepts of different sizes as features, and it integrates a concept similarity index to analyze the interrelationships between concepts, thereby establishing a hierarchical structure of concepts. Ultimately, this research examines how the size and ordering of concepts shape predictions. Practical verification confirms that the extracted concepts and their hierarchical levels align with human intuition, enhancing model transparency and revealing the contribution and preferred ordering of concepts in the model's decision-making process.
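For illustration only, the following minimal Python sketch mirrors the pipeline the abstract outlines: segment images into pixel regions, embed each region with a feature extractor, cluster the embeddings into concepts, and relate the concepts through a similarity measure to form a hierarchy. The specific choices here (SLIC superpixels, KMeans, cosine similarity, average linkage) and the names `feature_extractor` and `dummy_extractor` are assumptions made for this sketch, not the thesis's actual implementation.

```python
# Minimal, illustrative sketch of a concept-level explanation pipeline.
# Assumptions: SLIC for segmentation, KMeans for concept discovery, cosine
# similarity + average linkage for the concept hierarchy; the thesis's own
# method may differ in every one of these choices.
import numpy as np
from skimage.segmentation import slic              # SLIC superpixels (Achanta et al., 2012)
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity
from scipy.cluster.hierarchy import linkage


def extract_segments(image, n_segments=50):
    """Split an H x W x 3 image into superpixel masks (candidate pixel collections)."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    return [labels == k for k in np.unique(labels)]


def discover_concepts(images, feature_extractor, n_concepts=5):
    """Embed every segment and group the embeddings into 'concepts' by clustering."""
    embeddings, masks = [], []
    for img in images:
        for mask in extract_segments(img):
            patch = img * mask[..., None]           # keep only the segment's pixels
            embeddings.append(feature_extractor(patch))
            masks.append(mask)
    embeddings = np.vstack(embeddings)
    concept_ids = KMeans(n_clusters=n_concepts, n_init=10).fit_predict(embeddings)
    return embeddings, concept_ids, masks


def concept_hierarchy(embeddings, concept_ids):
    """Link concept centroids into a hierarchy via a pairwise similarity measure."""
    centroids = np.vstack([embeddings[concept_ids == c].mean(axis=0)
                           for c in np.unique(concept_ids)])
    distance = 1.0 - cosine_similarity(centroids)   # similarity -> distance
    condensed = distance[np.triu_indices_from(distance, k=1)]
    return linkage(condensed, method="average")     # hierarchical structure of concepts


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_images = [rng.random((64, 64, 3)) for _ in range(4)]
    # Placeholder for activations taken from an intermediate CNN layer.
    dummy_extractor = lambda patch: patch.mean(axis=(0, 1))
    emb, ids, _ = discover_concepts(toy_images, dummy_extractor, n_concepts=3)
    print(concept_hierarchy(emb, ids))              # linkage matrix over the concepts
```

In the thesis the embedding step would use activations from inside the trained classifier rather than the toy `dummy_extractor`; the sketch only shows how segmentation, clustering, and a similarity index can be composed into a concept hierarchy.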
dc.description.provenance (en): Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-07-12T16:21:30Z. No. of bitstreams: 0
dc.description.provenance (en): Made available in DSpace on 2024-07-12T16:21:30Z (GMT). No. of bitstreams: 0
dc.description.tableofcontents:
摘要 i
Abstract ii
目次 iii
圖次 v
表次 vii
第一章 緒論 1
1.1 研究背景 1
1.2 研究動機與目的 3
1.3 研究架構 5
第二章 文獻探討 6
2.1 圖像分類模型 6
2.1.1 傳統機器學習模型 6
2.1.2 卷積神經網路 7
2.2 人工智慧可解釋性 10
2.2.1 基於單點特徵之可釋性 11
2.2.2 基於概念之可釋性 15
2.2.3 概念相似度衡量指標 20
2.3 模型歸納之解釋性發展契機 23
第三章 圖像概念可釋性解析 25
3.1 圖像概念萃取 28
3.2 概念層級建立 30
3.3 概念推論解析 32
3.3.1 推論順序 32
3.3.2 貢獻度 33
第四章 案例探討 34
4.1 資料集介紹 34
4.2 卷積神經網路訓練 36
4.3 萃取之概念分析 37
4.4 建構概念層級 43
4.5 概念順序探討 45
4.5.1 獲取概念順序 45
4.5.2 跨類別之概念影響 50
第五章 結論與未來展望 53
5.1 研究結論 53
5.2 未來展望 55
參考文獻 57
附錄A 61
dc.language.iso: zh_TW
dc.subject: 概念層級萃取 (zh_TW)
dc.subject: 概念相似度 (zh_TW)
dc.subject: 模型可解釋性 (zh_TW)
dc.subject: 卷積神經網路 (zh_TW)
dc.subject: 概念順序 (zh_TW)
dc.subject: Convolutional Neural Networks (en)
dc.subject: Explainable AI (XAI) (en)
dc.subject: Concept Sequence (en)
dc.subject: Concept Similarity Index (en)
dc.subject: Concept Hierarchical Extraction (en)
dc.title: 圖像概念萃取及其順序解釋性框架之發展 (zh_TW)
dc.title: An Explainable Framework of Concept Extraction and Its Sequential Interpretability for Image Classification Models (en)
dc.type: Thesis
dc.date.schoolyear: 112-2
dc.description.degree: 碩士
dc.contributor.oralexamcommittee: 許嘉裕; 洪子晏 (zh_TW)
dc.contributor.oralexamcommittee: Chia-Yu Hsu; Tzu-Yen Hong (en)
dc.subject.keyword: 卷積神經網路, 模型可解釋性, 概念順序, 概念相似度, 概念層級萃取 (zh_TW)
dc.subject.keyword: Convolutional Neural Networks, Explainable AI (XAI), Concept Sequence, Concept Similarity Index, Concept Hierarchical Extraction (en)
dc.relation.page: 64
dc.identifier.doi: 10.6342/NTU202401672
dc.rights.note: 同意授權(限校園內公開)
dc.date.accepted: 2024-07-12
dc.contributor.author-college: 共同教育中心
dc.contributor.author-dept: 統計碩士學位學程
dc.date.embargo-lift: 2029-07-11
Appears in Collections: 統計碩士學位學程

Files in this item:
File: ntu-112-2.pdf (restricted; not authorized for public access)
Size: 10.22 MB
Format: Adobe PDF

All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
