Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93030
Title: | An Explainable Framework of Concept Extraction and Its Sequential Interpretability for Image Classification Models |
Author: | 林少穎 Shao-Ying Lin |
Advisor: | 藍俊宏 Jakey Blue |
Keywords: | Convolutional Neural Networks, Explainable AI (XAI), Concept Sequence, Concept Similarity Index, Concept Hierarchical Extraction |
Publication Year: | 2024 |
Degree: | Master's |
Abstract: | As AI continues to evolve, deep learning models demonstrate remarkable potential across many fields, particularly in computer vision. Deep Neural Networks (DNNs) can automatically extract potentially significant features from data and perform impressively on prediction tasks. However, DNNs are often considered "black box" models because they lack interpretability. In many practical scenarios, understanding the model's reasoning process is crucial: in manufacturing, for example, product-quality scan images can be rapidly screened for defects by a DNN, yet the inability to look inside the model and see where its reasoning focuses hampers user trust and limits real-world deployment, even when the model performs well in testing. Current research on interpretability methods for image classifiers focuses primarily on pixel-level explanations. This thesis instead adopts a concept-level explanatory framework, in which a concept is a collection of pixels extracted from the layers of a DNN. Using concepts of different sizes as features, it evaluates which concepts contribute more to the model's predictions, and it integrates a concept similarity index to analyze the interrelationships among concepts and thereby build a concept hierarchy. Ultimately, this work examines how the size and order of concepts act during prediction. Practical validation confirms that the extracted concepts and their hierarchy align with human intuition, improving model transparency and effectively revealing the contribution and preferred ordering of concepts in the model's decision-making process. (A minimal illustrative sketch of such a concept-extraction pipeline appears at the end of this record.) |
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93030 |
DOI: | 10.6342/NTU202401672 |
Full-text License: | Authorized (campus-only access) |
Appears in Collections: | Master's Program in Statistics |
Files in This Item:
File | Size | Format
---|---|---
ntu-112-2.pdf (not currently authorized for public access) | 10.22 MB | Adobe PDF
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
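The abstract above outlines a concept-level explanation pipeline: pixel groups extracted from a DNN's layers are organized into concepts, a concept similarity index relates them, and a hierarchy is built over them. The following is a minimal, hypothetical sketch of that general flow, not the thesis' actual method: the segment embeddings are randomly generated stand-ins for CNN activations of image segments, and the cosine-based similarity and average-linkage hierarchy are illustrative assumptions.

```python
# Hypothetical sketch: segments -> concepts -> similarity index -> hierarchy.
# All data and parameter choices here are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)

# Stand-in for activation embeddings of image segments (e.g. superpixels)
# taken from an intermediate CNN layer: 200 segments, 64-dimensional each.
segment_embeddings = rng.normal(size=(200, 64))

# 1) Group segments into "concepts" by clustering their embeddings.
n_concepts = 8
kmeans = KMeans(n_clusters=n_concepts, n_init=10, random_state=0)
concept_labels = kmeans.fit_predict(segment_embeddings)
concept_prototypes = kmeans.cluster_centers_  # one prototype vector per concept

# 2) A simple concept similarity index: cosine similarity between prototypes.
concept_similarity = cosine_similarity(concept_prototypes)

# 3) A concept hierarchy: agglomerative clustering over the prototypes.
concept_hierarchy = linkage(concept_prototypes, method="average", metric="cosine")

print("Segments per concept:", np.bincount(concept_labels))
print("Concept similarity matrix:\n", np.round(concept_similarity, 2))
print("Linkage matrix (concept hierarchy):\n", np.round(concept_hierarchy, 2))
```

In practice the segment embeddings would come from a trained image classifier, and the concept sizes, similarity index, and ordering analysis would follow the thesis' own definitions; this sketch only illustrates how the pieces of such a pipeline fit together.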