Please use this Handle URI to cite this document:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85006
Title: Explainability Extraction of Image Classification based on Concept Segmentation (利用圖像概念分割之影像分類器可釋性萃取)
Author: Bing-Cheng Qiu (邱秉誠)
Advisor: Jakey Blue (藍俊宏)
Keywords: Deep Neural Networks; Image Classification; Explainable AI (XAI); Image Concept Extraction; Model Explainability
Year of publication: 2022
Degree: Master's
Abstract: Since the breakthroughs that overcame the limitations of artificial neural networks (ANNs) and returned them to favor in academia and industry, image recognition has advanced rapidly. In particular, thanks to greatly improved computing hardware, deep neural networks (DNNs) are now routinely used for image classification and recognition. DNNs excel at finding intricate patterns in data and automatically extracting hidden features, and can therefore conquer prediction tasks that were previously intractable. However, DNNs are often regarded as incomprehensible black boxes: once a model is trained, its internal inference mechanism cannot be inspected. If that mechanism deviates from, or even contradicts, human cognition, the model may fail to support decision-making in certain application domains and may even cause harm; even with high predictive accuracy, its lack of explainability reduces its practical value.

For the interpretation of image classifiers, mainstream explainability methods focus on the pixel level: significant pixels, which may be scattered sparsely, are aggregated to explain the model. This thesis develops an explanation framework based on image concepts, where each extracted concept remains a contiguous block of neighboring pixels. A self-explainable model that uses these concepts as features is then built to approximate the black-box classifier. Finally, concept-importance rankings across the predicted classes are compared with intuitive human inference logic to check whether the classifier's reasoning is plausible, thereby increasing confidence in adopting DNN-based image classification in practice. Case studies verify that the proposed method extracts intuitive concepts and explains the black-box classification results effectively.
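The pipeline the abstract describes (partition an image into contiguous concept regions, perturb those regions, fit an explainable surrogate on concept presence, and rank concept importance) can be sketched roughly as follows. This is not the thesis's actual implementation: the grid-based segmentation, the toy black-box scorer, and all function names are illustrative assumptions standing in for the concept-segmentation and surrogate models developed in the thesis.

```python
import numpy as np

def grid_concepts(image, block=4):
    """Toy concept segmentation: partition the image into contiguous
    square blocks so each concept is a region of neighboring pixels."""
    h, w = image.shape
    seg = np.zeros((h, w), dtype=int)
    blocks_per_row = w // block
    for i in range(h):
        for j in range(w):
            seg[i, j] = (i // block) * blocks_per_row + (j // block)
    return seg

def concept_importance(image, segments, black_box, n_samples=200, seed=0):
    """Fit a linear surrogate of the black box over binary concept masks
    and return one importance weight per concept."""
    rng = np.random.default_rng(seed)
    n_concepts = int(segments.max()) + 1
    # Random on/off masks over concepts; absent concepts are zeroed out.
    masks = rng.integers(0, 2, size=(n_samples, n_concepts))
    ys = np.array([black_box(image * m[segments]) for m in masks],
                  dtype=float)
    # Least-squares linear surrogate with an intercept column.
    X = np.hstack([masks, np.ones((n_samples, 1))])
    w, *_ = np.linalg.lstsq(X, ys, rcond=None)
    return w[:-1]  # drop the intercept; one weight per concept

# Toy black box: score depends only on the top-left quadrant,
# so the top-left concept should dominate the importance ranking.
img = np.ones((8, 8))
seg = grid_concepts(img, block=4)          # 4 concepts on an 8x8 image
score = lambda x: float(x[:4, :4].mean())
weights = concept_importance(img, seg, score)
ranking = np.argsort(weights)[::-1]        # concepts, most important first
```

In a realistic setting the grid partition would be replaced by a semantic segmentation (e.g. superpixels), and the linear surrogate by whatever self-explainable model the framework prescribes; the perturb-and-fit structure stays the same.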
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85006
DOI: 10.6342/NTU202202524
Full-text license: Authorized (campus-only access)
Electronic full text available from: 2027-08-17
Appears in collections: Graduate Institute of Industrial Engineering
Files in this item:
File | Size | Format |
---|---|---|
ntu-110-2.pdf (not currently authorized for public access) | 4.3 MB | Adobe PDF | View/Open
Unless otherwise indicated, all documents in this repository are protected by copyright, with all rights reserved.