NTU Theses and Dissertations Repository
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85006
Title: 利用圖像概念分割之影像分類器可釋性萃取
Explainability Extraction of Image Classification based on Concept Segmentation
Authors: 邱秉誠
Bing-Cheng Qiu
Advisor: 藍俊宏
Jakey Blue
Keyword: Deep Neural Net, Image Classification, Model Explainability, Explainable AI (XAI), Image Concept Extraction
Publication Year: 2022
Degree: Master's
Abstract: Since breakthroughs resolved the long-standing limitations of Artificial Neural Networks (ANNs), AI techniques have returned to center stage in academia and industry, and image recognition has advanced rapidly, especially thanks to greatly improved computing hardware. Deep Neural Nets (DNNs) excel at finding intricate patterns in data and automatically extracting hidden features, so prediction tasks that were once intractable can now be solved. However, DNNs are often regarded as incomprehensible black boxes: once trained, their internal inference mechanism cannot be inspected. If that mechanism deviates from, or even contradicts, human cognition, the model may fail to support decision-making in specific application domains or even cause harm; despite strong predictive performance, this lack of interpretability reduces its practical value.

For explaining image classifiers, mainstream methods focus on pixel-level interpretation: significant pixels, which may be scattered sparsely, are aggregated to explain the model. This thesis develops an explanatory framework based on image concepts, where each extracted concept remains a contiguous block of neighboring pixels. A self-explainable model that uses these concepts as features is built to approximate the black-box classifier. Concept importance rankings across the predicted classes are then combined to check whether the classifier's inference rules match human judgment, thereby increasing confidence in adopting DNN-based image classification in practice. Case studies verify that the extracted concepts are intuitive and that the proposed method explains image classification results effectively.
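The concept-based explanation pipeline summarized in the abstract (contiguous concept extraction, a concept-feature surrogate that approximates the black box, and concept importance ranking) can be illustrated with a minimal sketch. The code below is only an assumption-laden illustration of that general idea, not the thesis's actual algorithm: it uses SLIC superpixels as stand-in "concepts", a hypothetical predict_proba callable standing in for the trained classifier, and a ridge-regression surrogate; the function name explain_with_concepts and all parameter values are invented for this example.

# Minimal, illustrative sketch (not the thesis's exact method): segment an
# image into contiguous "concept" regions, perturb them, and fit a simple
# interpretable surrogate whose coefficients rank concept importance for a
# class predicted by a black-box classifier.
import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import Ridge

def explain_with_concepts(image, predict_proba, target_class,
                          n_concepts=20, n_samples=300, seed=0):
    """Rank concept (segment) importance for `target_class`.

    image         : HxWx3 float array in [0, 1]
    predict_proba : callable mapping a batch of images to an
                    (n_samples, n_classes) probability array
                    (assumed stand-in for the trained black-box model)
    """
    rng = np.random.default_rng(seed)

    # 1) Extract concepts as spatially contiguous superpixel blocks.
    segments = slic(image, n_segments=n_concepts, compactness=10, start_label=0)
    concept_ids = np.unique(segments)

    # 2) Randomly switch concepts on/off and record the black-box output.
    masks = rng.integers(0, 2, size=(n_samples, len(concept_ids)))
    baseline = image.mean(axis=(0, 1))  # grey-out value for "off" concepts
    perturbed = np.empty((n_samples,) + image.shape, dtype=image.dtype)
    for i, row in enumerate(masks):
        img = image.copy()
        for cid, keep in zip(concept_ids, row):
            if not keep:
                img[segments == cid] = baseline
        perturbed[i] = img
    probs = predict_proba(perturbed)[:, target_class]

    # 3) Fit an interpretable surrogate on concept presence/absence features;
    #    its coefficients approximate each concept's contribution.
    surrogate = Ridge(alpha=1.0).fit(masks, probs)
    ranking = np.argsort(-surrogate.coef_)
    return segments, concept_ids[ranking], surrogate.coef_[ranking]

In this sketch, any classifier wrapped so that predict_proba(batch) returns class probabilities could be plugged in; the returned ranking lists concept regions by their estimated contribution to target_class, which is the kind of per-class concept importance ordering the abstract describes.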
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85006
DOI: 10.6342/NTU202202524
Fulltext Rights: Authorized (access restricted to campus)
Embargo Lift Date: 2027-08-17
Appears in Collections: Institute of Industrial Engineering (工業工程學研究所)

Files in This Item:
File: ntu-110-2.pdf (Restricted Access)
Size: 4.3 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
