Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80436
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 王鈺強 (Yu-Chiang Wang) | -
dc.contributor.author | Yuan-Chia Cheng | en
dc.contributor.author | 鄭元嘉 | zh_TW
dc.date.accessioned | 2022-11-24T03:06:38Z | -
dc.date.available | 2022-02-21 | -
dc.date.available | 2022-11-24T03:06:38Z | -
dc.date.copyright | 2022-02-21 | -
dc.date.issued | 2021 | -
dc.date.submitted | 2022-01-21 | -
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80436 | -
dc.description.abstract | To understand how deep learning models arrive at their classification predictions, much recent research has turned to model interpretability. However, most existing methods cannot be applied directly to semantic segmentation, let alone provide interpretability for semantic segmentation with multiple annotators. For the multi-annotator image semantic segmentation task, this work aims to realize an interpretable model by answering two questions: "who" (which annotator's labels) influences the prediction, and "why" the model is influenced by that annotator. We propose the Tendency-and-Assignment Explainable (TAX) training framework, which enables the model to provide explanations at two levels: the annotator and the reason for the assignment. Under TAX, multiple groups of convolution kernels learn the labeling tendencies (labeling preferences) of different annotators, while a prototype bank exploits image information to guide the learning of these kernel groups. Experimental results show that TAX can be combined with state-of-the-art network architectures to achieve strong semantic segmentation performance while providing satisfactory interpretability with respect to both the annotator and the assignment. | zh_TW
dc.description.provenance | Made available in DSpace on 2022-11-24T03:06:38Z (GMT). No. of bitstreams: 1. U0001-2012202117001200.pdf: 13648156 bytes, checksum: 50555ff75cadc6799cb8bf2b7577bf78 (MD5). Previous issue date: 2021 | en
dc.description.tableofcontents | 中文摘要 i; Abstract iii; List of Figures vii; List of Tables ix; 1 Introduction 1; 2 Related Work 5; 2.1 Semantic Segmentation 5; 2.2 Interpretable Deep Models 6; 3 Proposed Method 9; 3.1 Problem Formulation and Method Overview 9; 3.2 Learning to Describe Labeling Tendencies 10; 3.3 Learning to Assign Labeling Tendencies 12; 3.4 Visual Interpretability during Inference 15; 4 Experiments 17; 4.1 Datasets and Implementation Details 17; 4.1.1 Datasets 17; 4.1.2 Implementation Details 19; 4.2 Case Studies for Interpretability 19; 4.3 Quantitative Analyses 23; 4.3.1 Segmentation Performance 23; 4.3.2 Annotator and Assignment-Level Explanations 24; 5 Conclusion 27; Reference 29 | -
dc.language.iso | en | -
dc.subject | 電腦視覺 (computer vision) | zh_TW
dc.subject | 多標注者 (multi-annotator) | zh_TW
dc.subject | 可解釋性模型 (interpretable models) | zh_TW
dc.subject | 語意分割 (semantic segmentation) | zh_TW
dc.subject | 深度學習 (deep learning) | zh_TW
dc.subject | interpretability | en
dc.subject | semantic segmentation | en
dc.subject | deep learning | en
dc.subject | multi-annotator | en
dc.subject | computer vision | en
dc.title | 可解釋性深度學習於多標注者圖像語意分割 (Interpretable Deep Learning for Multi-Annotator Image Semantic Segmentation) | zh_TW
dc.title | Learning Interpretable Semantic Segmentation from Multi-Annotators | en
dc.date.schoolyear | 110-1 | -
dc.description.degree | 碩士 (Master's) | -
dc.contributor.oralexamcommittee | 邱維辰 (Hsin-Tsai Liu), 陳祝嵩 (Chih-Yang Tseng) | -
dc.subject.keyword | 深度學習, 電腦視覺, 語意分割, 可解釋性模型, 多標注者 | zh_TW
dc.subject.keyword | deep learning, computer vision, semantic segmentation, interpretability, multi-annotator | en
dc.relation.page | 33 | -
dc.identifier.doi | 10.6342/NTU202104548 | -
dc.rights.note | 同意授權(限校園內公開) (authorized; access restricted to campus) | -
dc.date.accepted | 2022-01-22 | -
dc.contributor.author-college | 電機資訊學院 (College of Electrical Engineering and Computer Science) | zh_TW
dc.contributor.author-dept | 電信工程學研究所 (Graduate Institute of Communication Engineering) | zh_TW
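The abstract above outlines the TAX idea at a high level: annotator-specific groups of convolution kernels capture labeling tendencies, and a prototype bank matched against image features decides which tendency applies to a given input. The snippet below is a minimal, hypothetical PyTorch sketch of that two-level structure; the module names, sizes, and the pooling-plus-softmax assignment rule are assumptions for illustration only, not the thesis implementation.

```python
# Minimal sketch (assumptions only, not the TAX implementation): one small
# convolutional "tendency" head per annotator plus a learnable prototype bank
# whose similarity to pooled image features produces assignment weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TendencyAssignmentSketch(nn.Module):
    def __init__(self, in_channels=64, num_classes=2, num_annotators=3, proto_dim=64):
        super().__init__()
        # One lightweight head (group of conv kernels) per annotator's tendency.
        self.tendency_heads = nn.ModuleList(
            nn.Conv2d(in_channels, num_classes, kernel_size=3, padding=1)
            for _ in range(num_annotators)
        )
        # Prototype bank: one learnable prototype vector per annotator.
        self.prototype_bank = nn.Parameter(torch.randn(num_annotators, proto_dim))
        self.feature_proj = nn.Linear(in_channels, proto_dim)

    def forward(self, features):
        # features: (B, C, H, W) backbone features from any segmentation encoder.
        pooled = features.mean(dim=(2, 3))                       # (B, C)
        query = F.normalize(self.feature_proj(pooled), dim=-1)   # (B, D)
        protos = F.normalize(self.prototype_bank, dim=-1)        # (A, D)
        assignment = torch.softmax(query @ protos.t(), dim=-1)   # (B, A): "who"
        # Per-annotator segmentation logits, mixed by the assignment weights.
        per_annotator = torch.stack(
            [head(features) for head in self.tendency_heads], dim=1)  # (B, A, K, H, W)
        logits = (assignment[:, :, None, None, None] * per_annotator).sum(dim=1)
        return logits, assignment

if __name__ == "__main__":
    # Usage example with random features standing in for a backbone output.
    model = TendencyAssignmentSketch()
    feats = torch.randn(2, 64, 32, 32)
    logits, assignment = model(feats)
    print(logits.shape, assignment.shape)  # (2, 2, 32, 32) and (2, 3)
```

In this sketch the assignment weights supply the "who" explanation and the prototype similarities motivate the "why"; the actual framework is described in Chapter 3 of the thesis (see the table of contents above).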
Appears in Collections: 電信工程學研究所 (Graduate Institute of Communication Engineering)

Files in This Item:
File | Size | Format
U0001-2012202117001200.pdf | 13.33 MB | Adobe PDF
Access restricted to NTU campus IP addresses (off-campus users, please connect via the library VPN service).


Except where otherwise noted, all items in this repository are protected by copyright, with all rights reserved.
