Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80851

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 徐宏民(Winston H. Hsu) | |
| dc.contributor.author | Guan-Rong Lu | en |
| dc.contributor.author | 呂冠蓉 | zh_TW |
| dc.date.accessioned | 2022-11-24T03:19:04Z | - |
| dc.date.available | 2021-11-05 | |
| dc.date.available | 2022-11-24T03:19:04Z | - |
| dc.date.copyright | 2021-11-05 | |
| dc.date.issued | 2021 | |
| dc.date.submitted | 2021-10-06 | |
| dc.identifier.citation | [1] D. Amodei, C. Olah, J. Steinhardt, P. F. Christiano, J. Schulman, and D. Mané. Concrete problems in AI safety. CoRR, abs/1606.06565, 2016. [2] V. Badrinarayanan, A. Kendall, and R. Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12):2481–2495, 2017. [3] A. Bendale and T. E. Boult. Towards open set deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1563–1572, 2016. [4] P. Bevandić, I. Krešo, M. Oršić, and S. Šegvić. Discriminative out-of-distribution detection for semantic segmentation. arXiv preprint arXiv:1808.07703, 2018. [5] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017. [6] G. Franchi, A. Bursuc, E. Aldea, S. Dubuisson, and I. Bloch. TRADI: Tracking deep neural network weight distributions. In European Conference on Computer Vision (ECCV) 2020. Springer, 2020. [7] Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050–1059. PMLR, 2016. [8] Y. Geifman and R. El-Yaniv. Selective classification for deep neural networks. In Advances in Neural Information Processing Systems 30 (NIPS 2017), pages 4878–4887, 2017. [9] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations (ICLR 2015), 2015. [10] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On calibration of modern neural networks. In International Conference on Machine Learning, pages 1321–1330. PMLR, 2017. [11] D. Hendrycks, S. Basart, M. Mazeika, M. Mostajabi, J. Steinhardt, and D. Song. A benchmark for anomaly segmentation. arXiv preprint arXiv:1911.11132, 2019. [12] D. Hendrycks and K. Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In 5th International Conference on Learning Representations (ICLR 2017). OpenReview.net, 2017. [13] D. Hendrycks, M. Mazeika, and T. G. Dietterich. Deep anomaly detection with outlier exposure. In 7th International Conference on Learning Representations (ICLR 2019). OpenReview.net, 2019. [14] Y.-C. Hsu, Y. Shen, H. Jin, and Z. Kira. Generalized ODIN: Detecting out-of-distribution image without learning from out-of-distribution data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10951–10960, 2020. [15] A. Kendall and Y. Gal. What uncertainties do we need in Bayesian deep learning for computer vision? In Advances in Neural Information Processing Systems 30 (NIPS 2017), pages 5574–5584, 2017. [16] B. Lakshminarayanan, A. Pritzel, and C. Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems 30 (NIPS 2017), pages 6402–6413, 2017. [17] K. Lee, H. Lee, K. Lee, and J. Shin. Training confidence-calibrated classifiers for detecting out-of-distribution samples. In 6th International Conference on Learning Representations (ICLR 2018). OpenReview.net, 2018. [18] Y. Li and N. Vasconcelos. Background data resampling for outlier-aware classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13218–13227, 2020. [19] S. Liang, Y. Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In 6th International Conference on Learning Representations (ICLR 2018). OpenReview.net, 2018. [20] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015. [21] M. Masana, I. Ruiz, J. Serrat, J. van de Weijer, and A. M. López. Metric learning for novelty and anomaly detection. In British Machine Vision Conference 2018 (BMVC 2018), page 64. BMVA Press, 2018. [22] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. [23] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 427–436, 2015. [24] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015. [25] A. Vyas, N. Jammalamadaka, X. Zhu, D. Das, B. Kaul, and T. L. Willke. Out-of-distribution detection using an ensemble of self supervised leave-out classifiers. In Proceedings of the European Conference on Computer Vision (ECCV), pages 550–564, 2018. [26] D. Williams, M. Gadd, D. De Martini, and P. Newman. Fool me once: Robust selective segmentation via out-of-distribution detection with contrastive learning. arXiv preprint arXiv:2103.00869, 2021. [27] Y. Xia, Y. Zhang, F. Liu, W. Shen, and A. L. Yuille. Synthesize then compare: Detecting failures and anomalies for semantic segmentation. In European Conference on Computer Vision, pages 145–161. Springer, 2020. [28] O. Zendel, K. Honauer, M. Murschitz, D. Steininger, and G. F. Dominguez. WildDash - creating hazard-aware benchmarks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 402–416, 2018. [29] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2881–2890, 2017. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80851 | - |
| dc.description.abstract | Anomaly detection is crucial for many safety-critical applications, such as autonomous driving systems. With advances in robotics and computer vision, some work has addressed anomaly detection for image classification; anomaly detection for semantic segmentation, however, has received little prior attention and calls for deeper investigation. For conventional classification, previous work on anomaly detection typically collects classes absent from the training set and treats objects of those classes as assumed unknowns during training. This approach has two drawbacks: (1) during training we cannot know what unknown objects will appear at test time, and such objects may not even exist yet; (2) the trained model's accuracy on unknown objects fluctuates greatly depending on which classes are chosen as the assumed unknowns. Motivated by these drawbacks, we propose a novel method that synthesizes assumed-unknown data for anomaly detection in semantic segmentation. We design a masked gradient update module that generates auxiliary data (synthetic assumed-unknown data) near the decision boundary of in-distribution data points. We also propose a new loss function, different from the conventional cross-entropy, that makes the model focus on learning the data points near the decision boundary. We achieve strong experimental results on two anomaly-segmentation benchmarks, and ablation studies further confirm that the proposed modules are effective. Finally, we hope the ideas proposed here give future researchers more insight into anomaly segmentation, leading to more complete solutions and, in turn, to safer life-critical systems in practice. | zh_TW |
| dc.description.provenance | Made available in DSpace on 2022-11-24T03:19:04Z (GMT). No. of bitstreams: 1 U0001-2909202100282600.pdf: 10125165 bytes, checksum: 479752569c3d0ad0d0941078ff85478c (MD5) Previous issue date: 2021 | en |
| dc.description.tableofcontents | Verification Letter from the Oral Examination Committee i; Acknowledgements ii; Abstract (Chinese) iii; Abstract v; Contents vii; List of Figures ix; List of Tables x; Chapter 1 Introduction 1; Chapter 2 Related Work 4; 2.1 Anomaly Detection with Deep Networks 4; 2.2 Image Segmentation 5; 2.3 Anomaly Segmentation 6; Chapter 3 Method 7; 3.1 Preliminary 7; 3.2 Generating Auxiliary Data 9; 3.3 Anomaly-Aware Fine-tuning 10; Chapter 4 Experimental Results 13; 4.1 Training Setting 13; 4.2 Evaluation Metrics 14; 4.3 Selection Bias 15; 4.4 Result 16; 4.5 Ablation Study 17; 4.6 Visualizations on StreetHazards Dataset 18; Chapter 5 Conclusion 19; References 20; Appendix A: Analysis 25; A.1 Thresholding Analysis 25; A.2 Loss Analysis 27; A.3 More Experimental Results 29; A.4 More Visualizations on StreetHazards Dataset 30; A.5 Visualizations on Auxiliary Images 31 | |
| dc.language.iso | en | |
| dc.subject | 異常偵測 | zh_TW |
| dc.subject | 語意分割 | zh_TW |
| dc.subject | anomaly detection | en |
| dc.subject | semantic segmentation | en |
| dc.title | 利用合成的未知資料進行語意分割的異常偵測 | zh_TW |
| dc.title | Anomaly-Aware Semantic Segmentation by Leveraging Synthetic-Unknown Data | en |
| dc.date.schoolyear | 109-2 | |
| dc.description.degree | Master | |
| dc.contributor.oralexamcommittee | 陳文進(Hsin-Tsai Liu),余能豪(Chih-Yang Tseng),葉梅珍,陳奕廷 | |
| dc.subject.keyword | 語意分割,異常偵測, | zh_TW |
| dc.subject.keyword | semantic segmentation,anomaly detection, | en |
| dc.relation.page | 31 | |
| dc.identifier.doi | 10.6342/NTU202103443 | |
| dc.rights.note | Authorized (available within campus only) | |
| dc.date.accepted | 2021-10-07 | |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
| dc.contributor.author-dept | Graduate Institute of Computer Science and Information Engineering | zh_TW |
| Appears in Collections: | Department of Computer Science and Information Engineering | |
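The abstract above describes generating auxiliary "assumed-unknown" samples by gradient updates that push in-distribution points toward the classifier's decision boundary. The thesis code is not part of this record; as a minimal illustrative sketch of that idea (not the thesis's actual masked gradient update module), the snippet below performs gradient ascent on predictive entropy for a tiny logistic model, so a confidently in-distribution point drifts to the maximum-uncertainty region near the boundary. All names, weights, and step sizes here are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def synthesize_unknown(x, w, b, step=0.5, iters=200):
    """Gradient-ascent on predictive entropy H(sigmoid(w.x + b)).

    Chain rule: dH/dp = -log(p/(1-p)), dp/dz = p(1-p), dz/dx = w,
    so dH/dx = -log(p/(1-p)) * p * (1-p) * w.
    """
    x = x.astype(float).copy()
    for _ in range(iters):
        p = sigmoid(w @ x + b)
        grad = -np.log(p / (1.0 - p)) * p * (1.0 - p) * w
        x += step * grad  # move toward the maximum-entropy (boundary) region
    return x

# Illustrative 2-D setup: x_in is confidently in-distribution (p close to 1).
w = np.array([2.0, -1.0])
b = 0.5
x_in = np.array([3.0, 1.0])

x_aux = synthesize_unknown(x_in, w, b)
p_before = sigmoid(w @ x_in + b)   # near 1.0: confident prediction
p_after = sigmoid(w @ x_aux + b)   # near 0.5: sits on the decision boundary
print(p_before, p_after)
```

A full segmentation network would replace the logistic model, the update would be masked to selected pixels, and the resulting auxiliary samples would then be labeled as the unknown class during fine-tuning, but the boundary-seeking gradient step is the core mechanism the abstract refers to.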
Files in this item:
| File | Size | Format | |
|---|---|---|---|
| U0001-2909202100282600.pdf (access restricted to NTU campus IPs; use the VPN service from off campus) | 9.89 MB | Adobe PDF | |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
