Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93772

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 吳家麟 | zh_TW |
| dc.contributor.advisor | Ja-Ling Wu | en |
| dc.contributor.author | 李沛剛 | zh_TW |
| dc.contributor.author | Pei-Kang Lee | en |
| dc.date.accessioned | 2024-08-07T17:15:07Z | - |
| dc.date.available | 2024-08-08 | - |
| dc.date.copyright | 2024-08-07 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-08-02 | - |
| dc.identifier.citation | [1] Y. H. Ahn, G.-M. Park, and S. T. Kim. LINE: Out-of-distribution detection by leveraging important neurons. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19852–19862, 2023.
[2] M. B. Ammar, N. Belkhir, S. Popescu, A. Manzanera, and G. Franchi. NECO: NEural collapse based out-of-distribution detection. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=9ROuKblmi7.
[3] Y. Bai, Z. Han, C. Zhang, B. Cao, X. Jiang, and Q. Hu. ID-like prompt learning for few-shot out-of-distribution detection. arXiv preprint arXiv:2311.15243, 2024. Available at https://arxiv.org/abs/2311.15243.
[4] J. Bitterwolf, M. Müller, and M. Hein. In or out? Fixing ImageNet out-of-distribution detection evaluation. In Proceedings of the 40th International Conference on Machine Learning, pages 105–140, 2023.
[5] S. Changpinyo, P. K. Sharma, N. Ding, and R. Soricut. Conceptual 12M: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3557–3567, 2021. doi: 10.1109/CVPR46437.2021.00356.
[6] J. Chen, Y. Li, X. Wu, Y. Liang, and S. Jha. ATOM: Robustifying out-of-distribution detection using outlier mining. In N. Oliver, F. Pérez-Cruz, S. Kramer, J. Read, and J. A. Lozano, editors, Machine Learning and Knowledge Discovery in Databases. Research Track, pages 430–445, Cham, 2021. Springer International Publishing. ISBN 978-3-030-86523-8.
[7] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 3606–3613, 2014. doi: 10.1109/CVPR.2014.461.
[8] A. Djurisic, N. Bozanic, A. Ashok, and R. Liu. Extremely simple activation shaping for out-of-distribution detection. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=ndYXTEL6cZz.
[9] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=YicbFdNTTy.
[10] X. Du, Z. Wang, M. Cai, and S. Li. VOS: Learning what you don't know by virtual outlier synthesis. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=TW7d65uYu5M.
[11] S. Esmaeilpour, B. Liu, E. Robertson, and L. Shu. Zero-shot out-of-distribution detection based on the pre-trained model CLIP. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 6568–6576, June 2022. doi: 10.1609/aaai.v36i6.20610. URL https://ojs.aaai.org/index.php/AAAI/article/view/20610.
[12] A. Fang, G. Ilharco, M. Wortsman, Y. Wan, V. Shankar, A. Dave, and L. Schmidt. Data determines distributional robustness in contrastive language image pre-training (CLIP). In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 6216–6234. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/fang22a.html.
[13] C. Fellbaum. WordNet: An Electronic Lexical Database. Bradford Books, 1998. ISBN 9780262561167. URL https://mitpress.mit.edu/9780262561167/.
[14] D. Hendrycks and K. Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Hkg4TI9xl.
[15] D. Hendrycks, M. Mazeika, and T. Dietterich. Deep anomaly detection with outlier exposure. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HyxCxhRcY7.
[16] D. Hendrycks, S. Basart, M. Mazeika, A. Zou, J. Kwon, M. Mostajabi, J. Steinhardt, and D. Song. Scaling out-of-distribution detection for real-world settings. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 8759–8773. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/hendrycks22a.html.
[17] R. Huang, A. Geng, and Y. Li. On the importance of gradients for detecting distributional shifts in the wild. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 677–689. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/063e26c670d07bb7c4d30e6fc69fe056-Paper.pdf.
[18] M. Jia, L. Tang, B.-C. Chen, C. Cardie, S. Belongie, B. Hariharan, and S.-N. Lim. Visual prompt tuning. In Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIII, pages 709–727, Berlin, Heidelberg, 2022. Springer-Verlag. ISBN 978-3-031-19826-7. doi: 10.1007/978-3-031-19827-4_41. URL https://doi.org/10.1007/978-3-031-19827-4_41.
[19] X. Jiang, F. Liu, Z. Fang, H. Chen, T. Liu, F. Zheng, and B. Han. Negative label guided OOD detection with pretrained vision-language models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=xUO1HXz4an.
[20] J. Lee and G. AlRegib. Gradients as a measure of uncertainty in neural networks. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), pages 2416–2420, 2020. doi: 10.1109/ICIP40778.2020.9190679.
[21] K. Lee, H. Lee, K. Lee, and J. Shin. Training confidence-calibrated classifiers for detecting out-of-distribution samples. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=ryiAv2xAZ.
[22] K. Lee, K. Lee, H. Lee, and J. Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 7167–7177, Red Hook, NY, USA, 2018. Curran Associates Inc.
[23] T. Li, G. Pang, X. Bai, W. Miao, and J. Zheng. Learning transferable negative prompts for out-of-distribution detection. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. Available at https://arxiv.org/abs/2404.03248.
[24] S. Liang, Y. Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=H1VGkIxRZ.
[25] W. Liu, X. Wang, J. Owens, and Y. Li. Energy-based out-of-distribution detection. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H.-T. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 21464–21475. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/f5496252609c43eb8a3d147ab9b9c006-Paper.pdf.
[26] Y. Ming, Z. Cai, J. Gu, Y. Sun, W. Li, and Y. Li. Delving into out-of-distribution detection with vision-language representations. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 35087–35102. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/e43a33994a28f746dcfd53eb51ed3c2d-Paper-Conference.pdf.
[27] Y. Ming, Y. Fan, and Y. Li. POEM: Out-of-distribution detection with posterior sampling. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 15650–15665. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/ming22a.html.
[28] A. Miyai, Q. Yu, G. Irie, and K. Aizawa. LoCoOp: Few-shot out-of-distribution detection via prompt learning. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 76298–76310. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/f0606b882692637835e8ac981089eccd-Paper-Conference.pdf.
[29] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 427–436, 2015. doi: 10.1109/CVPR.2015.7298640.
[30] J. Nie, Y. Zhang, Z. Fang, T. Liu, B. Han, and X. Tian. Out-of-distribution detection with negative prompts. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=nanyAujl6e.
[31] J. Park, Y. G. Jung, and A. B. J. Teoh. Nearest neighbor guidance for out-of-distribution detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1686–1695, 2023.
[32] S. Park, J. Mok, D. Jung, S. Lee, and S. Yoon. On the powerfulness of textual outlier exposure for visual OOD detection. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 51675–51687. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/a2374637af47ac9471b43c99b68acf27-Paper-Conference.pdf.
[33] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, and I. Sutskever. Learning transferable visual models from natural language supervision. In M. Meila and T. Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8748–8763. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/radford21a.html.
[34] M. Raghu, T. Unterthiner, S. Kornblith, C. Zhang, and A. Dosovitskiy. Do vision transformers see like convolutional neural networks? In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 12116–12128. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/652cf38361a209088302ba2b8b7f51e0-Paper.pdf.
[35] J. Ren, S. Fort, J. Z. Liu, A. G. Roy, S. Padhy, and B. Lakshminarayanan. A simple fix to Mahalanobis distance for improving near-OOD detection. CoRR, abs/2106.09022, 2021. URL https://arxiv.org/abs/2106.09022.
[36] C. S. Sastry and S. Oore. Detecting out-of-distribution examples with Gram matrices. In H. D. III and A. Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 8491–8501. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/sastry20a.html.
[37] Y. Shu, X. Guo, J. Wu, X. Wang, J. Wang, and M. Long. CLIPood: Generalizing CLIP to out-of-distributions. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, and J. Scarlett, editors, Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 31716–31731. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/shu23a.html.
[38] Y. Sun, C. Guo, and Y. Li. ReAct: Out-of-distribution detection with rectified activations. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 144–157. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/01894d6f048493d2cacde3c579c315a3-Paper.pdf.
[39] Y. Sun, Y. Ming, X. Zhu, and Y. Li. Out-of-distribution detection with deep nearest neighbors. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 20827–20840. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/sun22d.html.
[40] L. Tao, X. Du, J. Zhu, and Y. Li. Non-parametric outlier synthesis. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=JHklpEZqduQ.
[41] D. Ulmer, L. Meijerink, and G. Cinà. Trust issues: Uncertainty estimation does not enable reliable OOD detection on medical tabular data. In E. Alsentzer, M. B. A. McDermott, F. Falck, S. K. Sarkar, S. Roy, and S. L. Hyland, editors, Proceedings of the Machine Learning for Health NeurIPS Workshop, volume 136 of Proceedings of Machine Learning Research, pages 341–354. PMLR, 11 Dec 2020. URL https://proceedings.mlr.press/v136/ulmer20a.html.
[42] G. Van Horn, O. Mac Aodha, Y. Song, Y. Cui, C. Sun, A. Shepard, H. Adam, P. Perona, and S. Belongie. The iNaturalist species classification and detection dataset. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8769–8778, 2018. doi: 10.1109/CVPR.2018.00914.
[43] S. Vaze, K. Han, A. Vedaldi, and A. Zisserman. Open-set recognition: A good closed-set classifier is all you need. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=5hLP5JY9S2d.
[44] H. Wang, Z. Li, L. Feng, and W. Zhang. ViM: Out-of-distribution with virtual-logit matching. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. Available at https://arxiv.org/abs/2203.10807.
[45] H. Wei, R. Xie, H. Cheng, L. Feng, B. An, and Y. Li. Mitigating neural network overconfidence with logit normalization. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 23631–23644. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/wei22d.html.
[46] Y. Wei, H. Hu, Z. Xie, Z. Liu, Z. Zhang, Y. Cao, J. Bao, D. Chen, and B. Guo. Improving CLIP fine-tuning performance. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pages 5416–5426, 2023. doi: 10.1109/ICCV51070.2023.00501.
[47] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pages 3485–3492, 2010. doi: 10.1109/CVPR.2010.5539970.
[48] K. Xu, R. Chen, G. Franchi, and A. Yao. Scaling for training time and post-hoc out-of-distribution detection enhancement. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=RDSTjtnqCg.
[49] J. Zhang, J. Yang, P. Wang, H. Wang, Y. Lin, H. Zhang, Y. Sun, X. Du, K. Zhou, W. Zhang, Y. Li, Z. Liu, Y. Chen, and H. Li. OpenOOD v1.5: Enhanced benchmark for out-of-distribution detection, 2023. URL https://arxiv.org/abs/2306.09301.
[50] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6):1452–1464, 2018. doi: 10.1109/TPAMI.2017.2723009.
[51] K. Zhou, J. Yang, C. C. Loy, and Z. Liu. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9):2337–2348, 2022. doi: 10.1007/s11263-022-01653-1. URL https://doi.org/10.1007/s11263-022-01653-1. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93772 | - |
| dc.description.abstract | 分佈外(Out-of-Distribution, OOD)檢測的目標是賦予模型能力,使其能識別非訓練樣本分佈的輸入。此能力對於將模型部署於真實環境,尤其在醫療診斷與自動駕駛等安全關鍵領域,顯得格外重要。傳統上,許多研究利用捲積神經網絡(CNN)透過視覺特徵來進行OOD檢測。近年來,隨著視覺語言模型(Vision-Language Model, VLM)的興起,開啟了利用這些模型的綜合處理能力進行OOD檢測的新途徑。這些模型結合了標籤的語義與視覺特徵,進行零樣本或少樣本學習,以提高模型在多變環境中的適應性和效能。在本論文中,我們提出了一種創新的類別標籤語意集成方法,利用預訓練視覺語言模型的強大知識庫,透過類別標籤語意相近的特徵,使模型學習到更精確的類別特徵。此外,我們進一步結合負語意標籤進行少樣本訓練。實驗結果顯示,在以ImageNet-1K作為分佈內資料集時,我們的方法相較於現有基於VLM的方法,在分佈外資料集上顯著降低了FPR95,平均減少了11.33個百分點,並將AUROC平均提升了2.47個百分點,顯示出顯著的效能增益。 | zh_TW |
| dc.description.abstract | Out-of-Distribution (OOD) detection aims to equip models with the ability to recognize inputs that fall outside the training distribution. This capability is crucial when deploying models in real-world settings, particularly in safety-critical areas such as medical diagnostics and autonomous driving. Traditionally, many studies have employed Convolutional Neural Networks (CNNs) to perform OOD detection using visual features alone. Recently, with the advent of Vision-Language Models (VLMs), a new line of work has emerged that leverages these models' joint image-text representations for OOD detection. Such models combine the semantics of class labels with visual features to enable zero-shot or few-shot learning, improving adaptability and performance in diverse environments. In this work, we leverage the pretrained knowledge of VLMs and introduce a method called the Positive Label Semantic Ensemble: the model learns more precise class features by harnessing features semantically related to the class labels. In addition, we incorporate negative semantic labels into our few-shot training. Experimental results show that, with ImageNet-1K as the in-distribution dataset, our method reduces FPR95 by an average of 11.33 percentage points and raises AUROC by an average of 2.47 percentage points compared to existing VLM-based methods, a substantial performance gain. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-08-07T17:15:07Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-08-07T17:15:07Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 摘要 ii
Abstract iii
Contents v
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
Chapter 2 Related Works 5
2.1 Traditional OOD Detection 5
2.2 CLIP-based OOD Detection 7
Chapter 3 Proposed Method 8
3.1 Preliminaries 8
3.1.1 Problem Statement 8
3.1.2 CLIP and Prompt Learning 8
3.2 Proposed Approach 10
3.2.1 Auxiliary Data Preparation 12
3.2.2 Positive Label Semantic Ensemble 13
3.2.3 Negative Label Mining 18
3.2.4 Training Procedure and Loss Function 20
3.2.5 Enhance Performance Through Visual Prompt Tuning 25
3.2.6 Inference 28
Chapter 4 Experiments 30
4.1 Experiment Setup 30
4.2 Experimental Results 32
Chapter 5 Discussion 37
5.1 Performance Analysis Across Different Experimental Phases 37
5.2 Selection of Negative Embedding for ID Loss 39
5.3 Selection of Negative Embedding for OOD Loss 40
5.4 Role of Positive Label Semantic Ensemble 41
5.5 Prompt Engineering 41
5.6 Analysis of Visual Prompt Tuning Integration 42
5.7 Comparison of Different Numbers of Training Samples in Few-Shot Learning 43
Chapter 6 Conclusion 44
References 46 | - |
| dc.language.iso | en | - |
| dc.subject | 預訓練視覺語言模型 | zh_TW |
| dc.subject | 分佈外檢測 | zh_TW |
| dc.subject | 異常檢測 | zh_TW |
| dc.subject | Anomaly detection | en |
| dc.subject | Out-of-Distribution Detection | en |
| dc.subject | Vision-Language Model | en |
| dc.title | 超越負語意標籤: 透過預訓練視覺語言模型的類別標籤語意集成來增強少樣本分佈外檢測效能 | zh_TW |
| dc.title | Beyond Negative Label: Advancing Few-Shot Out-of-Distribution Detection Performance via Positive Label Semantic Ensemble in Pretrained Vision-Language Model | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | Master's | - |
| dc.contributor.oralexamcommittee | 許永真;陳文進;胡敏君;陳駿丞 | zh_TW |
| dc.contributor.oralexamcommittee | Yung-jen Hsu;Wen-Chin Chen;Min-Chun Hu;Jun-Cheng Chen | en |
| dc.subject.keyword | 分佈外檢測, 預訓練視覺語言模型, 異常檢測 | zh_TW |
| dc.subject.keyword | Out-of-Distribution Detection, Vision-Language Model, Anomaly Detection | en |
| dc.relation.page | 56 | - |
| dc.identifier.doi | 10.6342/NTU202402927 | - |
| dc.rights.note | Not authorized | - |
| dc.date.accepted | 2024-08-06 | - |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
| dc.contributor.author-dept | Department of Computer Science and Information Engineering | - |
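The abstract above describes a CLIP-style detector that ensembles semantically related "positive" label embeddings and contrasts them with mined negative labels. As a rough orientation for readers of this record, the snippet below is a minimal sketch of that general scoring style, in the spirit of the MCM/NegLabel line of work cited in references [26] and [19]; it is not the thesis's actual Positive Label Semantic Ensemble implementation, and all function names and inputs are hypothetical placeholders (embeddings are assumed to come from CLIP's image and text encoders).

```python
# Illustrative sketch only -- not the thesis's implementation. Embeddings are
# taken as given (e.g., produced by CLIP's encoders); all names are hypothetical.
import numpy as np

def normalize(x: np.ndarray) -> np.ndarray:
    """L2-normalize along the last axis, as CLIP does before cosine similarity."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def id_score(image_emb: np.ndarray,
             positive_label_embs: list,
             negative_label_embs: np.ndarray,
             temperature: float = 0.01) -> float:
    """Higher score => more likely in-distribution (ID).

    positive_label_embs: one (n_related_terms, d) array per ID class, e.g.
        embeddings of prompts built from the class label and related terms.
    negative_label_embs: (n_neg, d) embeddings of mined negative labels.
    """
    img = normalize(image_emb)
    # Ensemble each class's semantically related prompts into one prototype.
    class_protos = normalize(np.stack(
        [normalize(e).mean(axis=0) for e in positive_label_embs]))
    pos_sims = class_protos @ img                      # (n_classes,)
    neg_sims = normalize(negative_label_embs) @ img    # (n_neg,)
    # Softmax over positive and negative similarities together; the ID score
    # is the probability mass on the best-matching ID class.
    logits = np.concatenate([pos_sims, neg_sims]) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return float(probs[:len(pos_sims)].max())

# Toy usage with random stand-in embeddings (10 ID classes, 3 related prompts
# each, 50 mined negative labels, 512-dimensional CLIP-like features).
rng = np.random.default_rng(0)
pos = [rng.normal(size=(3, 512)) for _ in range(10)]
neg = rng.normal(size=(50, 512))
print(f"ID score: {id_score(rng.normal(size=512), pos, neg):.4f}")
```

A detector built on such a score would flag samples below a chosen threshold as OOD; the FPR95 and AUROC figures quoted in the abstract summarize how well the score separates ID from OOD data.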
| Appears in Collections: | Department of Computer Science and Information Engineering | |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-112-2.pdf (Restricted Access) | 4.45 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
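For completeness, the two evaluation metrics quoted in the abstract, FPR95 and AUROC, can be computed from per-sample scores as follows. This is a generic sketch under the usual convention that higher scores indicate in-distribution data; it is not the thesis's evaluation code.

```python
import numpy as np

def fpr_at_95_tpr(id_scores: np.ndarray, ood_scores: np.ndarray) -> float:
    """FPR95: fraction of OOD samples accepted when the threshold is set so
    that 95% of in-distribution samples are (correctly) accepted."""
    threshold = np.percentile(id_scores, 5.0)  # 95% of ID scores lie above it
    return float(np.mean(ood_scores >= threshold))

def auroc(id_scores: np.ndarray, ood_scores: np.ndarray) -> float:
    """AUROC via the Mann-Whitney U statistic: the probability that a random
    ID sample scores above a random OOD sample. Assumes continuous scores,
    so ties are negligible and no tie-averaging is applied."""
    n_id, n_ood = len(id_scores), len(ood_scores)
    scores = np.concatenate([id_scores, ood_scores])
    ranks = scores.argsort().argsort() + 1             # 1-based ranks
    u = ranks[:n_id].sum() - n_id * (n_id + 1) / 2     # rank-sum form of U
    return float(u / (n_id * n_ood))

# Toy check: well-separated score distributions give low FPR95 and high AUROC.
rng = np.random.default_rng(0)
id_scores = rng.normal(1.0, 0.5, size=5000)   # ID samples score higher on average
ood_scores = rng.normal(0.0, 0.5, size=5000)
print(f"FPR95: {fpr_at_95_tpr(id_scores, ood_scores):.4f}")
print(f"AUROC: {auroc(id_scores, ood_scores):.4f}")
```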
