Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93456

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 楊鈞澔 | zh_TW |
| dc.contributor.advisor | Chun-Hao Yang | en |
| dc.contributor.author | 陳思妤 | zh_TW |
| dc.contributor.author | Szu-Yu Chen | en |
| dc.date.accessioned | 2024-08-01T16:13:15Z | - |
| dc.date.available | 2024-08-02 | - |
| dc.date.copyright | 2024-08-01 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-07-29 | - |
| dc.identifier.citation | Ali, K. and Saenko, K. (2014). Confidence-rated multiple instance boosting for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Amores, J. (2013). Multiple instance classification: Review, taxonomy and comparative study. Artificial Intelligence, 201:81–105.
Andrews, S., Tsochantaridis, I., and Hofmann, T. (2002). Support vector machines for multiple-instance learning. In Advances in Neural Information Processing Systems, volume 15.
Carbonneau, M.-A., Cheplygina, V., Granger, E., and Gagnon, G. (2018). Multiple instance learning: A survey of problem characteristics and applications. Pattern Recognition, 77:329–353.
Chen, P.-Y., Chen, C.-C., Yang, C.-H., Chang, S.-M., and Lee, K.-J. (2017). milr: Multiple-instance logistic regression with lasso penalty. R Journal, 9:446–457.
Chen, Y., Bi, J., and Wang, J. (2007). MILES: Multiple-instance learning via embedded instance selection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28:1931–1947.
Dietterich, T. G., Lathrop, R. H., and Lozano-Pérez, T. (1997). Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1):31–71.
Gärtner, T., Flach, P. A., Kowalczyk, A., and Smola, A. J. (2002). Multi-instance kernels. In Proceedings of the Nineteenth International Conference on Machine Learning, ICML '02, pages 179–186. Morgan Kaufmann Publishers Inc.
Haußmann, M., Hamprecht, F. A., and Kandemir, M. (2017). Variational Bayesian multiple instance learning with Gaussian processes. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 810–819.
Ilse, M., Tomczak, J., and Welling, M. (2018). Attention-based deep multiple instance learning. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2127–2136.
Kandemir, M., Haußmann, M., Diego, F., Rajamani, K., Laak, J., and Hamprecht, F. (2016). Variational weakly supervised Gaussian processes. In British Machine Vision Conference.
Kandemir, M., Zhang, C., and Hamprecht, F. A. (2014). Empowering multiple instance histopathology cancer diagnosis by cell graphs. In MICCAI Proceedings, volume 8674, pages 228–235. Springer.
Ko, K. H., Jang, G., Park, K., and Kim, K. (2012). GPR-based landmine detection and identification using multiple features. International Journal of Antennas and Propagation, 2012.
Li, W. and Vasconcelos, N. (2015). Multiple instance learning for soft bags via top instances. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Maron, O. and Lozano-Pérez, T. (1997). A framework for multiple-instance learning. In Advances in Neural Information Processing Systems, volume 10.
Maron, O. and Ratan, A. L. (1998). Multiple-instance learning for natural scene classification. In International Conference on Machine Learning.
Polson, N. G., Scott, J. G., and Windle, J. (2013). Bayesian inference for logistic models using Pólya–Gamma latent variables. Journal of the American Statistical Association, 108(504):1339–1349.
Popescu, M. and Mahnot, A. (2012). Early illness recognition using in-home monitoring sensors and multiple instance learning. Methods of Information in Medicine, 51:359–367.
Rasmussen, C. E. and Williams, C. K. I. (2005). Gaussian Processes for Machine Learning. The MIT Press.
Raykar, V. C., Krishnapuram, B., Bi, J., Dundar, M., and Rao, R. B. (2008). Bayesian multiple instance learning: automatic feature selection and inductive transfer. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, pages 808–815.
Srinivasan, A., Muggleton, S., King, R. D., and Sternberg, M. J. E. (1994). Mutagenesis: ILP experiments in a non-determinate biological domain. In Proceedings of the Fourth Inductive Logic Programming Workshop.
Wang, F. and Pinar, A. (2021). The multiple instance learning Gaussian process probit model. In Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, volume 130 of Proceedings of Machine Learning Research, pages 3034–3042.
Wang, J. and Zucker, J.-D. (2000). Solving the multiple-instance problem: A lazy learning approach. In International Conference on Machine Learning.
Zhang, J., Marszałek, M., Lazebnik, S., and Schmid, C. (2007). Local features and kernels for classification of texture and object categories: A comprehensive study. International Journal of Computer Vision, 73(2):213–238.
Zhou, Z.-H., Sun, Y.-Y., and Li, Y.-F. (2008). Multi-instance learning by treating instances as non-i.i.d. samples. In International Conference on Machine Learning. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93456 | - |
| dc.description.abstract | 多實例學習 (MIL) 是弱監督學習問題,已被應用於許多領域。多實例(MI)數據包括了袋子和實例的概念,其中每個袋子中都包含了一些實例。另外,袋子的資訊是已知的,而實例的資訊是缺失的。Carbonneau et al. (2018) 提到由於多實例數據中存在缺失值,因此標籤歧義 (label ambiguity) 是在 MIL 中的常見問題。在本論文中,我們探討了某些可能造成標籤歧義的來源,並提出了一個新的袋子模型來解決此問題。我們提出的模型具有幾個優勢:除了放寬現有 MIL 方法中常用的嚴格假設,也能提供更多實例與袋子關聯性的資訊,並且可以與許多不同的實例分類方法合併使用,例如羅吉斯回歸。
本文討論的 MIL 模型是使用貝氏的吉布斯採樣進行模型推論。我們在吉布斯採樣過程中使用變量擴展的方法,具體來說是引入了玻利亞伽瑪的潛變量。我們對提出的袋子模型與現有方法(例如Haußmann et al. (2017) 中提出的方法)進行了比較分析,證明了我們方法的有效性。最後,通過模擬以及實際資料的實驗,驗證了我們方法的性能。 | zh_TW |
| dc.description.abstract | Multiple Instance Learning (MIL) is a weakly supervised learning problem, and it has been used in various fields. Multiple Instance (MI) data includes the concepts of bag and instance, where each bag contains several instances. Also, the bag information is observed, while the instance information is missing. Carbonneau et al. (2018) mention that label ambiguity is a common issue in MIL due to the missing values in the MI data. In this thesis, we investigate the sources of label ambiguity and propose a novel bag model to address this issue. Our proposed model offers several advantages: (i) It relaxes the strict MIL assumption commonly employed in existing MIL methods. (ii) It provides greater insight into the relationship between instances and their corresponding bags. (iii) It can be integrated with various classifiers at the instance level, such as logistic regression.
Inference for the MIL models discussed here is carried out with a Gibbs sampling scheme, a Bayesian approach. We employ a variable augmentation technique in the Gibbs sampling process, specifically the Pólya-Gamma augmentation. Comparative analysis between our proposed bag model and existing methods, such as the one presented in Haußmann et al. (2017), demonstrates the effectiveness of our approach. Finally, we validate the performance of our model through various simulations and real data experiments. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-08-01T16:13:15Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-08-01T16:13:15Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 致謝 i
摘要 iii
Abstract v
Contents vii
List of Figures ix
List of Tables xi
Chapter 1 Introduction 1
1.1 Multiple Instance Learning 1
1.2 Applications of MIL 2
1.3 Existing MIL Approaches 3
1.4 Label Ambiguity in MIL 4
1.5 Motivation 6
Chapter 2 Background 7
2.1 Multiple Instance Logistic Regression (MILR) 7
2.2 Gaussian Process MILR (GPMILR) 8
2.3 Pólya-Gamma Augmentation 10
Chapter 3 Methodology 11
3.1 Logistic Aggregation Model (LAM) 11
3.2 MILR-LAM 12
3.3 GPMILR-LAM 14
3.4 Comparison between LAM and the Bag Likelihood of VGPMIL 15
Chapter 4 Simulation 19
4.1 Impact of Label Noise on Model Performance 22
4.1.1 Results of Equal Bag Size 24
4.1.2 Results of Unequal Bag Size 27
4.2 Impact of Different Bag Size on Model Performance 28
4.2.1 Results 29
4.3 Impact of Varying Threshold Values on Model Performance 29
4.3.1 Results 30
Chapter 5 Real Data Experiment 31
5.1 Musk 32
5.1.1 Results 32
5.2 Mutagenesis 33
5.2.1 Results 35
Chapter 6 Conclusion 37
References 39 | - |
| dc.language.iso | en | - |
| dc.subject | 多重實例學習 | zh_TW |
| dc.subject | 標籤歧義 | zh_TW |
| dc.subject | 吉布斯取樣 | zh_TW |
| dc.subject | 玻利亞伽瑪擴充 | zh_TW |
| dc.subject | Pólya-Gamma augmentation | en |
| dc.subject | multiple instance learning | en |
| dc.subject | label ambiguity | en |
| dc.subject | Gibbs sampling | en |
| dc.title | 處理多重實例學習中標籤歧義的貝氏方法 | zh_TW |
| dc.title | A Bayesian Approach for Addressing Label Ambiguity in Multiple Instance Learning | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 陳裕庭;張升懋 | zh_TW |
| dc.contributor.oralexamcommittee | Yu-Ting Chen;Sheng-Mao Chang | en |
| dc.subject.keyword | 多重實例學習,標籤歧義,吉布斯取樣,玻利亞伽瑪擴充 | zh_TW |
| dc.subject.keyword | multiple instance learning,label ambiguity,Gibbs sampling,Pólya-Gamma augmentation | en |
| dc.relation.page | 42 | - |
| dc.identifier.doi | 10.6342/NTU202401899 | - |
| dc.rights.note | 同意授權(全球公開) | - |
| dc.date.accepted | 2024-07-31 | - |
| dc.contributor.author-college | 理學院 | - |
| dc.contributor.author-dept | 統計與數據科學研究所 | - |
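The abstract's inference scheme, Gibbs sampling with Pólya-Gamma augmentation (Polson et al., 2013), can be illustrated for plain Bayesian logistic regression. This is a minimal sketch, not the thesis's MILR-LAM or GPMILR-LAM sampler: the truncated sum-of-gammas PG draw, the prior variance `tau2`, and all function names here are assumptions for illustration only. Conditional on Pólya-Gamma latents ω, the logistic posterior for β becomes Gaussian, which is what makes the Gibbs scheme tractable.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pg(b, c, K=100):
    """Approximate draw from the Polya-Gamma PG(b, c) distribution,
    elementwise over c, via the truncated sum-of-gammas representation
    PG(b, c) = (1/2pi^2) * sum_k g_k / ((k - 1/2)^2 + c^2/(4 pi^2))."""
    c = np.atleast_1d(c)
    k = np.arange(1, K + 1)
    g = rng.gamma(shape=b, scale=1.0, size=(len(c), K))   # g_k ~ Gamma(b, 1)
    denom = (k - 0.5) ** 2 + (c[:, None] / (2 * np.pi)) ** 2
    return (g / denom).sum(axis=1) / (2 * np.pi ** 2)

def gibbs_logistic(X, y, n_iter=500, burn=100, tau2=100.0):
    """Gibbs sampler for Bayesian logistic regression with a
    N(0, tau2 * I) prior on beta, using Polya-Gamma augmentation."""
    n, p = X.shape
    beta = np.zeros(p)
    kappa = y - 0.5                       # kappa_i = y_i - 1/2
    B0_inv = np.eye(p) / tau2             # prior precision
    draws = []
    for t in range(n_iter):
        # Step 1: omega_i | beta ~ PG(1, x_i' beta)
        omega = sample_pg(1.0, X @ beta)
        # Step 2: beta | omega ~ N(V m, V), the conditionally Gaussian update
        V = np.linalg.inv(X.T @ (omega[:, None] * X) + B0_inv)
        m = V @ (X.T @ kappa)
        beta = rng.multivariate_normal(m, V)
        if t >= burn:
            draws.append(beta)
    return np.array(draws)

# Synthetic check: the posterior mean should sit near the true coefficients.
n, p = 400, 2
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)
draws = gibbs_logistic(X, y)
print(draws.mean(axis=0))
```

The truncated series is an approximation; exact PG samplers (e.g. the `pypolyagamma` package) would replace `sample_pg` in practice. The thesis's bag-level models add an aggregation layer on top of this instance-level logistic machinery.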
| Appears in Collections: | 統計與數據科學研究所 | |

Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-112-2.pdf | 1.09 MB | Adobe PDF | View/Open |

All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
