Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74854
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 林軒田 | |
dc.contributor.author | Kuen-Han Tsai | en |
dc.contributor.author | 蔡昆翰 | zh_TW |
dc.date.accessioned | 2021-06-17T09:08:54Z | - |
dc.date.available | 2019-11-04 | |
dc.date.copyright | 2019-11-04 | |
dc.date.issued | 2019 | |
dc.date.submitted | 2019-10-28 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74854 | - |
dc.description.abstract | The problem of learning from label proportions (LLP) involves training a classifier with weak labels attached to bags of instances rather than strong labels on individual instances; the weak labels contain only the label proportion of each bag. The LLP problem is important for many practical applications, especially those in which data privacy or annotation cost allows only label proportions to be collected, and it has recently received considerable research attention. Most existing methods focus on extending supervised learning models to solve the LLP problem, but the weakly labeled nature of the data makes it hard to further improve classification performance from a supervised angle. In this thesis, we therefore propose a new method from the perspective of semi-supervised learning.
More specifically, we propose a novel model inspired by consistency regularization, a popular concept in semi-supervised learning that encourages the model to produce a decision boundary that better describes the data manifold. With the introduction of consistency regularization, we further extend our study to settings that better match practical needs, and show experimentally that the parameter-selection procedure can rely on label proportions alone. The experiments not only justify that LLP with consistency regularization achieves superior performance, but also demonstrate the practical usability of the proposed method. | zh_TW |
dc.description.abstract | The problem of learning from label proportions (LLP) involves training classifiers with weak labels on bags of instances, rather than strong labels on individual instances. The weak labels only contain the label proportion of each bag. The LLP problem is important for many practical applications that only allow label proportions to be collected because of data privacy or annotation cost, and has recently received lots of research attention. Most existing works focus on extending supervised learning models to solve the LLP problem, but the weak learning nature makes it hard to further improve LLP performance with a supervised angle. In this paper, we take a different angle from semi-supervised learning.
In particular, we propose a novel model inspired by consistency regularization, a popular concept in semi-supervised learning that encourages the model to produce a decision boundary that better describes the data manifold. With the introduction of consistency regularization, we further extend our study to non-uniform bag-generation and validation-based parameter-selection procedures that better match practical needs. Experiments not only justify that LLP with consistency regularization achieves superior performance, but also demonstrate the practical usability of the proposed procedures. | en |
dc.description.provenance | Made available in DSpace on 2021-06-17T09:08:54Z (GMT). No. of bitstreams: 1 ntu-108-R06922066-1.pdf: 759037 bytes, checksum: 35d5e7290a7fd3c6ff4a80a016ceea14 (MD5) Previous issue date: 2019 | en |
dc.description.tableofcontents | Acknowledgements ii
Abstract (Chinese) iii Abstract iv 1 Introduction 1 2 Background 5 2.1 Learning from label proportions 5 2.2 Proportion loss 6 2.3 Consistency regularization 7 3 Proposed Method 9 3.1 LLP with consistency regularization 9 4 Experiment 12 4.1 Datasets 12 4.2 Experiment Setup 12 4.3 Uniform bag generation 13 4.4 K-means bag generation 14 4.5 Validation metrics 16 4.6 Discussion 17 5 Related Work 19 6 Conclusion 21 Bibliography 22 | |
dc.language.iso | en | |
dc.title | Learning from Label Proportions with Consistency Regularization | zh_TW |
dc.title | Learning from Label Proportions with Consistency Regularization | en |
dc.type | Thesis | |
dc.date.schoolyear | 108-1 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 陳縕儂,李宏毅 | |
dc.subject.keyword | Learning from Label Proportions, Consistency Regularization, Semi-supervised Learning | zh_TW |
dc.subject.keyword | Learning from Label Proportions, Consistency Regularization, Semi-supervised Learning | en |
dc.relation.page | 26 | |
dc.identifier.doi | 10.6342/NTU201903956 | |
dc.rights.note | Paid authorization | |
dc.date.accepted | 2019-10-29 | |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
dc.contributor.author-dept | Graduate Institute of Computer Science and Information Engineering | zh_TW |
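The abstract describes combining a proportion loss (matching each bag's average prediction to its known label proportion) with a consistency-regularization term. The following is a minimal NumPy sketch of that kind of objective; all function names, shapes, and the squared-difference consistency form are illustrative assumptions, not code from the thesis itself.

```python
# Hypothetical sketch of an LLP objective of the kind the abstract describes:
# a per-bag proportion loss plus a consistency-regularization term.
# Names, shapes, and the alpha weighting are illustrative assumptions.
import numpy as np

def softmax(logits):
    """Row-wise softmax over class logits, shape (n_instances, n_classes)."""
    z = logits - logits.max(axis=1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def proportion_loss(bag_logits, bag_proportion, eps=1e-12):
    """Cross-entropy between a bag's true label proportion and the
    average predicted class distribution over its instances."""
    p_hat = softmax(bag_logits).mean(axis=0)  # mean prediction per class
    return -np.sum(bag_proportion * np.log(p_hat + eps))

def consistency_loss(logits, perturbed_logits):
    """Mean squared difference between predictions on instances and on
    perturbed views of them (one common form of consistency regularization)."""
    return np.mean((softmax(logits) - softmax(perturbed_logits)) ** 2)

def llp_objective(bag_logits, perturbed_bag_logits, bag_proportion, alpha=1.0):
    """Total loss for one bag: proportion loss + alpha * consistency term."""
    return (proportion_loss(bag_logits, bag_proportion)
            + alpha * consistency_loss(bag_logits, perturbed_bag_logits))
```

In this sketch the consistency term needs no labels at all, which is why such a regularizer can be borrowed from semi-supervised learning: it shapes the decision boundary using the instances themselves while the proportion loss supplies the only (weak) supervision.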
Appears in collections: | Department of Computer Science and Information Engineering
Files in this item:
File | Size | Format | |
---|---|---|---|
ntu-108-1.pdf (currently not authorized for public access) | 741.25 kB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.