Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51316
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳良基(Liang-Gee Chen) | |
dc.contributor.author | Che-Wei Chang | en |
dc.contributor.author | 張哲偉 | zh_TW |
dc.date.accessioned | 2021-06-15T13:30:18Z | - |
dc.date.available | 2018-03-08 | |
dc.date.copyright | 2016-03-08 | |
dc.date.issued | 2016 | |
dc.date.submitted | 2016-02-03 | |
dc.identifier.citation | [1] T. Yeh and T. Darrell, "Dynamic visual category learning," in Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pp. 1-8, June 2008.
[2] F. Nater, T. Tommasi, H. Grabner, L. Van Gool, and B. Caputo, "Transferring activities: Updating human behavior analysis," in Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, pp. 1737-1744, Nov 2011.
[3] B.-F. Zhang, J.-S. Su, and X. Xu, "A class-incremental learning method for multi-class support vector machines in text classification," in Machine Learning and Cybernetics, 2006 International Conference on, pp. 2581-2585, Aug 2006.
[4] O. Oreifej and Z. Liu, "HON4D: Histogram of oriented 4D normals for activity recognition from depth sequences," in Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pp. 716-723, June 2013.
[5] Y. Freund and R. E. Schapire, "A short introduction to boosting," Journal of Japanese Society for Artificial Intelligence, 1999.
[6] T. K. Ho, "Random decision forests," in Document Analysis and Recognition, 1995, Proceedings of the Third International Conference on, vol. 1, pp. 278-282, Aug 1995.
[7] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, "ImageNet Large Scale Visual Recognition Challenge," International Journal of Computer Vision (IJCV), pp. 1-42, April 2015.
[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems 25 (F. Pereira, C. Burges, L. Bottou, and K. Weinberger, eds.), pp. 1097-1105, Curran Associates, Inc., 2012.
[9] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems 25 (F. Pereira, C. Burges, L. Bottou, and K. Weinberger, eds.), pp. 1097-1105, 2012.
[10] E. Bart and S. Ullman, "Cross-generalization: Learning novel classes from a single example by feature replacement," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[11] I. Kuzborskij, F. Orabona, and B. Caputo, "From N to N+1: Multiclass transfer incremental learning," in Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pp. 3358-3365, June 2013.
[12] L. Jie, T. Tommasi, and B. Caputo, "Multiclass transfer learning from unconstrained priors," in Computer Vision (ICCV), 2011 IEEE International Conference on, pp. 1863-1870, Nov 2011.
[13] C. Lampert, H. Nickisch, and S. Harmeling, "Learning to detect unseen object classes by between-class attribute transfer," in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 951-958, June 2009.
[14] P. Bodesheim, A. Freytag, E. Rodner, M. Kemmler, and J. Denzler, "Kernel null space methods for novelty detection," in Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pp. 3374-3381, June 2013.
[15] Z. Zhou and Chen, "Hybrid decision tree," Knowledge-Based Systems, pp. 515-528, 2002.
[16] J. Schlimmer and R. H. Granger, "Incremental learning from noisy data," Machine Learning, vol. 1, no. 3, pp. 317-354, 1986.
[17] N. S. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," The American Statistician.
[18] T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka, "Distance-based image classification: Generalizing to new classes at near-zero cost," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 35, pp. 2624-2637, Nov 2013.
[19] K. Q. Weinberger, J. Blitzer, and L. K. Saul, "Distance metric learning for large margin nearest neighbor classification," in NIPS, MIT Press, 2006.
[20] D. M. Blei, A. Y. Ng, and M. I. Jordan, "Latent Dirichlet allocation," The Journal of Machine Learning Research, 2003.
[21] Q. Liu, R. Huang, H. Lu, and S. Ma, "Face recognition using kernel-based Fisher discriminant analysis," in Automatic Face and Gesture Recognition, 2002, Proceedings, Fifth IEEE International Conference on, pp. 197-201, May 2002.
[22] C. Huang, H. Ai, T. Yamashita, S. Lao, and M. Kawade, "Incremental learning of boosted face detector," in Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on, pp. 1-8, Oct 2007.
[23] Y. Qin, H. R. Karimi, D. Li, S. Lun, and A. Zhang, "A Mahalanobis hyperellipsoidal learning machine class incremental learning algorithm," pp. 1-5, 2014.
[24] A. Rahman and S. Tasnim, "Ensemble classifiers and their applications: A review," CoRR, vol. abs/1404.4088, 2014.
[25] R. Polikar, L. Udpa, S. Udpa, and V. Honavar, "Learn++: An incremental learning algorithm for multilayer perceptron networks," vol. 6, pp. 3414-3417, 2000.
[26] M. Muhlbaier, A. Topalis, and R. Polikar, "Learn++.NC: Combining ensemble of classifiers with dynamically weighted consult-and-vote for efficient incremental learning of new classes," Neural Networks, IEEE Transactions on, vol. 20, pp. 152-168, Jan 2009.
[27] S. Ruping, "Incremental learning with support vector machines," in Data Mining, 2001. ICDM 2001, Proceedings IEEE International Conference on, pp. 641-642, 2001.
[28] A. Bordes, S. Ertekin, J. Weston, and L. Bottou, "Fast kernel classifiers with online and active learning," Journal of Machine Learning Research, vol. 6, pp. 1579-1619, September 2005.
[29] B. Lakshminarayanan, D. M. Roy, and Y. W. Teh, "Mondrian forests: Efficient online random forests," in Advances in Neural Information Processing Systems 27 (Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Weinberger, eds.), pp. 3140-3148, Curran Associates, Inc., 2014.
[30] S. Okada and T. Nishida, "Online incremental clustering with distance metric learning for high dimensional data," in Neural Networks (IJCNN), The 2011 International Joint Conference on, pp. 2047-2054, July 2011.
[31] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273-297, 1995.
[32] B. T. Smith, "Lagrange multipliers tutorial in the context of support vector machines," tech. rep., Engineering and Applied Science, Memorial University of Newfoundland, 2004.
[33] J. C. Platt, "Sequential minimal optimization: A fast algorithm for training support vector machines," tech. rep., Microsoft Research, 1998.
[34] C.-J. Hsieh, K.-W. Chang, C.-J. Lin, S. S. Keerthi, and S. Sundararajan, "A dual coordinate descent method for large-scale linear SVM," in Proceedings of the 25th International Conference on Machine Learning, ICML '08, (New York, NY, USA), pp. 408-415, ACM, 2008.
[35] H. W. Kuhn and A. W. Tucker, "Nonlinear programming," in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, (Berkeley, Calif.), pp. 481-492, University of California Press, 1951.
[36] J. Milgram, M. Cheriet, R. Sabourin, and École de Technologie Supérieure, Montréal, "One against one or one against all: Which one is better for handwriting recognition with SVMs," in Proceedings of 10th International Workshop on Frontiers in Handwriting Recognition, 2006.
[37] S. Sadanand and J. Corso, "Action bank: A high-level representation of activity in video," in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 1234-1241, June 2012.
[38] K. Derpanis, M. Sizintsev, K. Cannons, and R. Wildes, "Efficient action spotting based on a spacetime oriented structure representation," in Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 1990-1997, June 2010. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51316 | - |
dc.description.abstract | In recent years, machine learning has played an increasingly important role in computer vision; much research and many applications in the field are inseparable from machine learning techniques. Beyond the visual applications we already know, the demands of smart living will combine machine learning to provide many more applications in the future, such as home-care surveillance, intelligent assistants, and robots, thoroughly changing everyone's life.
However, machine learning still has a long way to go before it can be fully applied in our daily lives, and current algorithms still have many limitations. First, most machine learning algorithms today handle static problems: in the typical setting, we provide complete training data with class labels and train a model, but the images and data we encounter in daily life are dynamic and full of variation, which greatly increases the difficulty of recognition. Another problem is that a recognition system cannot handle categories it has never learned. For a typical machine learning algorithm, learning new knowledge is expensive because the whole model must be retrained; we therefore need a method that lets the machine learn incrementally and efficiently. In this thesis, we first introduce several incremental learning algorithms and applications, along with the difficulties and challenges incremental learning faces. Our incremental learning system is based on the support vector machine (SVM) algorithm and extends new classes onto the original model. We further improve the original algorithm so that the system can efficiently learn dynamic image data, selecting and retaining representative samples, which greatly reduces memory usage while keeping accuracy within an acceptable error range. Finally, to meet future demands for real-time interaction, we propose a possible hardware architecture to accelerate the learning system. In summary, our system targets image and video recognition; we propose an SVM-based incremental learning system whose main contribution is the ability to incrementally learn new classes while improving memory usage and online computation time. | zh_TW |
dc.description.abstract | Machine learning has received much attention in the computer vision community in the past few years and is involved in a wide range of applications. Future applications such as home-care surveillance, intelligent agents, and robotics have become more and more popular in recent years.
However, there are still many limitations to applying machine learning techniques in real-world learning scenarios. Most current visual learning algorithms deal with static recognition problems, assuming that the number of categories and the training data are fixed. Another problem is that the recognition system cannot handle unseen categories. To learn new knowledge, it is costly to retrain the whole system each time a new category is presented. Therefore, we need to find a way to make the robotic system learn incrementally and efficiently. In this thesis, a novel incremental learning algorithm is presented. Our incremental learning system is based on an SVM learning model and learns new classes in an online scenario. We propose a novel incremental strategy to extend our model, and we learn with Learning Vectors, which are proposed to select representative samples for incremental learning and can largely reduce data storage. In addition, we adopt online training techniques in our learning algorithm to learn streaming data efficiently. Finally, we present the hardware architecture design for our learning system. With the acceleration of the training process, the system can deal with new knowledge instantly, making it suitable for many real-world visual learning applications such as human action recognition and multiple object tracking. To sum up, we propose an SVM-based incremental learning system which can learn incrementally and largely reduce memory usage, with an acceptable decrease in accuracy compared with retraining the whole system. | en |
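The abstract above describes the approach only at a high level. The following is a minimal illustrative sketch (not the thesis's actual algorithm or code) of the core idea: a one-versus-rest SVM ensemble that grows by one binary classifier per new class, while keeping only each class's support vectors as compact "learning vectors" to bound memory. The class name `IncrementalOVRSVM`, the use of scikit-learn's `SVC`, and all parameters are assumptions made for illustration.

```python
# Illustrative sketch only: one-vs-rest SVMs that grow class by class,
# retaining only each class's support vectors ("learning vectors").
import numpy as np
from sklearn.svm import SVC


class IncrementalOVRSVM:
    def __init__(self, C=1.0):
        self.C = C
        self.memory = {}       # label -> retained representative samples
        self.classifiers = {}  # label -> fitted binary SVC

    def add_class(self, label, X):
        """Register a new class and refit one binary SVM per known class,
        using only the compact per-class memory as training data."""
        self.memory[label] = np.asarray(X, dtype=float)
        if len(self.memory) < 2:
            return  # a binary SVM needs at least two classes
        for lab in self.memory:
            pos = self.memory[lab]
            neg = np.vstack([m for l, m in self.memory.items() if l != lab])
            Xtr = np.vstack([pos, neg])
            ytr = np.r_[np.ones(len(pos)), -np.ones(len(neg))]
            self.classifiers[lab] = SVC(kernel="linear", C=self.C).fit(Xtr, ytr)
        # shrink each class's memory to its positive-class support vectors
        for lab, clf in self.classifiers.items():
            sv = clf.support_vectors_[clf.dual_coef_[0] > 0]
            if len(sv):
                self.memory[lab] = sv

    def predict(self, X):
        """Assign each sample to the class whose binary SVM scores highest."""
        labels = list(self.classifiers)
        scores = np.column_stack(
            [self.classifiers[l].decision_function(X) for l in labels])
        return [labels[i] for i in scores.argmax(axis=1)]
```

In this sketch, adding a class never touches the full history of raw samples: only the retained support vectors of earlier classes serve as negatives, which is one simple way to trade a small accuracy loss for a large memory reduction.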
dc.description.provenance | Made available in DSpace on 2021-06-15T13:30:18Z (GMT). No. of bitstreams: 1 ntu-105-R02943127-1.pdf: 4730418 bytes, checksum: 783d3dec94dbc6dbe280e06aad6d4c61 (MD5) Previous issue date: 2016 | en |
dc.description.tableofcontents | 1 Introduction 1
1.1 Introduction 1
1.2 Motivation 3
1.3 Related Work on Learning for Intelligent System 5
1.4 Thesis Organization 9
2 Challenges of Incremental Learning Systems 11
2.1 Introduction 11
2.2 Overview of incremental learning strategy 13
2.2.1 Distance metric learning approach 14
2.2.2 One-versus-Rest extension approach 15
2.2.3 Ensemble classifier approach 17
2.3 Problems in Incremental Learning 18
2.4 Introduction to online learning 20
2.5 Conclusion 21
3 Proposed Robust Incremental Learning System 23
3.1 System Overview 23
3.2 Introduction to the basic learning framework 24
3.2.1 Introduction to Support Vector Machine 25
3.2.2 SVM Training 27
3.3 Proposed new-class incremental learning framework 29
3.3.1 Libsvm vs Liblinear 32
3.4 Experiment Results 34
3.5 Conclusion 38
4 Proposed Online Learning and Hardware Architecture Design 39
4.1 Introduction 39
4.2 Proposed Online Incremental Learning System 40
4.2.1 Motivation 41
4.2.2 Learning Vectors 43
4.2.3 Online training 44
4.2.4 Update 45
4.3 Experiment Results 46
4.4 Architecture Design 50
4.5 Conclusion 51
5 Conclusion 57
Biography 59 | |
dc.language.iso | en | |
dc.title | 用於影像辨識之擴增學習系統演算法與架構設計 | zh_TW |
dc.title | Algorithm and Architecture Design of Incremental Learning System for Visual Recognition | en |
dc.type | Thesis | |
dc.date.schoolyear | 104-1 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 賴永康(Yeong-Kang Lai),陳美娟(Mei-Juan Chen),黃朝宗(Chao-Tsung Huang) | |
dc.subject.keyword | incremental learning, support vector machine, machine learning, online learning | zh_TW |
dc.subject.keyword | incremental learning, new category learning, online learning | en |
dc.relation.page | 63 | |
dc.rights.note | Authorized with compensation (有償授權) | |
dc.date.accepted | 2016-02-04 | |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
dc.contributor.author-dept | Graduate Institute of Electronics Engineering | zh_TW |
Appears in Collections: | Graduate Institute of Electronics Engineering
Files in this item:
File | Size | Format | |
---|---|---|---|
ntu-105-1.pdf (currently not authorized for public access) | 4.62 MB | Adobe PDF |
Except where otherwise noted, all items in this repository are protected by copyright, with all rights reserved.