Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/56069
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 徐宏民(Winston H. Hsu) | |
dc.contributor.author | Cheng-Yu Huang | en |
dc.contributor.author | 黃正宇 | zh_TW |
dc.date.accessioned | 2021-06-16T05:14:33Z | - |
dc.date.available | 2019-09-02 | |
dc.date.copyright | 2014-09-02 | |
dc.date.issued | 2014 | |
dc.date.submitted | 2014-08-18 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/56069 | - |
dc.description.abstract | In recent years, flower recognition has received increasing attention as a way to help people learn botanical knowledge about flowers. With the prevalence of mobile devices, we developed a real-time flower recognition system. Traditional flower recognition frameworks, when applied directly on mobile devices, can be limited by slow data transmission over low-bandwidth wireless networks and by the need for enough training data to build a good recognition system. We use compact feature representations to reduce the amount of data transmitted and achieve real-time flower recognition, and we automatically add photos from the web into the system to learn better flower classifiers and obtain more training data. To evaluate the performance of our system, we use a commonly used flower dataset together with about ten thousand images downloaded from Flickr to test the system's accuracy and speed. | zh_TW |
dc.description.abstract | Flower (plant) recognition has gained much attention recently for helping users automatically identify flowers (plants). Given the convenience of mobile devices, we develop a real-time mobile-based flower recognition system. Applying traditional flower recognition frameworks to the mobile environment suffers from (1) the bottleneck of transmitting the query image from the mobile device to the recognition server over a limited wireless network and (2) insufficient training images for building strong classifiers. We propose a mobile-based detection framework that reduces the transmission time, together with a training-data expansion framework that crawls training images from the web automatically. We evaluate our mobile-based recognition framework on a standard dataset, showing performance competitive with existing methods and fast recognition in the mobile environment. To evaluate the effectiveness of expanding the training images from the web, we collect 10k flower images from Flickr and show a significant improvement after expansion. (An illustrative code sketch of this training-data expansion idea appears after the metadata record below.) | en |
dc.description.provenance | Made available in DSpace on 2021-06-16T05:14:33Z (GMT). No. of bitstreams: 1 ntu-103-R01944016-1.pdf: 4714551 bytes, checksum: 04a479b51d4f5d3d37aba07f9695c227 (MD5) Previous issue date: 2014 | en |
dc.description.tableofcontents | Acknowledgements; Abstract (Chinese); Abstract; 1 Introduction; 2 Related Work; 2.1 Flower Recognition; 2.2 Semi-Supervised Learning; 2.2.1 Self-training; 2.2.2 Co-training; 3 Flora - Mobile APP for Flower Recognition; 3.1 System Overview; 3.2 Off-line Process; 3.3 On-line Process; 3.4 Content Management; 3.5 Mobile Interface Design and APP; 4 Automatically Expanding Training Datasets from Web; 4.1 The Challenges of Utilizing the Web Resource; 4.2 Multi-modality in the Flower Recognition; 4.3 Self-training and Co-training in the Flower Recognition; 4.4 The Expanding Training Data Process Overview; 4.5 Image Preprocessing; 4.6 Opponent SURF; 4.7 Training Process; 5 Experiments; 5.1 Datasets; 5.2 Retrieval Performance of the Flower Recognition System; 5.3 Performance Improvement After the Expanding Training Data; 6 Conclusions and Future Work; Bibliography | |
dc.language.iso | zh-TW | |
dc.title | 利用網路自動擴增訓練影像資料庫以提升行動裝置上花卉辨識之效能 | zh_TW |
dc.title | Augmenting Mobile-Based Flower Recognition by Automatically Expanding Training Datasets from Web | en |
dc.type | Thesis | |
dc.date.schoolyear | 102-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 陳文進(Wen-Chin Chen),余能豪(Neng-Hao Yu) | |
dc.subject.keyword | 電腦視覺, 半監督式學習法 | zh_TW |
dc.subject.keyword | Computer vision, Semi-supervised learning | en |
dc.relation.page | 30 | |
dc.rights.note | Authorized with compensation (有償授權) | |
dc.date.accepted | 2014-08-18 | |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
dc.contributor.author-dept | Graduate Institute of Networking and Multimedia | zh_TW |
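
The abstracts above describe expanding the flower classifiers' training data by automatically absorbing confidently recognized web images, and the table of contents names self-training and co-training as the semi-supervised techniques involved. Below is a minimal, hypothetical sketch of a generic self-training loop of that kind; it is not the thesis's actual implementation. It assumes scikit-learn's `LinearSVC` as the classifier and assumes that feature vectors (for example, descriptors such as the Opponent SURF features mentioned in the table of contents) have already been extracted into NumPy arrays; the names `self_train`, `rounds`, and `threshold` are illustrative only.

```python
# Minimal self-training sketch (illustrative only, not the thesis's code).
# Confidently classified web images are moved into the labeled training set
# and the classifier is retrained for a fixed number of rounds.
import numpy as np
from sklearn.svm import LinearSVC


def self_train(X_labeled, y_labeled, X_web, rounds=5, threshold=1.0):
    """Expand (X_labeled, y_labeled) with confident predictions on X_web."""
    X, y = X_labeled.copy(), y_labeled.copy()
    pool = X_web.copy()
    clf = LinearSVC()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        clf.fit(X, y)
        scores = clf.decision_function(pool)
        if scores.ndim == 1:
            # Binary case: the sign picks the class, the magnitude is the confidence.
            confidence = np.abs(scores)
            predicted = np.where(scores > 0, clf.classes_[1], clf.classes_[0])
        else:
            # One-vs-rest multiclass: take the best-scoring class per sample.
            confidence = scores.max(axis=1)
            predicted = clf.classes_[scores.argmax(axis=1)]
        keep = confidence >= threshold
        if not keep.any():
            break
        # Absorb the confident web images as pseudo-labeled training data.
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, predicted[keep]])
        pool = pool[~keep]
    clf.fit(X, y)
    return clf, X, y
```

For example, with `X_labeled` of shape `(n_labeled, d)`, labels `y_labeled`, and unlabeled web features `X_web` of shape `(n_web, d)`, calling `self_train(X_labeled, y_labeled, X_web)` returns the retrained classifier together with the expanded training set. The `threshold` on the SVM decision values controls how aggressively web images are absorbed: a higher threshold adds fewer, but more reliably pseudo-labeled, images per round.
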
Appears in Collections: | Graduate Institute of Networking and Multimedia
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-103-1.pdf (access currently restricted) | 4.6 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.