Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72001
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳世芳 | |
dc.contributor.author | Yu-Ting Chen | en |
dc.contributor.author | 陳昱婷 | zh_TW |
dc.date.accessioned | 2021-06-17T06:18:34Z | - |
dc.date.available | 2028-08-20 | |
dc.date.copyright | 2018-08-21 | |
dc.date.issued | 2018 | |
dc.date.submitted | 2018-08-20 | |
dc.identifier.citation | 行政院農委會。2018。農業統計月報。台北:行政院農委會。網址:http://agrstat.coa.gov.tw/sdweb/public/book/Book.aspx。上網日期:2018-08-13。
行政院農委會。2018。農產品生產面積統計。台北:行政院農委會。網址:http://agrstat.coa.gov.tw/sdweb/public/inquiry/InquireAdvance.aspx。上網日期:2018-08-13。
行政院農委會。2018。農畜產品生產成本統計。台北:行政院農委會。網址:http://agrstat.coa.gov.tw/sdweb/public/inquiry/InquireAdvance.aspx。上網日期:2018-08-13。
吳雪梅、張富貴、呂敬堂。2013。基於圖像顏色信息的茶葉嫩葉識別方法研究。中國農業機械學報 33(6):584-589。
吳雪梅、唐仙、張富貴、顧金梅。2015。基於K-means聚類法的茶葉嫩芽識別研究。中國農業機械學報 36(5)。
茶業改良場。2018。茶葉機械近年研發成果。台北:行政院農委會。網址:https://www.tres.gov.tw/view.php?catid=1677。上網日期:2018-06-20。
張浩、陳勇、汪巍、張國路。2014。基於主動計算機視覺的茶葉採摘定位技術。中國農業機械學報 45(9)。
聯合國農業與糧食組織。2018。網址:http://www.fao.org/faostat/en/#data/QC。上網日期:2018-08-13。
Badrinarayanan, V., Kendall, A., & Cipolla, R. (2017). SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12), 2481-2495.
Baruah, P. (2015). Types of tea, value addition and product diversification of Indian tea. Proc. First Int. Conf. Tea Science and Development, pp. 151-159.
Chen, J., Chen, Y., Jin, X., Che, J., Gao, F., & Li, N. (2015). Research on a parallel robot for green tea flushes plucking. Proc. 5th Int. Conf. on Education, Management, Information and Medicine, pp. 22-26. Shenyang, China.
Dai, J., Li, Y., He, K., & Sun, J. (2016). R-FCN: Object detection via region-based fully convolutional networks. Proc. 2016 Conf. Neural Information Processing Systems (NIPS), pp. 379-387.
Felzenszwalb, P. F., Girshick, R. B., McAllester, D., & Ramanan, D. (2010). Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9), 1627-1645.
Frey, P., Björn, E., & Harald, D. (2011). Development of artificial neural network models for sorption chillers. Proc. 30th Conf. ISES Solar World Congress 2011, Kassel, Germany.
Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 580-587.
Girshick, R. (2015). Fast R-CNN. Proc. IEEE Int. Conf. on Computer Vision, pp. 1440-1448.
Han, Y., Xiao, H., Qin, G., Song, Z., Ding, W., & Mei, S. (2014). Developing situations of tea plucking machine. Engineering, 6(06), 268.
Hebb, D. O. (1949). The Organization of Behavior: A Neuropsychological Theory. New York: Wiley.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Proc. 2012 Conf. Neural Information Processing Systems (NIPS), pp. 1097-1105.
LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proc. IEEE, 86(11), pp. 2278-2324.
Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016). SSD: Single shot multibox detector. Proc. European Conf. on Computer Vision, pp. 21-37. Springer, Cham.
Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 3431-3440.
McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4), 115-133.
Minsky, M. L., & Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: MIT Press.
Nair, V., & Hinton, G. E. (2010). Rectified linear units improve restricted Boltzmann machines. Proc. 27th Int. Conf. on Machine Learning (ICML-10), pp. 807-814.
Pound, M. P., Atkinson, J. A., Townsend, A. J., Wilson, M. H., Griffiths, M., Jackson, A. S., ... & Pridmore, T. P. (2016). Deep machine learning provides state-of-the-art performance in image-based plant phenotyping. GigaScience.
Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 779-788.
Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. Proc. 28th Int. Conf. on Neural Information Processing Systems (NIPS), pp. 91-99. Montreal, Canada.
Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533.
Sahoo, S., & Jha, M. K. (2013). Groundwater-level prediction using multiple linear regression and artificial neural network techniques: A comparative assessment. Hydrogeology Journal, 21(8), 1865-1887.
Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85-117.
Sharma, S., Sarangi, S., & Pappula, S. (2016). A framework for performance evaluation of plucking activity in tea. Proc. Global Humanitarian Technology Conference (GHTC), pp. 14-21.
Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. Proc. Int. Conf. on Learning Representations.
Siniscalchi, S. M., Svendsen, T., & Lee, C.-H. (2014). An artificial neural network approach to automatic speech processing. Neurocomputing, 140(22), 326-338.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... & Rabinovich, A. (2015). Going deeper with convolutions. Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 1-9.
Thangavel, S. K., & Murthi, M. (2017). A semi-automated system for smart harvesting of tea leaves. Proc. 4th Int. Conf. on Advanced Computing and Communication Systems (ICACCS), pp. 1-10. Coimbatore, India.
Ticknor, J. L. (2013). A Bayesian regularized artificial neural network for stock market forecasting. Expert Systems with Applications, 40(14), 5501-5506.
Topuz, A., Dinçer, C., Torun, M., Tontul, I., Nadeem, H. Ş., Haznedar, A., & Özdemir, F. (2014). Physicochemical properties of Turkish green tea powder: Effects of shooting period, shading, and clone. Turkish Journal of Agriculture and Forestry, 38(2), 233-241.
Torralba, A., Russell, B. C., & Yuen, J. (2010). LabelMe: Online image annotation and applications. Proc. IEEE, 98(8), 1467-1484.
Tzutalin. (2015). LabelImg. Git code: https://github.com/tzutalin/labelImg
Wang, J., Zeng, X., & Liu, J. (2011). Three-dimensional modeling of tea-shoots using images and models. Sensors, 11(4), 3803-3815.
Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). How transferable are features in deep neural networks? Proc. Conf. Advances in Neural Information Processing Systems, pp. 3320-3328.
Zeiler, M. D., & Fergus, R. (2014). Visualizing and understanding convolutional networks. Proc. 13th ECCV, pp. 818-833. Zurich, Switzerland. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72001 | - |
dc.description.abstract | 現行茶葉採收方式主要分為手採與機器採收兩大類,其中以機器進行採收較手採之方式,可增進12到15倍的效率,然機器採收無法避免造成破葉和老葉的收集,亦無法達成特定位置(如:一心二葉、一心三葉或一心四葉)的採收需求。因此對於精品茶市場,仍須以人力需求大的手採為主,但於採收季時面臨勞動力缺乏問題,因此為兼顧採收效率與特定採收位置,本研究致力於開發茶葉採收點辨識。
本研究旨在使用深度學習於偵測嫩葉和辨識其採摘點,使用更快速區域卷積神經網路(Faster Region-based Convolutional Neural Network, Faster R-CNN),搭配ZF模型,達成偵測嫩葉區域位置資訊,再透過全卷積網路(Fully Convolutional Network, FCN)辨識出欲採之區域,測試其三種結構:FCN-32s、FCN-16s和FCN-8s,以FCN-16s表現最佳,最後以影像處理方法決定其二維採收座標。選用台茶8號和台茶18號為訓練樣本,Faster R-CNN其測試平均精確度(Average Precision)結果獲得86.34%,FCN測試結果,其平均準確度和平均交集與聯集比(Intersection over Union),分別達84.91%和70.72%。同時經過測試,其所使用之方法同時也能應用於未被訓練之茶種之上,如:青心烏龍、台茶12號和台茶13號,同時影像並不會受其相機參數影響,亦可達到辨識之結果,訓練之模型提供了一心二葉採摘點位置辨識之成果。 | zh_TW |
dc.description.abstract | Tea (Camellia sinensis) is harvested in two main ways: hand plucking and machine plucking. Although a mechanical tea harvester boosts harvesting efficiency by 12 to 15 times compared with hand plucking, it cannot avoid collecting broken or old leaves, nor can it pluck at a specific position (e.g., one tip with two leaves, one tip with three leaves, or one tip with four leaves). High-value tea is therefore usually harvested by hand, which is labor intensive, yet tea farmers face a labor shortage during the harvest season. To achieve both efficient harvesting and a specific plucking position, this study focused on developing an algorithm to identify the plucking points of tea shoots.
This study proposed to automatically identify and localize tea plucking points using deep learning. First, a faster region-based convolutional neural network (Faster R-CNN) with the ZF model was applied to detect the regions of tea shoots. Second, a fully convolutional network (FCN) was applied to identify the plucking region; after comparing three FCN structures (FCN-32s, FCN-16s, and FCN-8s), FCN-16s performed best and was selected. Finally, image processing was applied to obtain the two-dimensional plucking coordinates. Tea leaf images of Taiwan Tea Experiment Station no. 8 and no. 18 were acquired and used to develop the Faster R-CNN and FCN models. The Faster R-CNN model achieved a testing average precision (AP) of 86.34%, and the FCN achieved an average accuracy and an average intersection-over-union (IoU) of 84.91% and 70.72%, respectively. Testing further showed that these methods achieved the same performance when applied to varieties not used for training (e.g., Chin Shin oolong and Taiwan Tea Experiment Station no. 12 and no. 13), and that identification was not affected by the camera parameters used for image acquisition. The developed model presents a promising result for providing the plucking position of the specified tea shoot. | en
dc.description.provenance | Made available in DSpace on 2021-06-17T06:18:34Z (GMT). No. of bitstreams: 1 ntu-107-R05631031-1.pdf: 5763336 bytes, checksum: 8325a456792a4876a44bf25f3454f09b (MD5) Previous issue date: 2018 | en |
dc.description.tableofcontents | 目錄
致謝 i
摘要 ii
Abstract iii
目錄 v
圖目錄 vii
表目錄 ix
中英文名詞暨縮寫對照 x
第一章 緒論 1
1.1 研究背景 1
1.2 研究目的 3
第二章 文獻回顧 4
2.1 茶葉採收背景 4
2.1.1 採收部位 4
2.1.2 採收方式 5
2.1.3 機器採收相關研究 7
2.2 深度學習(deep learning, DL) 9
2.2.1 深度學習簡介 9
2.2.2 卷積神經網路(convolutional neural network, CNN) 11
2.3 物體偵測(object detection) 19
2.3.1 滑動窗格 20
2.3.2 二階段偵測 20
2.3.3 一階段偵測 23
2.4 語義分割(semantic segmentation) 25
2.4.1 反卷積(deconvolution) 25
2.4.2 編碼-解碼(encoder-decoder) 26
第三章 材料與方法 27
3.1 影像收集 27
3.2 實驗設備 27
3.3 實驗流程 28
3.4 Faster R-CNN 29
3.4.1 影像樣本 29
3.4.2 卷積網路(convolution network) 30
3.4.3 區域建議網路(region proposal network, RPN) 30
3.4.4 Fast R-CNN 32
3.4.5 損失函數 33
3.4.6 訓練細節 34
3.4.7 評估方式 35
3.5 全卷積網路(fully convolutional network, FCN) 36
3.5.1 影像樣本 36
3.5.2 模型結構 37
3.5.3 訓練方式 37
3.5.4 評估方式 39
3.6 影像處理 40
第四章 結果與討論 41
4.1 Faster R-CNN於嫩葉位置偵測結果 41
4.1.1 訓練結果 41
4.1.2 影像訓練數量比較 45
4.2 FCN於採摘區域辨識結果 47
4.3 整體流程評估 50
4.3.1 影像結果 50
4.3.2 不同取像參數 56
4.3.3 茶種適用模型評估 58
第五章 結論與建議 62
5.1 結論 62
5.2 建議 63
參考文獻 64 | |
dc.language.iso | zh-TW | |
dc.title | 深度卷積神經網路於茶葉採摘點辨識之應用 | zh_TW |
dc.title | Application of Deep Convolutional Neural Networks for the Identification of Tea Plucking Points | en |
dc.type | Thesis | |
dc.date.schoolyear | 106-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 林達德,陳右人,巫嘉昌,顏炳郎 | |
dc.subject.keyword | 更快速區域卷積神經網路(Faster R-CNN),全卷積網路(FCN),一心二葉 | zh_TW |
dc.subject.keyword | Faster R-CNN,FCN,one bud with two leaves | en |
dc.relation.page | 67 | |
dc.identifier.doi | 10.6342/NTU201804033 | |
dc.rights.note | Paid authorization | |
dc.date.accepted | 2018-08-20 | |
dc.contributor.author-college | 生物資源暨農學院 | zh_TW |
dc.contributor.author-dept | 生物產業機電工程學研究所 | zh_TW |
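The abstract above evaluates the FCN's plucking-region output with average pixel accuracy and intersection-over-union (IoU). As an illustration of the IoU metric only — the function name `mask_iou` and the list-of-lists binary-mask encoding are assumptions for this sketch, not code from the thesis:

```python
def mask_iou(pred, target):
    """Pixel-wise intersection-over-union between two binary masks,
    each given as a list of equal-length rows of 0/1 values."""
    inter = sum(p & t for pr, tr in zip(pred, target) for p, t in zip(pr, tr))
    union = sum(p | t for pr, tr in zip(pred, target) for p, t in zip(pr, tr))
    # Convention: two empty masks count as a perfect match.
    return inter / union if union else 1.0

# Two 2x2 masks that overlap in a single pixel:
# intersection = 1 pixel, union = 3 pixels, so IoU = 1/3.
pred = [[1, 1], [0, 0]]
target = [[1, 0], [1, 0]]
print(mask_iou(pred, target))
```

The reported average IoU of 70.72% would correspond to this ratio averaged over the test images.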
Appears in Collections: | 生物機電工程學系
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-107-1.pdf Currently not authorized for public access | 5.63 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.