Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72001
Title: | Application of Deep Convolutional Neural Networks for the Identification of Tea Plucking Points (深度卷積神經網路於茶葉採摘點辨識之應用) |
Author: | Yu-Ting Chen (陳昱婷) |
Advisor: | 陳世芳 |
Keywords: | Faster Region-based Convolutional Neural Network (Faster R-CNN), Fully Convolutional Network (FCN), one bud with two leaves |
Year of Publication: | 2018 |
Degree: | Master |
Abstract: | Tea (Camellia sinensis) is harvested in two main ways: hand plucking and machine plucking. Although mechanical harvesters improve harvesting efficiency by 12 to 15 times compared with hand plucking, they cannot avoid collecting broken or old leaves, nor can they harvest at specific positions (e.g., one bud with two leaves, one bud with three leaves, or one bud with four leaves). High-value tea is therefore still harvested by hand, which is labor intensive, and tea farmers face labor shortages during the harvest season. To balance harvesting efficiency with position-specific plucking, this study developed an algorithm to identify the plucking points of tea shoots.

The study applied deep learning to detect young tea shoots and identify their plucking points. First, a Faster Region-based Convolutional Neural Network (Faster R-CNN) with the ZF model was used to detect the regions containing tea shoots. Second, a Fully Convolutional Network (FCN) was used to segment the region to be plucked; among the three structures compared (FCN-32s, FCN-16s, and FCN-8s), FCN-16s performed best. Finally, image processing was applied to determine the two-dimensional plucking coordinate. Images of Taiwan Tea Experiment Station (TTES) No. 8 and No. 18 were acquired to train the Faster R-CNN and FCN models. The Faster R-CNN model achieved a testing average precision (AP) of 86.34%, and the FCN achieved an average accuracy of 84.91% and an average intersection over union (IoU) of 70.72%. The same methods also performed well on varieties not used for training (e.g., Chin Shin oolong, TTES No. 12, and TTES No. 13), and images acquired with different camera parameters were still successfully identified. The developed models provide promising results for locating the plucking point of one bud with two leaves. |
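The last stage of the pipeline, turning the FCN's segmentation output into a two-dimensional plucking coordinate, can be illustrated with a minimal sketch. The thesis does not state the exact image-processing operations it uses, so the centroid heuristic and the pixel-wise IoU metric below are assumptions added for illustration only, not the author's implementation.

```python
import numpy as np


def plucking_point_from_mask(mask: np.ndarray):
    """Return an (x, y) coordinate from a binary plucking-region mask.

    Assumption: the plucking point is approximated by the centroid of the
    segmented region; the thesis's own image-processing step may differ.
    """
    ys, xs = np.nonzero(mask)          # pixel coordinates of the segmented region
    if xs.size == 0:                   # FCN found no plucking region
        return None
    return int(xs.mean()), int(ys.mean())


def intersection_over_union(pred: np.ndarray, truth: np.ndarray) -> float:
    """Pixel-wise IoU between a predicted mask and a ground-truth mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0                      # both masks empty: treat as perfect match
    return float(np.logical_and(pred, truth).sum() / union)


if __name__ == "__main__":
    # Toy 5x5 mask standing in for an FCN-16s output inside a detected shoot box.
    mask = np.zeros((5, 5), dtype=np.uint8)
    mask[1:4, 2:5] = 1
    print(plucking_point_from_mask(mask))        # (3, 2)
    print(intersection_over_union(mask, mask))   # 1.0
```

The centroid is a reasonable stand-in for compact segmented regions; in practice the plucking point lies on the stem below the bud, so the actual procedure likely combines the mask with additional geometric rules.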
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72001 |
DOI: | 10.6342/NTU201804033 |
Full-Text Authorization: | Paid authorization |
Appears in Collections: | 生物機電工程學系 (Department of Biomechatronics Engineering) |
Files in This Item:
File | Size | Format |
---|---|---|
ntu-107-1.pdf (currently not authorized for public access) | 5.63 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.