Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/64727
Title: | 茶園機械之導航輔助暨植被監測影像系統之開發 Developing a Guiding and Monitoring System for Tea Plucking Machine |
Authors: | Yu-Kai Lin 林愉凱 |
Advisor: | 陳世芳(Shih-Fang Chen) |
Keyword: | semantic segmentation, deep learning, automatic navigation, image processing, tea plucking machine |
Publication Year : | 2020 |
Degree: | Master's |
Abstract: | Labor shortage is a critical issue in many industries, and it is especially severe in agricultural production such as the tea industry. In recent years, riding-type tea plucking machines were imported from Japan to provide an efficient solution for tea harvesting. However, operating these machines demands high-level driving skill and experience: improper operation may damage the tea trees, cause mechanical failure, and degrade the quality of the harvested leaves. A real-time image-based navigation system can potentially mitigate these difficulties. In addition, while the plucking machine is in operation, tea canopy images can be captured to build a monitoring system that reports the growth status of the tea trees, assessed in the HSV color space. In this study, semantic segmentation models ‒ fully convolutional networks (FCN-8s, FCN-16s, and FCN-32s) and ENet ‒ were applied to detect tea rows and obstacles in the field and to develop guiding lines for maneuvering the plucking machine. ENet outperformed the other models in overall performance, with a mean intersection over union (mean IU) of 0.734, a mean accuracy of 0.941, and a detection time of 0.176 s.
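The mean IU figure reported above can be computed from predicted and ground-truth label maps. The sketch below is a minimal illustration of the metric (not the thesis's evaluation code), using flat label lists in place of full segmentation masks:

```python
def mean_iou(pred, truth, n_classes):
    """Mean intersection-over-union (mean IU) averaged over classes.

    pred, truth: flat lists of per-pixel class labels.
    Classes absent from both prediction and ground truth are skipped.
    """
    ious = []
    for c in range(n_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy example with two classes over four pixels:
# class 0 -> IoU 1/2, class 1 -> IoU 2/3, mean = 7/12.
score = mean_iou([0, 0, 1, 1], [0, 1, 1, 1], 2)
```

In practice the same computation runs over every pixel of every test image, accumulated per class before averaging.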
Based on the segmentation result from ENet, the guiding lines calculated by the Hough transform produced an average angle bias of 5.92° and an average distance bias of 11.30 cm. In parallel, three image-stitching methods ‒ scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and binary robust invariant scalable keypoints (BRISK) ‒ were compared for building panoramic tea canopy images for monitoring purposes. There were no significant differences in stitching quality among the three methods, but SURF delivered results in the shortest processing time: 1.86 s for tea-row images and 1.17 s for end-of-tea-row images. The monitoring system presents a GPS driving path, single-location canopy images with growth status, and a panoramic canopy image built by image stitching. The cosine similarity (CS) of the panoramic tea canopy images reached 90.50%, 95.94%, and 90.91% for tea-row, end-of-tea-row, and sparse-tea-row images, respectively. This study successfully developed a real-time guiding system that helps the operator keep the machine centered in the tea row, as well as a monitoring system for tea field management. |
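The guiding line is extracted from the segmented tea-row mask with a Hough transform. As a hedged illustration of the underlying voting scheme (the thesis likely relied on a library implementation such as OpenCV's; the accumulator resolution and toy mask below are invented for the example):

```python
import math

def hough_strongest_line(points, n_theta=180):
    """Minimal Hough-transform line vote.

    Each foreground pixel (x, y) votes for every line
    rho = x*cos(theta) + y*sin(theta); the (theta, rho) cell with the
    most votes is the dominant line, e.g. a tea-row centerline.
    Returns (theta in radians, rho, vote count).
    """
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = t * math.pi / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    (t_best, rho_best), votes = max(acc.items(), key=lambda kv: kv[1])
    return t_best * math.pi / n_theta, rho_best, votes

# A vertical run of pixels at x = 5 (a perfectly straight "row")
# collects all ten votes on the line theta = 0, rho = 5.
theta, rho, votes = hough_strongest_line([(5, y) for y in range(10)])
```

The detected (theta, rho) pair, compared against the machine's current heading and lateral position, yields the angle and distance biases reported above.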
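The stitching-quality score is a cosine similarity between corresponding image regions. A minimal sketch follows, treating flattened pixel intensities as plain vectors (an assumption about the exact comparison the thesis used):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length vectors, e.g. flattened
    pixel intensities from the overlap of a stitched panorama and a
    reference view. 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Identical patches score 1.0 (reported as 100% in the abstract);
# orthogonal vectors score 0.0.
same = cosine_similarity([1, 2, 3], [1, 2, 3])
```

Scores such as 90.50% then correspond to a cosine similarity of about 0.905 between the stitched panorama and the reference imagery.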
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/64727 |
DOI: | 10.6342/NTU202000600 |
Fulltext Rights: | Paid authorization |
Appears in Collections: | Department of Biomechatronics Engineering |
Files in This Item:
File | Size | Format
---|---|---
ntu-109-1.pdf (Restricted Access) | 2.8 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.