Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/64727
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳世芳(Shih-Fang Chen) | |
dc.contributor.author | Yu-Kai Lin | en |
dc.contributor.author | 林愉凱 | zh_TW |
dc.date.accessioned | 2021-06-16T22:58:08Z | - |
dc.date.available | 2020-09-25 | |
dc.date.copyright | 2020-09-25 | |
dc.date.issued | 2020 | |
dc.date.submitted | 2020-02-25 | |
dc.identifier.citation |
Bertozzi, M., and Broggi, A. (1997). Vision-based vehicle guidance. Computer, 30(7), 49-55.
Bay, H., Tuytelaars, T., and Van Gool, L. (2006). SURF: Speeded up robust features. Proceedings of the European Conference on Computer Vision (ECCV), 404-417.
Bakken, M., Moore, R. J., and From, P. (2019). End-to-end learning for autonomous crop row-following. Proceedings of the IFAC Conference on Sensing, Control and Automation Technologies for Agriculture, 52(30), 102-107.
Council of Agriculture. (2018). Agricultural statistics. Retrieved from https://agrstat.coa.gov.tw/sdweb/public/trade/tradereport.aspx (accessed April 2019).
Dai, A., and Nießner, M. (2018). 3DMV: Joint 3D-multi-view prediction for 3D semantic scene segmentation. Proceedings of the European Conference on Computer Vision (ECCV), 452-468.
Gottschalk, R., Burgos-Artizzu, X. P., Ribeiro, A., Pajares, G., and Sainchez, M. A. (2008). Real-time image processing for the guidance of a small agricultural field inspection vehicle. Proceedings of the IEEE International Conference on Mechatronics and Machine Vision in Practice, 493-498.
Harris, C. G., and Stephens, M. (1988). A combined corner and edge detector. Proceedings of the Alvey Vision Conference, 147-151.
Han, S., Zhang, Q., Ni, B., and Reid, J. F. (2004). A guidance directrix approach to vision-based vehicle guidance systems. Computers and Electronics in Agriculture, 43(3), 179-195.
Huang, W. Y., Wu, J. C., Liu, M. C., and Zhang, Z. H. (2016). Tea plucking machine instead of hand picking. Tea Industry News, 96, 3-5.
Huang, W. Y., Wu, J. C., Lin, H. C., Su, Y. S., Liu, M. C., and Zhang, Z. H. (2017). Application of riding-type tea plucking machine in flat field. Tea Love Bimonthly, 89, 1-4.
Jiang, G., Wang, X., Wang, Z., and Liu, H. (2016). Wheat rows detection at the early growth stage based on Hough transform and vanishing point. Computers and Electronics in Agriculture, 123, 211-223.
Jia, Y., Su, Z., Zhang, Q., Zhang, Y., Gu, Y., and Chen, Z. (2015). Research on UAV remote sensing image mosaic method based on SIFT. Signal Processing, Image Processing and Pattern Recognition, 8(11), 365-374.
Kuter, N., and Kuter, S. (2010). Accuracy comparison between GPS and DGPS: A field study at METU campus. Italian Journal of Remote Sensing, 42(3), 3-14.
Leutenegger, S., Chli, M., and Siegwart, R. (2011). BRISK: Binary robust invariant scalable keypoints. Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2548-2555.
Long, J., Shelhamer, E., and Darrell, T. (2015). Fully convolutional networks for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3431-3440.
Mistry, S., and Patel, A. (2016). Image stitching using Harris feature detection. International Research Journal of Engineering and Technology (IRJET), 3(4), 2220-6.
Murali, V., Chiu, H. P., Samarasekera, S., and Kumar, R. T. (2017). Utilizing semantic visual landmarks for precise vehicle navigation. Proceedings of the IEEE International Conference on Intelligent Transportation Systems (ITSC), 1-8.
O'Mahony, N., Campbell, S., Krpalkova, L., Riordan, D., Walsh, J., Murphy, A., and Ryan, C. (2018). Deep learning for visual navigation of unmanned ground vehicles: A review. Proceedings of the IEEE Irish Signals and Systems Conference (ISSC), 1-6.
Pravenaa, S., and Menaka, R. (2016). A methodical review on image stitching and video stitching techniques. Applied Engineering Research, 11(5), 3442-3448.
Paszke, A., Chaurasia, A., Kim, S., and Culurciello, E. (2016). ENet: A deep neural network architecture for real-time semantic segmentation. arXiv:1606.02147.
Patra, S., Maheshwari, P., Yadav, S., Banerjee, S., and Arora, C. (2018). A joint 3D-2D based method for free space detection on roads. Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), 643-652.
Rosten, E., and Drummond, T. (2006). Machine learning for high-speed corner detection. Proceedings of the European Conference on Computer Vision (ECCV), 430-443.
Ribeiro, D., Mateus, A., Miraldo, P., and Nascimento, J. C. (2017). A real-time deep learning pedestrian detector for robot navigation. Proceedings of the IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 165-171.
Russell, B. C., Torralba, A., Murphy, K. P., and Freeman, W. T. (2008). LabelMe: A database and web-based tool for image annotation. International Journal of Computer Vision, 77(1-3), 157-173.
Sharma, V. S. (2004). Integration of agro-techniques for higher plucker productivity and lower harvesting costs. International Journal of Tea Science, 3(3-4).
Shalal, N., Low, T., McCarthy, C., and Hancock, N. (2013). A review of autonomous navigation systems in agricultural environments. Proceedings of the Society for Engineering in Agriculture (SEA) Conference, 22-25.
Siam, M., Elkerdawy, S., Jagersand, M., and Yogamani, S. (2017). Deep semantic segmentation for automated driving: Taxonomy, roadmap and challenges. Proceedings of the IEEE International Conference on Intelligent Transportation Systems (ITSC), 1-8.
Zhang, W., Li, X., Yu, J., Kumar, M., and Mao, Y. (2018). Remote sensing image mosaic technology based on SURF algorithm in agriculture. EURASIP Journal on Image and Video Processing, 2018(1), 85.
| |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/64727 | - |
dc.description.abstract | 近年來人口高齡化導致農業勞力短缺,於茶產業尤為嚴重。台灣從日本引進乘坐式採茶機,以高效率採茶方式解決勞力短缺問題,然而使用採茶機需高度技術與經驗,若操作不當可能會對茶樹造成損害,並導致機械故障,甚至影響茶葉採收品質。因此,開發採茶機之即時航向輔助功能,可望提升操作品質與茶葉品質;並於採茶機施作同時建立植被監測系統,進行生長情形監測,以提高其功能性。本研究使用深度卷積神經網路FCN-32s、FCN-16s、FCN-8s和ENet等語義分割架構辨識茶行所在及茶園中之障礙物,其中以ENet模型取得較佳之平均交疊率(mean intersection over union, mIoU)0.734、平均準確率(mean accuracy)0.94,及較快之運算時間0.176秒。完成物件分割後,再利用霍夫變換(Hough transform)建立行駛之航向輔助線,其平均誤差為5.92°和11.30公分。於植被監測系統中,利用HSV色彩空間判斷茶樹的生長狀況,並建立全景影像以觀察一定範圍的茶樹冠面,比較scale-invariant feature transform (SIFT)、speeded up robust features (SURF)與binary robust invariant scalable keypoints (BRISK)三種方法。三種方法的縫合效果差異不大,其中以SURF的匹配速度較快,在一般茶行和茶行盡頭兩種情形中,偵測時間分別為1.86和1.17秒。植被監測系統由GPS軌跡、單張茶樹冠面影像的生長情形和茶樹冠面全景影像呈現,全景影像由影像縫合技術建立。比較一般茶行、茶行盡頭和稀疏茶行三種情形,影像縫合分別達到90.50%、95.94%和90.91%的平均縫合相似度。本研究成功建立乘坐式採茶機之航向輔助導引,以協助其行駛路線維持於茶行中心;並於同次機械運作時收集植被生長狀態,提供包含行駛軌跡紀錄、位置與茶樹冠面生長影像、植被生長狀態判別等監測功能。透過上述兩大主要開發功能,以期提升茶園機械操作及生長管理之便利性。 | zh_TW |
dc.description.abstract | Labor shortage is a critical issue in many industries, especially in agricultural production. In recent years, riding-type tea plucking machines have been imported to provide a relatively efficient solution for tea harvesting; however, they demand a high level of driving skill. Improper operation may damage the tea trees, cause mechanical failure, and degrade the quality of the harvested tea leaves. A real-time image-based navigation system can potentially mitigate these difficulties. While the tea plucking machine is in operation, tea canopy images are also captured to build a monitoring system that reports the growth status of the tea trees. In this study, semantic segmentation models, fully convolutional networks (FCN-8s, FCN-16s, and FCN-32s) and ENet, were applied to detect obstacles and develop guiding lines for maneuvering the plucking machine. ENet outperformed the other models overall, with a mean intersection over union (mIoU) of 0.734, a mean accuracy of 0.941, and a detection time of 0.176 s. Based on the segmentation results from ENet, the guiding lines calculated by the Hough transform produced an average angle bias of 5.92° and an average distance bias of 11.30 cm. In parallel, the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and binary robust invariant scalable keypoints (BRISK) image stitching methods were compared for building panoramic tea canopy images for monitoring purposes. There were no significant differences in stitching quality among the three methods, but SURF delivered results in the shortest processing times of 1.86 s and 1.17 s for tea-row and end-of-tea-row images, respectively. The monitoring system consists of a driving path, single location images with growth status, and a panoramic image. The cosine similarity (CS) of the panoramic tea canopy images reached 90.50%, 95.94%, and 90.91% for tea-row, end-of-tea-row, and sparse-tea-row images, respectively. This study successfully developed a real-time guiding system that assists the machine operator in keeping the machine centered in the tea row, as well as a monitoring system for tea field management. | en |
dc.description.provenance | Made available in DSpace on 2021-06-16T22:58:08Z (GMT). No. of bitstreams: 1 ntu-109-R06631027-1.pdf: 2867502 bytes, checksum: f67ade82158c5a2b59f10e24462e1e08 (MD5) Previous issue date: 2020 | en |
dc.description.tableofcontents |
ACKNOWLEDGEMENT i
摘要 ii
ABSTRACT iv
LIST OF FIGURES viii
LIST OF TABLES ix
CHAPTER 1. Introduction 1
1.1 Background and Motivation 1
1.2 Objectives 3
CHAPTER 2. Literature Review 4
2.1. Guiding System Sensors 4
2.2. Image-based Guiding System 4
2.3. Deep Convolutional Neural Networks (DCNN) 5
2.4. Construction of Panoramic Images 6
CHAPTER 3. Materials and Methods 8
3.1. Experimental Design and Image Acquisition 8
3.2. Image Annotation 9
3.3. Tea Rows and Obstacle Segmentation Using DCNN 10
3.4. Guiding Line Identification 13
3.5. Model Performance Evaluation 14
3.6. Monitoring System 15
3.7. Image Stitching 16
3.8. Image Stitching Evaluation Methods 21
CHAPTER 4. Results and Discussion 22
4.1. The Performance of Tea Rows Segmentation and Obstacle Detection 22
4.2. The Performance of Navigation Line Estimation 24
4.3. The Performance of Panoramic Image 26
4.4. The Monitoring System 28
CHAPTER 5. Conclusion 31
REFERENCES 32
| |
dc.language.iso | en | |
dc.title | 茶園機械之導航輔助暨植被監測影像系統之開發 | zh_TW |
dc.title | Developing a Guiding and Monitoring System for Tea Plucking Machine | en |
dc.type | Thesis | |
dc.date.schoolyear | 108-2 | |
dc.description.degree | 碩士 | |
dc.contributor.oralexamcommittee | 林達德(Ta-Te Lin),顏炳郎(Ping-Lang Yen),劉天麟(Tian-Lin Liu),郭彥甫(Yan-Fu Kuo) | |
dc.subject.keyword | 航向輔助, 深度學習, 語義分割, 影像處理, 採茶機械 | zh_TW |
dc.subject.keyword | semantic segmentation, deep learning, automatic navigation, image processing, tea plucking machine | en |
dc.relation.page | 35 | |
dc.identifier.doi | 10.6342/NTU202000600 | |
dc.rights.note | 有償授權 | |
dc.date.accepted | 2020-02-26 | |
dc.contributor.author-college | 生物資源暨農學院 | zh_TW |
dc.contributor.author-dept | 生物機電工程學系 | zh_TW |
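The abstract describes estimating a guiding line from the ENet tea-row segmentation with a Hough transform. The following is a minimal NumPy sketch of that step, not the thesis code: the accumulator resolution, the synthetic vertical-row mask, and the function name `hough_line` are all assumptions for illustration.

```python
import numpy as np

def hough_line(mask, n_theta=180):
    """Find the dominant line in a binary mask with a standard Hough transform.

    Returns (rho, theta) of the accumulator peak, where a line satisfies
    x*cos(theta) + y*sin(theta) = rho.
    """
    h, w = mask.shape
    diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
    n_rho = 2 * diag + 1                          # 1-pixel rho resolution
    thetas = np.deg2rad(np.arange(n_theta))       # 0..179 degrees
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(mask)
    for x, y in zip(xs, ys):
        # Vote for every (rho, theta) pair consistent with this pixel.
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((r + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return rhos[i], thetas[j]

# Synthetic "tea row": a vertical line of mask pixels at column x = 20.
mask = np.zeros((60, 60), dtype=bool)
mask[:, 20] = True
rho, theta = hough_line(mask)
```

For a vertical line at x = 20 the peak lands at theta = 0 and rho = 20; the angle bias and lateral distance bias reported in the abstract would then be computed against the desired heading and the image center line.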
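The monitoring system judges tea growth status in the HSV color space. One plausible reading of that idea is the fraction of canopy pixels whose hue falls in a green band; the sketch below follows that reading, and the band limits, saturation threshold, and function name `green_ratio` are assumptions rather than the thesis's actual values.

```python
import colorsys
import numpy as np

def green_ratio(rgb):
    """Fraction of pixels whose HSV hue lies in an assumed green band.

    rgb: H x W x 3 array with channel values in [0, 1].
    """
    flat = np.asarray(rgb, dtype=float).reshape(-1, 3)
    hsv = np.array([colorsys.rgb_to_hsv(r, g, b) for r, g, b in flat])
    hues, sats = hsv[:, 0], hsv[:, 1]
    # Hue in [60°, 180°] (i.e. [1/6, 1/2] on colorsys's 0..1 scale) counts
    # as vegetation; low-saturation (grayish) pixels are ignored.
    green = (hues >= 1 / 6) & (hues <= 1 / 2) & (sats > 0.2)
    return float(green.mean())

# A uniformly green patch scores 1.0; a uniformly red patch scores 0.0.
green_img = np.zeros((4, 4, 3)); green_img[..., 1] = 1.0
red_img = np.zeros((4, 4, 3)); red_img[..., 0] = 1.0
```

A per-image score like this could be attached to each GPS-tagged canopy image to flag sparse or stressed sections of a tea row.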
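Stitching quality in the abstract is reported as average cosine similarity (90.50% to 95.94%). A minimal sketch of such a metric on flattened pixel vectors follows; framing the comparison as stitched output versus a reference image of the same size, and the name `stitch_similarity`, are assumptions.

```python
import numpy as np

def stitch_similarity(img_a, img_b):
    """Cosine similarity (percent) between two equally sized images,
    treating each image as one flattened pixel vector."""
    a = np.asarray(img_a, dtype=float).ravel()
    b = np.asarray(img_b, dtype=float).ravel()
    return 100.0 * float(a @ b) / float(np.linalg.norm(a) * np.linalg.norm(b))

# Identical images score 100%; orthogonal pixel vectors score 0%.
patch = np.arange(12.0).reshape(3, 4)
```

Because cosine similarity ignores overall vector magnitude, a uniformly brighter or darker stitch of the same scene still scores near 100%, which makes it a reasonable structural-agreement measure for panoramas.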
Appears in Collections: | 生物機電工程學系
Files in this item:
File | Size | Format | |
---|---|---|---|
ntu-109-1.pdf (currently not authorized for public access) | 2.8 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.