Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94581

| Title: | 應用影像庋用與機器學習於智慧害蟲監測系統資料品質之提升 Enhancing Data Quality in Intelligent Insect Pest Monitoring System with Image Curation and Machine Learning |
| Author: | 鄧喬尹 Chiao-Yin Teng |
| Advisor: | 林達德 Ta-Te Lin |
| Keywords: | integrated pest management, image curation, object detection, insect pest classification, spatiotemporal analysis |
| Publication Year: | 2024 |
| Degree: | Master's |
| Abstract: | In crop production, insect pests are regarded as one of the greatest threats to agriculture: they impair crop growth, reduce yields, and cause substantial economic losses. Effective integrated pest management (IPM) therefore depends on precise information about the species and quantities of insect pests present in the crop cultivation environment. In our prior work, we developed AIoT imaging devices that apply deep learning to automatically count and classify insect pests in images acquired from sticky paper traps. Despite this initial progress, interference from dust and water, variations in lighting conditions, and blurred images can still lead to the misclassification and miscounting of insect pests. To ensure the effectiveness of IPM, this research aims to develop a holistic framework that improves insect pest detection, classification, and data analysis. We adopted the YOLOv7 model for object detection on sticky paper trap images, replacing the previous YOLOv3-Tiny model; it achieved an mAP@.5 above 0.95 in every experimental field. A ResNet-18 model was then employed for insect pest classification, reaching an overall F1-score of 0.988. In addition, spatiotemporal analysis was applied to adjust the pest species and counts across a time series of sticky paper trap images. A dynamic two-dimensional array was created to log the location and time at which each insect pest was detected, and a majority-voting algorithm over the corresponding coordinates corrects the species and count at each time point, thereby minimizing misclassification and miscounting. Compared with manually counted ground truth, the MAPE was only about 5%. The analysis results indicate that the proposed framework effectively enhances insect pest recognition performance and overall data quality. |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94581 |
| DOI: | 10.6342/NTU202404155 |
| Full-Text Access: | Authorized (publicly available worldwide) |
| Appears in Collections: | Department of Biomechatronics Engineering |
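The abstract's spatiotemporal correction step can be illustrated with a minimal Python sketch. This is not the thesis's implementation: the distance-based matching rule (`tol`), the tuple layout of the detection log, and the example labels are all assumptions made here for illustration. It groups detections of the same trapped insect across successive trap images, replaces each group's labels by majority vote, and evaluates counts against ground truth with MAPE.

```python
from collections import Counter, defaultdict

def majority_vote_labels(detections, tol=5.0):
    """Correct per-frame labels for a series of sticky-trap images.

    detections: list of (t, x, y, label) tuples from successive images
        of the same trap; a trapped insect stays put, so detections at
        nearly the same (x, y) across time are the same individual.
    tol: assumed matching radius (pixels) for grouping detections.
    Returns the detections with each group's label replaced by the
    group's majority-vote label.
    """
    groups = []  # each: {'x': .., 'y': .., 'records': [(t, label), ...]}
    for t, x, y, label in sorted(detections):
        for g in groups:
            if abs(g['x'] - x) <= tol and abs(g['y'] - y) <= tol:
                g['records'].append((t, label))
                break
        else:  # no nearby group: a newly trapped insect
            groups.append({'x': x, 'y': y, 'records': [(t, label)]})
    corrected = []
    for g in groups:
        vote = Counter(lbl for _, lbl in g['records']).most_common(1)[0][0]
        for t, _ in g['records']:
            corrected.append((t, g['x'], g['y'], vote))
    return corrected

def counts_per_time(corrected):
    """Per-time-point species counts from the corrected detections."""
    out = defaultdict(Counter)
    for t, _, _, lbl in corrected:
        out[t][lbl] += 1
    return out

def mape(pred, truth):
    """Mean absolute percentage error against manual ground truth."""
    return 100.0 * sum(abs(p - a) / a for p, a in zip(pred, truth)) / len(pred)
```

For example, an insect detected at (10, 10) and labeled `thrips` at t=0 and t=2 but misclassified as `whitefly` at t=1 is corrected to `thrips` at all three time points by the vote, which is the mechanism the abstract credits for reducing misclassification and miscounting.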
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-112-2.pdf | 4.94 MB | Adobe PDF | View/Open |
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
