NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/89089
Full metadata record

DC field: value (language)
dc.contributor.advisor: 林達德 (zh_TW)
dc.contributor.advisor: Ta-Te Lin (en)
dc.contributor.author: 陳璟寬 (zh_TW)
dc.contributor.author: Ching-Kuang Chen (en)
dc.date.accessioned: 2023-08-16T17:05:13Z
dc.date.available: 2023-11-09
dc.date.copyright: 2023-08-16
dc.date.issued: 2023
dc.date.submitted: 2023-08-08
dc.identifier.citation:
Aiello, G., Giovino, I., Vallone, M., Catania, P., & Argento, A. (2018). A decision support system based on multisensor data fusion for sustainable greenhouse management. Journal of Cleaner Production, 172, 4057-4065.
Alves, A. N., Souza, W. S., & Borges, D. L. (2020). Cotton pests classification in field-based images using deep residual networks. Computers and Electronics in Agriculture, 174, 105488.
Azfar, S., Ahsan, K., Mehmood, N., Nadeem, A., Alkhodre, A. B., Alghmdi, T., & Alsaawy, Y. (2018). Monitoring, detection and control techniques of agriculture pests and diseases using wireless sensor network: a review. International Journal of Advanced Computer Science and Applications, 9(12), 424-433.
Baucum, M., Belotto, D., Jeannet, S., Savage, E., Mupparaju, P., & Morato, C. W. (2017). Semi-supervised deep continuous learning. Proceedings of the 2017 International Conference on Deep Learning Technologies.
Boyat, A. K., & Joshi, B. K. (2015). A review paper: Noise models in digital image processing. Signal & Image Processing: An International Journal, 6(2), 63-75.
Bradski, G., & Kaehler, A. (2000). OpenCV. Dr. Dobb's Journal of Software Tools, 3(2).
Cho, J., Choi, J., Qiao, M., Ji, C., Kim, H., Uhm, K., & Chon, T. (2007). Automatic identification of whiteflies, aphids and thrips in greenhouse based on image analysis. International Journal of Mathematics and Computers in Simulation, 346(246), 244.
Dai, Q., Cheng, X., Qiao, Y., & Zhang, Y. (2020). Agricultural pest super-resolution and identification with attention enhanced residual and dense fusion generative and adversarial network. IEEE Access, 8, 81943-81959.
Domingues, R., Filippone, M., Michiardi, P., & Zouaoui, J. (2018). A comparative evaluation of outlier detection algorithms: Experiments and analyses. Pattern Recognition, 74, 406-421.
Dong, C., Loy, C. C., He, K., & Tang, X. (2015). Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2), 295-307.
Dong, C., Loy, C. C., & Tang, X. (2016). Accelerating the super-resolution convolutional neural network. European conference on computer vision.
Ehler, L. E. (2006). Integrated pest management (IPM): Definition, historical development and implementation, and the other IPM. Pest Management Science, 62(9), 787-789.
Espinoza, K., Valera, D. L., Torres, J. A., López, A., & Molina-Aiz, F. D. (2016). Combination of image processing and artificial neural networks as a novel approach for the identification of Bemisia tabaci and Frankliniella occidentalis on sticky traps in greenhouse agriculture. Computers and Electronics in Agriculture, 127, 495-505.
Gjestang, H. L., Hicks, S. A., Thambawita, V., Halvorsen, P., & Riegler, M. A. (2021). A self-learning teacher-student framework for gastrointestinal image classification. 2021 IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS).
Guérin, J., Thiery, S., Nyiri, E., & Gibaru, O. (2018). Unsupervised robotic sorting: Towards autonomous decision making robots. arXiv preprint arXiv:1804.04572.
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition.
Hoi, S. C., Sahoo, D., Lu, J., & Zhao, P. (2021). Online learning: A comprehensive survey. Neurocomputing, 459, 249-289.
Huddar, S. R., Gowri, S., Keerthana, K., Vasanthi, S., & Rupanagudi, S. R. (2012). Novel algorithm for segmentation and automatic identification of pests on plants using image processing. 2012 Third International Conference on Computing, Communication and Networking Technologies (ICCCNT'12).
Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. International conference on machine learning.
Khalifa, N. E. M., Loey, M., & Taha, M. H. N. (2020). Insect pests recognition based on deep transfer learning models. Journal of Theoretical and Applied Information Technology, 98(1), 60-68.
Khan, M. A., Sharif, M., Akram, T., Raza, M., Saba, T., & Rehman, A. (2020). Hand-crafted and deep convolutional neural network features fusion and selection strategy: an application to intelligent human action recognition. Applied Soft Computing, 87, 105986.
Kogan, M. (1998). Integrated pest management: historical perspectives and contemporary developments. Annual Review of Entomology, 43, 243-270.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6), 84-90.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., & Wang, Z. (2017). Photo-realistic single image super-resolution using a generative adversarial network. Proceedings of the IEEE conference on computer vision and pattern recognition.
Li, F., & Xiong, Y. (2017). Automatic identification of butterfly species based on HoMSC and GLCMoIB. The Visual Computer, 34(11), 1525-1533.
Lin, W. Y., Hasenstab, K., Cunha, G. M., & Schwartzman, A. (2020). Comparison of handcrafted features and convolutional neural networks for liver MR image adequacy assessment. Scientific Reports, 10(1), 11, Article 20336.
Liu, T., Chen, W., Wu, W., Sun, C., Guo, W., & Zhu, X. (2016). Detection of aphids in wheat fields using a computer vision technique. Biosystems Engineering, 141, 82-93.
Lu, S., & Ye, S.-j. (2020). Using an image segmentation and support vector machine method for identifying two locust species and instars. Journal of Integrative Agriculture, 19(5), 1301-1313.
Maqsood, M. H., Mumtaz, R., Haq, I. U., Shafi, U., Zaidi, S. M. H., & Hafeez, M. (2021). Super resolution generative adversarial network (SRGANs) for wheat stripe rust classification. Sensors, 21(23), 7903.
Mohapatra, B. R., Mishra, A., & Rout, S. K. (2014). A comprehensive review on image restoration techniques. International Journal of Research in Advent Technology, 2(3), 101-105.
Oerke, E. C. (2005). Crop losses to pests. The Journal of Agricultural Science, 144(1), 31-43.
Oliveira, C., Auad, A., Mendes, S., & Frizzas, M. (2014). Crop losses and the economic impact of insect pests on Brazilian agriculture. Crop Protection, 56, 50-54.
Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1), 62-66.
Pan, S. J., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345-1359.
Partel, V., Nunes, L., Stansly, P., & Ampatzidis, Y. (2019). Automated vision-based system for monitoring Asian citrus psyllid in orchards utilizing artificial intelligence. Computers and Electronics in Agriculture, 162, 328-336.
Pech-Pacheco, J. L., Cristóbal, G., Chamorro-Martinez, J., & Fernández-Valdivia, J. (2000). Diatom autofocusing in brightfield microscopy: a comparative study. Proceedings 15th International Conference on Pattern Recognition. ICPR-2000.
Picanço, M. C., Bacci, L., Crespo, A. L. B., Miranda, M. M. M., & Martins, J. C. (2007). Effect of integrated pest management practices on tomato production and conservation of natural enemies. Agricultural and Forest Entomology, 9(4), 327-335.
Rajan, R. G., & Leo, M. J. (2020). American sign language alphabets recognition using hand crafted and deep learning features. 2020 International Conference on Inventive Computation Technologies (ICICT).
Raschka, S. (2018). Model evaluation, model selection, and algorithm selection in machine learning. arXiv preprint arXiv:1811.12808.
Reynolds, D. A. (2009). Gaussian mixture models. Encyclopedia of Biometrics, 741, 659-663.
Rousseeuw, P. J. (1987). Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20, 53-65.
Rustia, D. J. A., Lin, C. E., Chung, J.-Y., Zhuang, Y.-J., Hsu, J.-C., & Lin, T.-T. (2020). Application of an image and environmental sensor network for automated greenhouse insect pest monitoring. Journal of Asia-Pacific Entomology, 23(1), 17-28.
Shi, W., Caballero, J., Huszár, F., Totz, J., Aitken, A. P., Bishop, R., Rueckert, D., & Wang, Z. (2016). Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. Proceedings of the IEEE conference on computer vision and pattern recognition.
Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Tan, M., & Le, Q. (2019). Efficientnet: Rethinking model scaling for convolutional neural networks. International conference on machine learning.
Tetila, E. C., Machado, B. B., Astolfi, G., Belete, N. A. d. S., Amorim, W. P., Roel, A. R., & Pistori, H. (2020). Detection and classification of soybean pests using deep learning with UAV images. Computers and Electronics in Agriculture, 179.
Thenmozhi, K., & Srinivasulu Reddy, U. (2019). Crop pest classification based on deep convolutional neural network and transfer learning. Computers and Electronics in Agriculture, 164.
Thomas, M. B. (1999). Ecological approaches and the development of "truly integrated" pest management. Proceedings of the National Academy of Sciences of the United States of America, 96(11), 5944-5951.
Tianyu, Z., Zhenjiang, M., & Jianhu, Z. (2018). Combining CNN with hand-crafted features for image classification. 2018 14th IEEE International Conference on Signal Processing (ICSP).
Voulodimos, A., Doulamis, N., Doulamis, A., & Protopapadakis, E. (2018). Deep learning for computer vision: A brief review. Computational Intelligence and Neuroscience, 2018, 7068349.
Wang, Q.-J., Zhang, S.-Y., Dong, S.-F., Zhang, G.-C., Yang, J., Li, R., & Wang, H.-Q. (2020). Pest24: A large-scale very small object data set of agricultural pests for multi-target detection. Computers and Electronics in Agriculture, 175.
Wang, X., Wang, C., Yao, J., Fan, H., Wang, Q., Ren, Y., & Gao, Q. (2022). Comparisons of deep learning and machine learning while using text mining methods to identify suicide attempts of patients with mood disorders. Journal of affective disorders, 317, 107-113.
Wang, Y., Shen, J., Petridis, S., & Pantic, M. (2019). A real-time and unsupervised face re-identification system for human-robot interaction. Pattern Recognition Letters, 128, 559-568.
Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600-612.
Wen, C., & Guyer, D. (2012). Image-based orchard insect automated identification and classification method. Computers and Electronics in Agriculture, 89, 110-115.
Xie, C., Wang, R., Zhang, J., Chen, P., Dong, W., Li, R., Chen, T., & Chen, H. (2018). Multi-level learning features for automatic classification of field crop pests. Computers and Electronics in Agriculture, 152, 233-241.
Yao, Q., Xian, D.-X., Liu, Q.-J., Yang, B.-J., Diao, G.-Q., & Tang, J. (2014). Automated Counting of Rice Planthoppers in Paddy Fields Based on Image Processing. Journal of Integrative Agriculture, 13(8), 1736-1745.
Zhang, L., Zhang, L., Mou, X., & Zhang, D. (2012). A comprehensive evaluation of full reference image quality assessment algorithms. 2012 19th IEEE International Conference on Image Processing. Vienna, Austria.
Zhang, T., & Zhang, X. (2021). Injection of traditional hand-crafted features into modern CNN-based models for SAR ship classification: What, why, where, and how. Remote Sensing, 13(11), 2091.
Zhou, H., Miao, H., Li, J., Jian, F., & Jayas, D. S. (2019). A low-resolution image restoration classifier network to identify stored-grain insects from images of sticky boards. Computers and Electronics in Agriculture, 162, 593-601.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/89089
dc.description.abstract: 在作物生長期間,蟲害被認為是對農業生產的最大威脅之一。它們危害農作物的生長,降低產量,對農業經濟收益造成嚴重損失。因此,有效管理蟲害的發生,對於農業從業者至關重要。若想要達到有效管理的目標,需要透過即時且準確的害蟲種類和數量數據,擬定正確的管理對策。本研究室先前已開發出一套基於物聯網技術的智慧型蟲害管理系統(intelligent integrated pest and disease management, I2PDM),使用相機拍攝黏蟲紙,並利用深度學習辨識害蟲種類與數量。本研究目的為優化I2PDM系統中所使用的害蟲辨識模型,使其提供更準確的蟲害資訊。我們使用SRGAN影像增強模型,強化害蟲影像的視覺特徵;同時,將害蟲的尺寸納入分類模型中提供額外的資訊,以提高辨識的準確性。以影像增強、尺寸特徵與兩者結合的方法,提出三種新的模型架構,相比於原始的架構,經過優化後的模型分別能提升約2.7%、2.3%與4.4%的F1-score。此外,我們提出一套自動化線上自主學習架構,利用I2PDM系統數據流的優勢,持續收集更多的害蟲影像擴增訓練集,再利用新影像對基礎模型進行優化訓練,以解決傳統資料收集與模型訓練所需的大量人力與時間。並透過樣本清理演算法,搭配高斯混合模型對新進樣本進行篩選,確保新收集之樣本正確性,以及與正確樣本之間的特徵相似性,實現自動化樣本收集和模型再訓練的流程。測試結果顯示,在使用三年資料與四種不同基礎模型的情境下,最終模型皆能有效提升約2.6%到5.8%的水準。後續利用MQTT、ZMQ與TCP等網路傳輸協議,將線上自主學習架構實際部署到I2PDM系統中;經使用五個月的資料進行測試比較,與基礎模型相比可達2.7%的效能提升。除軟體優化外,本研究亦進行硬體升級,以Arducam 64MP替換原有的Raspberry Pi Camera V2相機模組,最終得到約2倍DPI的影像,取得更細微的害蟲特徵。測試結果顯示,使用新相機所訓練的分類模型相比使用原始相機,約有4.4%的F1-score提升。 (zh_TW)
dc.description.abstract:
During crop growth, pests are considered one of the biggest threats to agricultural production. They damage crops, reduce yields, and cause significant economic losses to the agricultural industry. Effective management of pest outbreaks is therefore crucial for agricultural practitioners, and it requires accurate, real-time data on pest species and quantities to formulate appropriate management strategies. Our laboratory previously developed an IoT-based Intelligent Integrated Pest and Disease Management (I2PDM) system, which uses cameras to capture images of sticky traps and deep learning to identify pest species and quantities. The objective of this study is to optimize the pest recognition model used in the I2PDM system so that it provides more accurate pest information. We employ the SRGAN image enhancement model to enhance the visual features of pest images, and we incorporate pest size into the classification model as additional information to improve recognition accuracy. Three new model architectures are proposed, using image enhancement, size features, and a combination of both; compared to the original architecture, they improve the F1-score by approximately 2.7%, 2.3%, and 4.4%, respectively. Furthermore, we propose an automated online self-learning framework that leverages the data flow of the I2PDM system to continuously collect pest images, augment the training set, and retrain the base model with the new images, reducing the substantial manpower and time that traditional data collection and model training require. A sample cleaning algorithm based on a Gaussian Mixture Model filters the newly collected samples to ensure their correctness and their feature similarity to verified samples, completing an automated pipeline of sample collection and model retraining.
With three years of data and four different base models, the final models achieve improvements of approximately 2.6% to 5.8%. The online self-learning framework was deployed in the I2PDM system using network transmission protocols such as MQTT, ZMQ, and TCP; in a five-month test, it showed a 2.7% performance improvement over the base model. In addition to software optimization, we also upgraded the hardware, replacing the original Raspberry Pi Camera V2 module with an Arducam 64MP module, which yields images with approximately twice the DPI and captures finer pest features. The classification model trained with the new camera improves the F1-score by about 4.4% compared to the model trained with the original camera. (en)
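The abstract describes fusing a handcrafted pest-size feature into the deep classification model as an extra input. As a minimal sketch of that idea only (the tiny backbone, the `SizeFusionClassifier` name, and the class count are illustrative assumptions, not the thesis's actual architecture), the scalar size can be concatenated with pooled CNN features just before the classification head:

```python
import torch
import torch.nn as nn

class SizeFusionClassifier(nn.Module):
    """Illustrative fusion of CNN features with a scalar pest-size feature."""

    def __init__(self, num_classes: int = 6):
        super().__init__()
        # Tiny stand-in backbone; the thesis would use a full deep classifier.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # +1 input dimension for the concatenated size feature.
        self.head = nn.Linear(16 + 1, num_classes)

    def forward(self, image: torch.Tensor, size: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(image)                      # (B, 16)
        fused = torch.cat([feats, size.unsqueeze(1)], 1)  # (B, 17)
        return self.head(fused)

# Batch of 4 RGB crops plus their measured sizes (e.g. blob area in pixels).
logits = SizeFusionClassifier(num_classes=6)(torch.zeros(4, 3, 64, 64), torch.ones(4))
```

The design choice here is late fusion: the handcrafted scalar bypasses the convolutional layers and only influences the final linear head, so the visual features and the size cue are learned jointly but extracted independently.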
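The abstract also outlines a sample cleaning step that screens newly collected samples with a Gaussian Mixture Model before retraining. One way such screening could work, sketched under stated assumptions (the synthetic 2-D features, the `clean_samples` helper, and the 1st-percentile acceptance threshold are all illustrative, not the thesis's published procedure): fit a GMM to features of verified samples, then keep only new samples whose likelihood under that model is comparable to the verified ones.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for feature vectors of verified pest images (two synthetic clusters).
trusted = np.vstack([
    rng.normal(0.0, 1.0, size=(200, 2)),
    rng.normal(5.0, 1.0, size=(200, 2)),
])

# Fit a GMM to the trusted features; accept a new sample if its log-likelihood
# reaches at least the 1st percentile of the trusted samples' own scores.
gmm = GaussianMixture(n_components=2, random_state=0).fit(trusted)
threshold = np.percentile(gmm.score_samples(trusted), 1)

def clean_samples(candidates: np.ndarray) -> np.ndarray:
    """Return only the candidate features that look like verified samples."""
    return candidates[gmm.score_samples(candidates) >= threshold]

new_batch = np.vstack([
    rng.normal(0.0, 1.0, size=(50, 2)),  # plausible new samples
    [[50.0, 50.0]],                      # obvious outlier, e.g. debris on the trap
])
kept = clean_samples(new_batch)
```

Samples far from every fitted component score far below the threshold and are discarded, which matches the abstract's goal of ensuring feature similarity between newly collected and verified samples before retraining.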
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-16T17:05:13Z. No. of bitstreams: 0 (en)
dc.description.provenance: Made available in DSpace on 2023-08-16T17:05:13Z (GMT). No. of bitstreams: 0 (en)
dc.description.tableofcontents:
Acknowledgements i
Abstract (in Chinese) ii
Abstract iii
List of Figures viii
List of Tables xii
Chapter 1 Introduction 1
1.1 Background 1
1.2 Objectives 2
Chapter 2 Literature Review 4
2.1 Image-based insect identification methods 4
2.1.1 Image processing for insect identification 4
2.1.2 Machine learning for insect identification 5
2.1.3 Deep learning for insect identification 6
2.2 Image enhancement techniques 7
2.2.1 Deep learning for image enhancement 8
2.2.2 Image enhancement models in agricultural image recognition 9
2.3 Handcrafted features 10
2.3.1 Methods combining handcrafted features and deep learning 10
2.3.2 Handcrafted features in deep learning models 11
2.4 Review of online learning methods 13
2.4.1 Continuous neural network learning 13
2.4.2 Online learning and its applications 13
Chapter 3 Research Methods 16
3.1 Research framework 16
3.2 Optimization of the pest classification model 17
3.2.1 Original pest classification model 17
3.2.2 Image-enhanced classification model 18
3.2.3 Fusing handcrafted features into the pest classification model 19
3.2.4 Classification model combining image enhancement and handcrafted features 22
3.3 Model training and evaluation 22
3.3.1 Initial model construction 23
3.3.2 Classification model performance testing 23
3.3.3 Image enhancement model training and evaluation 25
3.4 Online self-learning framework 27
3.4.1 Automatic pest image collection 28
3.4.2 Pest sample cleaning procedure 29
3.4.3 Model retraining and updating 30
3.5 Experimental sites and pest image datasets 31
3.5.1 Target pests 31
3.5.2 Pest image datasets 32
3.6 I2PDM camera module upgrade 34
3.6.1 Camera modules 34
3.6.2 Classification model tests with the original and high-resolution cameras 34
Chapter 4 Results and Discussion 35
4.1 Image enhancement model training results 35
4.1.1 Preliminary experimental results 35
4.1.2 Image enhancement methods 36
4.1.3 Effect of sample size on image enhancement model evaluation 38
4.2 Deep learning model training results 40
4.2.1 Original model training results 40
4.2.2 Image-enhanced classification model training results 41
4.2.3 Handcrafted-feature classification model training results 42
4.2.4 Combined image enhancement and handcrafted-feature model training results 43
4.3 Online self-learning 44
4.3.1 Sample cleaning parameter selection 45
4.3.2 Online self-learning iterative training results 49
4.3.3 Dynamic sample test results 57
4.3.4 Discussion of the sample cleaning algorithm 62
4.4 Deployment of online self-learning in the I2PDM system 66
4.4.1 Base model training results 67
4.4.2 Program architecture 67
4.4.3 Field deployment results 68
4.5 Camera module upgrade results 72
4.5.1 Capture results and extension bracket design 72
4.5.2 Classification model test results 75
Chapter 5 Conclusions and Suggestions 78
5.1 Conclusions 78
5.2 Suggestions 80
References 81
dc.language.iso: zh_TW
dc.subject: 病蟲害整合管理 (zh_TW)
dc.subject: 手工特徵 (zh_TW)
dc.subject: 影像增強 (zh_TW)
dc.subject: 線上自主學習 (zh_TW)
dc.subject: 害蟲分類模型 (zh_TW)
dc.subject: image enhancement (en)
dc.subject: hand-crafted features (en)
dc.subject: online learning (en)
dc.subject: integrated pest management (en)
dc.subject: insect pest classification (en)
dc.title: 溫室微型害蟲辨識系統之優化與線上自主學習架構之研究 (zh_TW)
dc.title: Optimization and Development of an Online Self-Learning Framework for Greenhouse Insect Pest Classification System (en)
dc.type: Thesis
dc.date.schoolyear: 111-2
dc.description.degree: Master's (碩士)
dc.contributor.oralexamcommittee: 謝廣文;陳世芳 (zh_TW)
dc.contributor.oralexamcommittee: Kuang-Wen Hsieh;Shih-Fang Chen (en)
dc.subject.keyword: 病蟲害整合管理, 影像增強, 手工特徵, 線上自主學習, 害蟲分類模型 (zh_TW)
dc.subject.keyword: integrated pest management, image enhancement, hand-crafted features, online learning, insect pest classification (en)
dc.relation.page: 86
dc.identifier.doi: 10.6342/NTU202303657
dc.rights.note: Authorized for worldwide open access (同意授權,全球公開)
dc.date.accepted: 2023-08-10
dc.contributor.author-college: College of Bio-Resources and Agriculture (生物資源暨農學院)
dc.contributor.author-dept: Department of Biomechatronics Engineering (生物機電工程學系)
Appears in Collections: Department of Biomechatronics Engineering (生物機電工程學系)

Files in this item:
ntu-111-2.pdf (11.3 MB, Adobe PDF)


Unless their copyright terms state otherwise, all items in the system are protected by copyright, with all rights reserved.
