Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93258
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 顏炳郎 | zh_TW |
dc.contributor.advisor | Ping-Lang Yen | en |
dc.contributor.author | 墨艾誠 | zh_TW |
dc.contributor.author | Aeron Rollon Mojica | en |
dc.date.accessioned | 2024-07-23T16:32:42Z | - |
dc.date.available | 2024-07-24 | - |
dc.date.copyright | 2024-07-23 | - |
dc.date.issued | 2024 | - |
dc.date.submitted | 2024-07-09 | - |
dc.identifier.citation | Afonso, M., Fonteijn, H., Fiorentin, F. S., Lensink, D., Mooij, M., Faber, N., Polder, G., & Wehrens, R. (2020). Tomato Fruit Detection and Counting in Greenhouses Using Deep Learning. Frontiers in Plant Science, 11. https://doi.org/10.3389/fpls.2020.571299.
Aharon, N., Orfaig, R., & Bobrovsky, B. (2022). BOT-SORT: Robust Associations Multi-Pedestrian Tracking. arXiv (Cornell University). https://doi.org/10.48550/arxiv.2206.14651. Alzubaidi, L., Zhang, J., Humaidi, A. J., Al-Dujaili, A. Q., Duan, Y., Al-Shamma, O., Santamaría, J., Fadhel, M. A., Al-Amidie, M., & Farhan, L. (2021). Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data, 8(1). https://doi.org/10.1186/s40537-021-00444-8. Amodei, D., Olah, C., Steinhardt, J., Christiano, P.F., Schulman, J., & Mané, D. (2016). Concrete Problems in AI Safety. ArXiv, abs/1606.06565. Appe, S. N., Arulselvi, G., & Balaji, G. N. (2023). CAM-YOLO: tomato detection and classification based on improved YOLOv5 using combining attention mechanism. PeerJ Computer Science, 9. https://doi.org/10.7717/peerj-cs.1463. Arani, E., Gowda, S., Mukherjee, R., Magdy, O., Kathiresan, S.K., & Zonooz, B. (2022). A Comprehensive Study of Real-Time Object Detection Networks Across Multiple Domains: A Survey. ArXiv, abs/2208.10895. Arah, I. K., Ahorbo, G. K., Anku, E. K., Kumah, E. K., & Amaglo, H. (2016). Postharvest handling practices and treatment methods for tomato handlers in developing countries: a mini review. Advances in Agriculture, 2016, 1–8. https://doi.org/10.1155/2016/6436945. Baek, S., Lim, J., Lee, J. G., McCarthy, M. J., & Kim, S. M. (2020). Investigation of the maturity changes of cherry tomato using magnetic resonance imaging. Applied Sciences (Basel), 10(15), 5188. https://doi.org/10.3390/app10155188. Bhujbal, K., & Barahate, S. (2022). Custom Object detection Based on Regional Convolutional Neural Network & YOLOv3 With DJI Tello Programmable Drone. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4101029. Bhusnoor, Mallikarjun & Patel, Jyoti & Mehta, Akshara & Sainkar, Sandeep & Patel, Dhrumil & Mehendale, Ninad. (2023). 
Investigating The Impact Of Distance On Object Detection Accuracy in Unmanned Aerial Vehicle Systems Using MobileNetV3. 10.36227/techrxiv.24155598. Bochkovskiy, A., Wang, C., & Liao, H. M. (2020). YOLOV4: Optimal speed and accuracy of object detection. arXiv (Cornell University). https://arxiv.org/pdf/2004.10934v1. Boonsongsrikul, A., & Eamsaard, J. (2023). Real-Time Human Motion Tracking by Tello EDU Drone. Sensors, 23(2). https://doi.org/10.3390/s23020897. Camacho, J. C., & Morocho-Cayamcela, M. E. (2023). Mask R-CNN and YOLOv8 Comparison to Perform Tomato Maturity Recognition Task. Communications in Computer and Information Science, 1885 CCIS, 382–396. https://doi.org/10.1007/978-3-031-45438-7_26. Cocchioni, F., Pierfelice, V., Benini, A., Mancini, A., Frontoni, E., Zingaretti, P., Ippoliti, G., & Longhi, S. (2014). Unmanned Ground and Aerial Vehicles in extended range indoor and outdoor missions. 2014 International Conference on Unmanned Aircraft Systems (ICUAS).https://doi.org/10.1109/icuas.2014.6842276. Christensen, M. J., & Richter, T. (2020). Achieving reliable UDP transmission at 10 Gb/s using BSD socket for data acquisition systems. Journal of Instrumentation, 15(09), T09005. https://doi.org/10.1088/1748-0221/15/09/t09005. de la Escalera, A., & Armingol, J. M. (2010). Automatic chessboard detection for intrinsic and extrinsic camera parameter calibration. Sensors, 10(3), 2027–2044. https://doi.org/10.3390/s100302027. de Luna, R. G., Dadios, E. P., Bandala, A. A., & Vicerra, R. R. P. (2019). Tomato Fruit Image Dataset for Deep Transfer Learning-based Defect Detection. https://doi.org/10.1109/CIS-RAM47153.2019.9095778. De Silva, S. C., Phlernjai, M., Rianmora, S., & Ratsamee, P. (2022). Inverted Docking Station: A Conceptual Design for a Battery-Swapping Platform for Quadrotor UAVs. Drones, 6(3). https://doi.org/10.3390/drones6030056. Ding, Z., Lai, Z., Li, S., Li, P., Yang, Q., & Wong, E.K. (2019). 
Confidence-Triggered Detection: Accelerating Real-time Tracking-by-detection Systems. Retrieved 14 May 2024 at https://arxiv.org/html/1902.00615v4. DJI. (2018). DJI SDK for Tello-Python. Retrieved 17 October 2023 at https://github.com/dji-sdk/Tello-Python/blob/master/doc/readme.pdf. Douterloigne, K., Gautama, S., & Philips, W. (2009). Fully automatic and robust UAV camera calibration using chessboard patterns. 2009 IEEE International Geoscience and Remote Sensing Symposium, 2, II-551-II-554. Duan, E., Han, G., Zhao, S., Ma, Y., Lv, Y., & Bai, Z. (2023). Regulation of Meat Duck Activeness through Photoperiod Based on Deep Learning. Animals (Basel), 13(22), 3520. https://doi.org/10.3390/ani13223520. Egi, Y., Hajyzadeh, M., & Eyceyurt, E. (2022). Drone-Computer Communication Based Tomato Generative Organ Counting Model Using YOLO V5 and Deep-Sort. Agriculture, 12(9), 1290. https://doi.org/10.3390/agriculture12091290. El-Bendary, N., Hariri, E. E., Hassanien, A. E., & Badr, A. (2015). Using machine learning techniques for evaluating tomato ripeness. Expert Systems with Applications, 42(4), 1892–1905. https://doi.org/10.1016/j.eswa.2014.09.057. El-hariri, Esraa & El-Bendary, Nashwa & Hassanien, Aboul Ella & Badr, Amr. (2014). AUTOMATED RIPENESS ASSESSMENT SYSTEM OF TOMATOES USING PCA AND SVM TECHNIQUES. Retrieved 5 February 2024 at https://www.researchgate.net/publication/263889773_AUTOMATED_RIPENESS_ASSESSMENT_SYSTEM_OF_TOMATOES_USING_PCA_AND_SVM_TECHNIQUES. Ge, Y., Lin, S., Zhang, Y., Li, Z., Cheng, H., Dong, J., Shao, S., Zhang, J., Qi, X., & Wu, Z. (2022). Tracking and counting of tomato at different growth period using an improving YOLO-Deepsort network for inspection robot. Machines, 10(6), 489. https://doi.org/10.3390/machines10060489. Giap, Y. C., Muljono, M., Soeleman, M. A., Affandy, & Basuki, R. S. (2023). Effect of Distance and Light Intensity on Multiple Detection Object. 
2023 6th International Seminar on Research of Information Technology and Intelligent Systems (ISRITI). https://doi.org/10.1109/isriti60336.2023.10467935. Giernacki, W., Rao, J., Sladic, S., Bondyra, A., Retinger, M., & Espinoza-Fraire, T. (2022). DJI Tello Quadrotor as a Platform for Research and Education in Mobile Robotics and Control Engineering. 2022 International Conference on Unmanned Aircraft Systems, ICUAS 2022, 735–744. https://doi.org/10.1109/ICUAS54217.2022.9836168. Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. 2014 IEEE Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/cvpr.2014.81. Girshick, R. (2015). Fast R-CNN. 2015 IEEE International Conference on Computer Vision (ICCV). https://doi.org/10.1109/iccv.2015.169. Grammatikopoulos, L., Karras, G., & Petsa, E. (2007). An automatic approach for camera calibration from vanishing points. ISPRS Journal of Photogrammetry and Remote Sensing, 62(1), 64–76. https://doi.org/10.1016/j.isprsjprs.2007.02.002. Gruen, Armin. (2001). Calibration and Orientation of Cameras in Computer Vision. 10.1007/978-3-662-04567-1. Guo, Y., Liu, Y., Oerlemans, A., Lao, S., Wu, S., & Lew, M. S. (2016). Deep learning for visual understanding: A review. Neurocomputing, 187, 27–48. https://doi.org/10.1016/j.neucom.2015.09.116. Hall, D. R., Dayoub, F., Skinner, J., Zhang, H., Miller, D., Corke, P., Carneiro, G., Angelova, A., & Sünderhauf, N. (2020). Probabilistic Object Detection: Definition and Evaluation. 2020 IEEE Winter Conference on Applications of Computer Vision (WACV). https://doi.org/10.1109/wacv45572.2020.9093599. Hamdu, I. H., Yunus, A., & Hardi, I. M. (2016). Maturity and ripening on the biochemical characteristics of three local varieties of tomato. DOAJ (DOAJ: Directory of Open Access Journals). https://doaj.org/article/b393275265b64d90b30d465fa5a1570c. He, K., Zhang, X., Ren, S., & Sun, J. (2015). 
Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9), 1904–1916. https://doi.org/10.1109/tpami.2015.2389824. He, K., Gkioxari, G., Dollár, P., & Girshick, R. (2017). Mask r-cnn. In Proceedings of the IEEE international conference on computer vision (pp. 2961-2969). Hendrawan, Rido & Istiono, Wirawan. (2023). Analyzing the Distance and Intensity of Light in Learning Augmented Reality Marker Based Tracking Application. Journal of Advances in Mathematics and Computer Science. 58-70. 10.9734/jamcs/2023/v38i21746. Hnoohom, N., Chotivatunyu, P., Maitrichit, N., Nilsumrit, C., & Iamtrakul, P. (2024). The video-based safety methodology for pedestrian crosswalk safety measured: The case of Thammasat University, Thailand. Transportation Research Interdisciplinary Perspectives, 24, 101036. https://doi.org/10.1016/j.trip.2024.101036. Host, K., Pobar, M., & Ivašić-Kos, M. (2023). Analysis of movement and activities of handball players using deep neural networks. Journal of Imaging, 9(4), 80. https://doi.org/10.3390/jimaging9040080. Hou, Q., Zhou, D., & Feng, J. (2021). Coordinate Attention for Efficient Mobile Network Design. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/cvpr46437.2021.01350. Hubel, D. H., & Wiesel, T. N. (1968). Receptive fields and functional architecture of monkey striate cortex. The Journal of Physiology, 195(1), 215–243. https://doi.org/10.1113/jphysiol.1968.sp008455. Junaid, A. B., Lee, Y., & Kim, Y. (2016). Design and implementation of autonomous wireless charging station for rotary-wing UAVs. Aerospace Science and Technology, 54, 253–266. https://doi.org/10.1016/j.ast.2016.04.023. Junaid, A. B., Konoiko, A., Zweiri, Y., Sahinkaya, M. N., & Seneviratne, L. (2017). Autonomous wireless Self-Charging for Multi-Rotor unmanned aerial vehicles. Energies, 10(6), 803. https://doi.org/10.3390/en10060803. 
Kaur, R., & Singh, S. (2023). A comprehensive review of object detection with deep learning. Digital Signal Processing, 132, 103812. https://doi.org/10.1016/j.dsp.2022.103812. Kemper, F.P., Suzuki, K.A.O. & Morrison, J.R. UAV Consumable Replenishment: Design Concepts for Automated Service Stations. J Intell Robot Syst 61, 369–397 (2011). https://doi.org/10.1007/s10846-010-9502-z. Kimura, S., & Sinha, N. (2008). Tomato (Solanum lycopersicum): A model fruit-bearing crop. Cold Spring Harbor Protocols, 3(11). https://doi.org/10.1101/pdb.emo105. Kung, R., Pan, N., Wang, C. C., & Lee, P. (2021). Application of deep learning and unmanned aerial vehicle on building maintenance. Advances in Civil Engineering, 2021, 1–12. https://doi.org/10.1155/2021/5598690. Kurtulmuş, F., Lee, W. S., & Vardar, A. (2013). Immature peach detection in colour images acquired in natural illumination conditions using statistical classifiers and neural network. Precision Agriculture, 15(1), 57–79. https://doi.org/10.1007/s11119-013-9323-8. Larmo, A., Ratilainen, A., & Saarinen, J. (2018). Impact of COAP and MQTT on NB-IoT system performance. Sensors, 19(1), 7. https://doi.org/10.3390/s19010007. Lawal, M. O. (2021). Tomato detection based on modified YOLOv3 framework. Scientific Reports, 11(1). https://doi.org/10.1038/s41598-021-81216-5. Leahy, K., Zhou, D., Vasile, C., Oikonomopoulos, K., Schwager, M., & Belta, C. (2015). Persistent surveillance for unmanned aerial vehicles subject to charging and temporal logic constraints. Autonomous Robots, 40(8), 1363–1378. https://doi.org/10.1007/s10514-015-9519-z. Li, C., Li, L., Jiang, H., Weng, K., Geng, Y., Li, L., Ke, Z., Li, Q., Cheng, M., Nie, W., Li, Y., Zhang, B., Liang, Y., Zhou, L., Xu, X., Chu, X., Wei, X., & Wei, X. (2022). YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. ArXiv, abs/2209.02976. Li, P., Zheng, J., Li, P., Long, H., Li, M., & Gao, L. (2023). 
Tomato Maturity Detection and Counting Model Based on MHSA-YOLOv8. Sensors, 23(15). https://doi.org/10.3390/s23156701. Li, R., Ji, Z., Hu, S., Huang, X., Yang, J., & Li, W. (2023). Tomato Maturity Recognition Model Based on Improved YOLOv5 in Greenhouse. Agronomy, 13(2). https://doi.org/10.3390/agronomy13020603. Lin, T., Dollár, P., Girshick, R., He, K., Hariharan, B., & Belongie, S. (2017). Feature Pyramid Networks for Object Detection. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/cvpr.2017.106. Liu, G., Christian, N. J., Mbouembe, P. L. T., & Kim, J. H. (2020). YOLO-Tomato: A robust algorithm for tomato detection based on YOLOV3. Sensors, 20(7), 2145. https://doi.org/10.3390/s20072145. Liu, G., Mao, S., & Kim, J. H. (2019). A Mature-Tomato detection algorithm using machine learning and color analysis. Sensors, 19(9), 2023. https://doi.org/10.3390/s19092023. Liu, L., Ouyang, W., Wang, X., Fieguth, P., Chen, J., Liu, X., & Pietikäinen, M. (2019). Deep Learning for Generic Object Detection: A survey. International Journal of Computer Vision, 128(2), 261–318. https://doi.org/10.1007/s11263-019-01247-4. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C., & Berg, A. C. (2016). SSD: Single Shot MultiBox Detector. In Lecture Notes in Computer Science (pp. 21–37). https://doi.org/10.1007/978-3-319-46448-0_2. Luo, L., Tang, Y., Zou, X., Chen, Z., Zhang, P., & Feng, W. (2016). Robust grape cluster detection in a vineyard by combining the ADABoost framework and multiple color components. Sensors, 16(12), 2098. https://doi.org/10.3390/s16122098. Ma, Linh & Hussain, Muhammad Ishfaq & Park, JongHyun & Kim, Jeongbae & Jeon, Moongu. (2023). Adaptive Confidence Threshold for ByteTrack in Multi-Object Tracking. 370-374. 10.1109/ICCAIS59597.2023.10382403. Millot, Y., & Man, P. P. (2012). Active and passive rotations with Euler angles in NMR. 
Concepts in Magnetic Resonance Part A: Bridging Education and Research, 40 A (5), 215–252. https://doi.org/10.1002/cmr.a.21242. Mirzaei, B., Nezamabadi‐pour, H., Raoof, A., & Derakhshani, R. (2023). Small Object Detection and Tracking: A Comprehensive review. Sensors, 23(15), 6887. https://doi.org/10.3390/s23156887. Monaghan, T. F., Rahman, S. N., Agudelo, C. W., Wein, A. J., Lazar, J. M., Everaert, K., & Dmochowski, R. R. (2021). Foundational Statistical Principles in Medical Research: Sensitivity, Specificity, Positive Predictive Value, and Negative Predictive Value. Medicina (Kaunas, Lithuania), 57(5), 503. https://doi.org/10.3390/medicina57050503. Moneruzzaman, K. M., Hossain, A. B. M. S., Sani, W., Saifuddin, M., & Alenazi, M. (2009). Effect of harvesting and storage conditions on the post harvest quality of tomato (Lycopersicon esculentum Mill) cv. Roma VF. Australian Journal of Crop Science, 3(2), 113–121. http://www.cropj.com/Moneeruzaman_3_2_2009.pdf. Mostafa, T. M., Muharam, A., & Hattori, R. (2017). Wireless battery charging system for drones via capacitive power transfer. 2017 IEEE PELS Workshop on Emerging Technologies: Wireless Power Transfer (WoW). https://doi.org/10.1109/wow.2017.7959357. Mourgelas, Christos & Kokkinos, Sokratis & Milidonis, Athanasios & Voyiatzis, Ioannis. (2020). Autonomous drone charging stations: A survey. 10.1145/3437120.3437314. Mulgaonkar, Yash. (2014). Autonomous Charging to Enable Long-Endurance Missions for Small Aerial Robots. 90831S. 10.1117/12.2051111. Ni, J., Zhu, S., Tang, G., Ke, C., & Wang, T. (2024). A Small-Object Detection Model Based on Improved YOLOv8s for UAV Image Scenarios. Remote Sensing, 16, 2465. https://doi.org/10.3390/rs16132465. Njume, C. A., Ngosong, C., Krah, C. Y., & Mardjan, S. (2020). Tomato food value chain: managing postharvest losses in Cameroon. IOP Conference Series. Earth and Environmental Science, 542(1), 012021. https://doi.org/10.1088/1755-1315/542/1/012021. 
Ntouskos, V., Karras, G., Douskos, V., & Kalisperakis, I. (2007). Automatic calibration of digital cameras using planar chess-board patterns. https://www.researchgate.net/publication/228345254. Phan, Q., Nguyen, V., Lien, C., Duong, T., Hou, M. T., & Le, N. (2023). Classification of tomato fruit using YoLov5 and convolutional neural network models. Plants, 12(4), 790. https://doi.org/10.3390/plants12040790. Radiansyah, S & Kusrini, Mirza & Prasetyo, Lilik. (2017). Quadcopter applications for wildlife monitoring. IOP Conference Series: Earth and Environmental Science. 54. 012066. 10.1088/1755-1315/54/1/012066. Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 779-788). Redmon, J., & Farhadi, A. (2017). YOLO9000: better, faster, stronger. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7263-7271). Redmon, J., & Farhadi, A. (2018). YOLOV3: an incremental improvement. arXiv (Cornell University). https://arxiv.org/pdf/1804.02767. Rekavandi, Aref & Xu, Lian & Boussaid, Farid & Seghouane, Abd-Krim & Hoefs, Stephen & Bennamoun, Mohammed. (2022). A Guide to Image and Video based Small Object Detection using Deep Learning: Case Study of Maritime Surveillance. 10.48550/arXiv.2207.12926. Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28. Rezatofighi, H., Tsoi, N., Gwak, J., Sadeghian, A., Reid, I., & Savarese, S. (2019). Generalized Intersection Over Union: A Metric and a Loss for Bounding Box Regression. IEEE/CVF Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/cvpr.2019.00075. Rodríguez-Ortega, W. 
M., Martínez, V., Nieves, M., Simón, I., Lidón, V., Fernandez-Zapata, J. C., Martinez-Nicolas, J. J., Cámara-Zapata, J. M., & García-Sánchez, F. (2019). Agricultural and Physiological Responses of Tomato Plants Grown in Different Soilless Culture Systems with Saline Water under Greenhouse Conditions. Scientific Reports, 9(1). https://doi.org/10.1038/s41598-019-42805-7 Rong, J., Zhou, H., Zhang, F., Yuan, T., & Wang, P. (2023). Tomato cluster detection and counting using improved YOLOv5 based on RGB-D fusion. Computers and Electronics in Agriculture, 207, 107741. https://doi.org/10.1016/j.compag.2023.107741. Ryze Robotics (2018). Tello SDK 2.0 User Guide. https://dl-cdn.ryzerobotics.com/downloads/Tello/Tello%20SDK%202.0%20User%20Guide.pdf. Sacchi, E., Sayed, T., & deLeur, P. (2013). A comparison of collision-based and conflict-based safety evaluations: The case of right-turn smart channels. Accident Analysis and Prevention, 59, 260–266. https://doi.org/10.1016/j.aap.2013.06.002. Saleem, M.H., Potgieter, J. & Arif, K.M. (2021). Automation in Agriculture by Machine and Deep Learning Techniques: A Review of Recent Developments. Precision Agric 22, 2053–2091. https://doi.org/10.1007/s11119-021-09806-x. Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., & LeCun, Y. (2013). Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229. Song, B. D., Kim, J., Kim, J., Park, H., Morrison, J. R., and Shim, D. H. (2013). "Persistent UAV service: An improved scheduling formulation and prototypes of system components," 2013 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, pp. 915-925, doi: 10.1109/ICUAS.2013.6564777. Stanojevic, V. D., & Todorović, B. (2024). BoostTrack: boosting the similarity measure and detection confidence for improved multiple object tracking. Machine Vision and Applications, 35(3). https://doi.org/10.1007/s00138-024-01531-5. 
Su, K., Cao, L., Zhao, B., Li, N., Wu, D., & Han, X. (2023). N-IoU: better IoU-based bounding box regression loss for object detection. Neural Computing & Applications. https://doi.org/10.1007/s00521-023-09133-4. Sünderhauf, N., Brock, O., Scheirer, W. J., Hadsell, R., Fox, D., Leitner, J., Upcroft, B., Abbeel, P., Burgard, W., Milford, M., & Corke, P. (2018). The limits and potentials of deep learning for robotics. The International Journal of Robotics Research, 37(4–5), 405–420. https://doi.org/10.1177/0278364918770733. Szegedy, C., Toshev, A., & Erhan, D. (2013). Deep neural networks for object detection. Neural Information Processing Systems, 26, 2553–2561. https://papers.nips.cc/paper/5207-deep-neural-networks-for-object-detection.pdf. Szepessy, Tamas. (2019). Optical Drone Control. Retrieved 5 June 2023 at https://github.com/TamasSzepessy/DJITelloOpticalControl/tree/master. Tang, C., Feng, Y., Yang, X., Zheng, C., & Zhou, Y. (2017). The Object Detection Based on Deep Learning. 2017 4th International Conference on Information Science and Control Engineering (ICISCE). https://doi.org/10.1109/icisce.2017.156. Tapia-Mendez, E., Cruz-Albarrán, I. A., Tovar-Arriaga, S., & Morales-Hernández, L. A. (2023). Deep Learning-Based Method for Classification and Ripeness Assessment of fruits and vegetables. Applied Sciences, 13(22), 12504. https://doi.org/10.3390/app132212504. Terven, J. R., Córdova-Esparza, D., & Romero-González, J. (2023a). A comprehensive review of YOLO architectures in computer vision: from YOLOV1 to YOLOV8 and YOLO-NAS. Machine Learning and Knowledge Extraction, 5(4), 1680–1716. https://doi.org/10.3390/make5040083. Terven, Juan & Cordova-Esparza, Diana-Margarita & Ramirez-Pedraza, Alfonzo & Chavez-Urbiola, Edgar. (2023b). Loss Functions and Metrics in Deep Learning. A Review. Retrieved 2/29/2024 at https://www.researchgate.net/publication/372163006_Loss_Functions_and_Metrics_in_Deep_Learning_A_Review. Thuan, D. (2021). 
Evolution of Yolo algorithm and Yolov5: The State-of-the-Art object detection algorithm. Retrieved 3/20/2024 at https://www.semanticscholar.org. Tian, Y., Su, D., Lauria, S., & Liu, X. (2022). Recent advances on loss functions in deep learning for computer vision. Neurocomputing, 497, 129–158. https://doi.org/10.1016/j.neucom.2022.04.127. Tolasa, M., Gedamu, F., & Woldetsadik, K. (2021). Impacts of harvesting stages and pre-storage treatments on shelf life and quality of tomato (Solanum lycopersicum L.). Cogent Food & Agriculture, 7(1). https://doi.org/10.1080/23311932.2020.1863620. Tong, Z., Chen, Y., Xu, Z., & Yu, R. (2023). Wise-IoU: Bounding Box Regression Loss with Dynamic Focusing Mechanism. ArXiv, abs/2301.10051. Tsai, F. T., Nguyen, V., Duong, T., Phan, Q., & Lien, C. (2023). Tomato Fruit Detection Using Modified Yolov5m Model with Convolutional Neural Networks. Plants, 12(17), 3067. https://doi.org/10.3390/plants12173067. Tsironis, V., Bourou, S., & Stentoumis, C. (2020). Tomatod: Evaluation of object detection algorithms on a new real-world tomato dataset. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, 43(B3), 1077–1084. https://doi.org/10.5194/isprs-archives-XLIII-B3-2020-1077-2020. Tsouvaltzis, P., Gkountina, S., & Siomos, A. S. (2023). Quality traits and nutritional components of cherry tomato in relation to the harvesting period, storage duration and fruit position in the truss. Plants, 12(2), 315. https://doi.org/10.3390/plants12020315. Wang, C., Wang, C., Hu, X., Wang, J., Liao, J., Li, Y., & Lan, Y. (2023). A lightweight cherry tomato maturity Real-Time Detection algorithm based on improved YOLOV5N. Agronomy, 13(8), 2106. https://doi.org/10.3390/agronomy13082106. Wang, C., Bochkovskiy, A., & Liao, H.M. (2022). YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. 
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 7464-7475. Wang, Z., Zhang, X., Su, Y., Li, W., Yin, X., Li, Z., Ying, Y., Wang, J., Wu, J., Miao, F., & Zhao, K. (2023). Abnormal Behavior Monitoring Method of Larimichthys crocea in Recirculating Aquaculture System Based on Computer Vision. Sensors, 23(5), 2835. https://doi.org/10.3390/s23052835. Wenkel, S., Alhazmi, K., Liiv, T., Alrshoud, S. R., & Simón, M. (2021). Confidence Score: The Forgotten Dimension of Object Detection Performance Evaluation. Sensors, 21(13), 4350. https://doi.org/10.3390/s21134350. Wojke, N., Bewley, A., & Paulus, D. (2017). Simple online and realtime tracking with a deep association metric. 2017 IEEE International Conference on Image Processing (ICIP). https://doi.org/10.1109/icip.2017.8296962. Yamashita, R., Nishio, M., Gian, R. K., & Togashi, K. (2018). Convolutional neural networks: an overview and application in radiology. Insights Into Imaging, 9(4), 611–629. https://doi.org/10.1007/s13244-018-0639-9. Yang, Chuankai & He, Yuanjian & Qu, Haoyue & Wu, Jingfeng & Hou, Zhe & Lin, Zhongzheng & Cai, Changsong. (2019). Analysis, design and implement of asymmetric coupled wireless power transfer systems for unmanned aerial vehicles. AIP Advances. 9. 025206. 10.1063/1.5080955. Yu, C., Feng, Z., Wu, Z., Wei, R., Song, B., & Cao, C. (2023). HB-YOLO: an improved YOLOV7 algorithm for DIM-Object tracking in satellite remote sensing videos. Remote Sensing (Basel), 15(14), 3551. https://doi.org/10.3390/rs15143551. Yu, H., Pei, H., Lyu, Y., Yuan, Z., Rizzo, J. R., Wang, Y., & Fang, Y. (2023). Understanding the Impact of Image Quality and Distance of Objects to Object Detection Performance. 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). https://doi.org/10.1109/iros55552.2023.10342139. Yuan, T., Li, L., Zhang, F., Fu, J., Gao, J., Zhang, J., Li, W., Zhang, C., & Zhang, W. (2020). 
Robust cherry tomatoes detection algorithm in Greenhouse Scene based on SSD. Agriculture, 10(5), 160. https://doi.org/10.3390/agriculture10050160. Zhang, X., Yang, Y. H., Han, Z., Wang, H., & Gao, C. (2013). Object class detection: A survey. ACM Computing Surveys (CSUR), 46(1), 1-53. Zhang, Y. et al. (2022). ByteTrack: Multi-object Tracking by Associating Every Detection Box. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13682. Springer, Cham. https://doi.org/10.1007/978-3-031-20047-2_1. Zhang, Z. (2000). A Flexible New Technique for Camera Calibration. In IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (Vol. 22, Issue 11). Zhao, Z. Q., Zheng, P., Xu, S. T., & Wu, X. (2019). Object Detection With Deep Learning: A Review. IEEE transactions on neural networks and learning systems, 30(11), 3212–3232. https://doi.org/10.1109/TNNLS.2018.2876865. Zheng, Z., Wang, P., Liu, W., Li, J., Ye, R., & Ren, D. (2020). Distance-IOU loss: Faster and better learning for bounding box regression. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 12993–13000. https://doi.org/10.1609/aaai.v34i07.6999. Zheng, Z., Wang, P., Ren, D., Liu, W., Ye, R., Hu, Q., & Zuo, W. (2022). Enhancing geometric factors in model learning and inference for object detection and instance segmentation. IEEE Transactions on Cybernetics, 52(8), 8574–8586. https://doi.org/10.1109/tcyb.2021.3095305. Zou, Z., Chen, K., Shi, Z., Guo, Y., & Ye, J. (2023). Object Detection in 20 years: A survey. Proceedings of the IEEE, 111(3), 257–276. https://doi.org/10.1109/jproc.2023.3238524. | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93258 | - |
dc.description.abstract | 近代農業中,櫻桃番茄(Solanum lycopersicum)是一種經濟重要的農產品。然而,它們的小尺寸和獨特特性,包括短暫的壽命和易受損的脆弱性,給作物監測和收穫帶來了挑戰。迄今為止,大多數先前的研究都集中在開發和優化深度學習模型,用於檢測不同成熟水平的番茄。然而,距離和信心閾值等因素對物體檢測器的檢測性能的影響往往被忽略和未加調查。因此,本研究提出了一種使用無人機(UAV)和YOLOv8自動檢測溫室櫻桃番茄成熟水平的方法。隨後,對上述因素的影響進行了調查。通過使用UDP協議建立了無人機與計算機的通信系統,利用DJI Tello無人機進行了測試,實現了高效的數據傳輸和UAV控制。利用使用DJI Tello、手機和Intel RealSense D435深度相機獲取的櫻桃番茄數據集,對基於YOLOv8n的深度學習模型進行了訓練。微調和消融研究的結果表明,將坐標注意塊和具有動態聚焦機制的邊界框回歸損失(WIoU)損失函數結合起來,可以實現高精度(90.2%)、召回率(88.5%)、F1分數(89.34%)和mAP(93.7%)的成熟檢測模型。開發的模型YOLOv8n + CA + WIoU用於檢測和跟踪,與微調的BoT-SORT跟踪算法一起。結果顯示,BoT-SORT對櫻桃番茄的跟踪效果良好,多目標跟踪精度(MOTA)在74%至87%之間。此外,這些結果進一步證實了低閾值導致更高的敏感性,而高閾值導致更高的特異性和增加的跟踪性能。然而,觀察到YOLOv8n + CA + WIoU算法在40厘米至100厘米的物體至相機距離範圍內開始顯示出檢測確定性下降,特別是在同類番茄之間存在遮擋和接近的情況下。總的來說,這些研究結果突顯了利用UAV系統和先進的深度學習模型在溫室環境中高效準確地監測櫻桃番茄成熟水平的潛力。此外,後續的研究結果還表明,物體至相機距離、信心閾值和遮擋對檢測性能的影響至關重要。解決這些因素對於最大程度地提高UAV基礎的農業監測系統的準確性和可靠性至關重要,進一步加強了這些技術在實際應用和工業應用中的可行性和有效性。 | zh_TW |
dc.description.abstract | In modern agriculture, the cherry tomato (Solanum lycopersicum) is an economically significant commodity. However, its small size and distinctive characteristics, including a brief shelf life and vulnerability to damage, pose challenges for crop monitoring and harvesting. To date, most previous studies have focused on developing and optimizing deep learning models for detecting tomatoes at different maturity levels, while the impact of factors such as object-to-camera distance and confidence threshold on the detection performance of object detectors has often been ignored and left uninvestigated. Therefore, this study proposed an autonomous method for detecting the maturity levels of greenhouse-grown cherry tomatoes using an unmanned aerial vehicle (UAV) and YOLOv8, and subsequently investigated the impact of the aforementioned factors. A DJI Tello drone was used to set up UDP-based communication between the UAV and a computer, enabling efficient data transmission and UAV control. For model training, a cherry tomato dataset comprising images from different modalities was used. The results of fine-tuning and ablation studies demonstrated the effectiveness of incorporating coordinate attention blocks and a bounding-box regression loss with a dynamic focusing mechanism (WIoU), achieving high precision (90.2%), recall (88.5%), F1-score (89.34%), and mAP (93.7%) for the ripe-detection model. The developed model, YOLOv8n + CA + WIoU, was used for detection and tracking together with a fine-tuned BoT-SORT tracking algorithm. The results demonstrated the efficacy of BoT-SORT for tracking cherry tomatoes, achieving Multi-Object Tracking Accuracy (MOTA) ranging from 74% to 87% and counting accuracy ranging from 60% to 85%. Moreover, the results indicate that, for cherry tomatoes, a low confidence threshold (45% to 50%) leads to higher sensitivity, while a higher threshold (75% to 80%) leads to higher specificity and improved tracking performance. 
However, the YOLOv8n + CA + WIoU algorithm was observed to show decreasing detection certainty over object-to-camera distances of 0.4 m to 1.0 m, particularly in the presence of occlusion and close proximity between tomatoes of the same class. The overall findings highlight the potential of UAV-based systems and advanced deep learning models for efficient and accurate monitoring of the maturity level of cherry tomatoes in greenhouse settings. Furthermore, they demonstrate the critical impact of object-to-camera distance, confidence threshold, and occlusion on detection performance. Addressing these factors is essential for maximizing the accuracy and reliability of UAV-based agricultural monitoring systems, reinforcing the feasibility and effectiveness of these technologies in real-world and industrial applications. | en
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-07-23T16:32:42Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2024-07-23T16:32:42Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | Certificate of Thesis Approval i
Acknowledgement ii
中文摘要 iv
ABSTRACT vi
Table of Contents viii
List of Figures xi
List of Tables xiii
List of Appendices xiv
Abbreviations xv
Chapter 1. Introduction 1
Chapter 2. Literature Review 3
2.1 Cherry Tomato 3
2.2 Convolutional Neural Network (CNN) for Image-based Object Detection 4
2.3 Object Detection Via Deep Learning 5
2.4 Phenotyping Using Deep Learning Methods 9
2.5 Loss Function 12
2.6 The WIoU Loss Function 13
2.7 Attention Mechanisms 14
2.8 Object Tracking Using Deep Learning 15
2.9 Tello EDU Drone 18
2.10 Drawbacks and Solutions: Prolonged UAV Operations 19
2.11 Geometric Calibration 20
2.12 Factors Affecting the Detection Accuracy 21
Chapter 3. Materials and Methods 25
3.1 Overview of the System 25
3.2 Area of Study 26
3.3 UAV model 26
3.4 Communication Protocol Between Drone and Computer 27
3.5 Geometric Calibration of the UAV’s Camera 28
3.6 Tomato Maturity Recognition System 29
3.6.1 Dataset Collection 29
3.6.2 Target Maturity Stage for the Detection Models 31
3.6.3 Experimental Setup 32
3.6.4 Experimental Workflow 33
3.6.5 Method of Detection 31
3.6.6 Ablation and Fine-tuning 32
3.6.7 Detection Models for Maturity Assessment 37
3.6.7.1 Monitoring and Surveillance 37
3.6.7.2 Cherry Tomato at Light-red Stage 39
3.6.7.3 Cherry Tomato at Fully Mature (Red) Stage 42
3.6.8 Multi-Source Data Training 42
3.6.9 Model Training, Validation, and Testing 44
3.6.10 Performance Evaluation 46
3.7 Detection and Tracking in Greenhouse Environment 47
3.8 Impact of Distance and Confidence on Detection Performance 51
Chapter 4. Results and Discussion 56
4.1 Communication Protocols Between Drones and Computers 56
4.2 Multi-Source Data Training 59
4.3 Ablation Study and Fine-tuning 61
4.4 Detection Models Focusing on Single Classes 67
4.5 Detection and Tracking 73
4.6 Impact of Distance and Confidence on Detection Performance 77
Chapter 5. Conclusion, Limitation, and Perspective 89
5.1 Conclusion 89
5.2 Limitations and Future Perspectives of the Study 91
References 93
Appendices 111 | - |
dc.language.iso | en | - |
dc.title | 應用深度學習和無人載具整合的多模態方法對櫻桃番茄成熟度進行評估 | zh_TW |
dc.title | Multi-modal Approach for Cherry Tomato (Solanum lycopersicum) Maturity Assessment through Deep Learning and Unmanned Vehicles (UV) Integration | en |
dc.type | Thesis | - |
dc.date.schoolyear | 112-2 | - |
dc.description.degree | 碩士 | - |
dc.contributor.oralexamcommittee | 陳世芳;吳筱梅;Gella Patria Abella | zh_TW |
dc.contributor.oralexamcommittee | Shih-Fang Chen;Hsiao-Mei Wu;Gella Patria Abella | en |
dc.subject.keyword | 櫻桃番茄成熟度檢測,UAV,YOLOv8,坐標注意,WIoU,BoT-SORT | zh_TW |
dc.subject.keyword | Cherry tomato ripeness detection,UAV,YOLOv8,Coordinate Attention,WIoU,BoT-SORT | en |
dc.relation.page | 120 | - |
dc.identifier.doi | 10.6342/NTU202401594 | - |
dc.rights.note | 同意授權(全球公開) | - |
dc.date.accepted | 2024-07-10 | - |
dc.contributor.author-college | 共同教育中心 | - |
dc.contributor.author-dept | 全球農業科技與基因體科學碩士學位學程 | - |
Appears in Collections: | 全球農業科技與基因體科學碩士學位學程 |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-112-2.pdf | 4.91 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.