Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94692
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 郭彥甫 | zh_TW |
dc.contributor.advisor | Yan-Fu Kuo | en |
dc.contributor.author | 李居展 | zh_TW |
dc.contributor.author | Chu-Chan Lee | en |
dc.date.accessioned | 2024-08-16T17:33:23Z | - |
dc.date.available | 2024-08-17 | - |
dc.date.copyright | 2024-08-16 | - |
dc.date.issued | 2024 | - |
dc.date.submitted | 2024-08-11 | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94692 | - |
dc.description.abstract | 蝦是全球主要的蛋白質來源。在蝦類養殖中,飼料成本大約占總支出的40%。有效的進食管理對於優化蝦的成長和最小化成本至關重要。蝦的食慾受到生長階段和環境條件的影響。此外,由於蝦類是底棲生物,使得直接觀察變得充滿挑戰。傳統上,蝦的食慾是通過將樣本飼料放置在傘網上進行肉眼觀察來確定的,但此方法耗時且主觀。為了解決這些問題,本研究旨在通過使用深度神經網絡觀察蝦類的餵食相關行為來自動化蝦的食慾判定。
在提議的方法中,構建了配備飼料投料器的水下攝影模組,以在樣本餵食過程中捕捉蝦隻的影片。通過影像處理算法對影片中的飼料殘留區域進行量化,並計算出反映蝦食慾的飼料殘留區域變化指數(FRAVI)。影片中的蝦隻被YOLOv9-c模型和追蹤演算法追蹤。接著,利用飼料殘留檢測模組進行測量,衍生出關鍵的進食相關特性,包括蝦類數量、移動、進入頻率和停留時間。飼料殘留檢測模組達到了0.885的整體相關性,而YOLOv9-c模型達到了0.88的平均精度。此外,還監測了水溫、鹽度和溶解氧等環境因素,分析它們與進食相關特性的相關性。分析表明,水溫與蝦類活動水平正相關,較高的站壓與蝦類進入頻率正相關,顯示這些因素在飼料攝取效率中扮演著重要角色。本研究提供了關於蝦隻進食相關行為的持續、客觀和精確的信息,這些資訊可能有助於農民優化飼料管理和水產養殖實踐。 | zh_TW |
dc.description.abstract | Shrimp is a significant protein source globally. In shrimp farming, feed accounts for approximately 40% of overall expenses, so effective feeding management is crucial for optimizing shrimp growth and minimizing costs. Shrimp appetite is influenced by growth stage and ambient conditions. In addition, shrimp are benthic, which makes direct observation challenging. Conventionally, shrimp appetite has been determined by naked-eye observation of sample feed placed on trays; this approach, however, is time-consuming and subjective. To address these issues, this study aimed to automate the determination of shrimp appetite by observing feeding-related behaviors using deep neural networks. In the proposed approach, underwater video modules equipped with feed dispensers were built to capture videos of shrimp during sample feeding (i.e., dispensing a small amount of feed). Feed residue areas in the videos were quantified using image processing algorithms, and a feed residue area variation index (FRAVI) that indicates shrimp appetite was computed. Shrimp in the videos were detected and tracked using a YOLOv9-c model and the simple online and realtime tracking (SORT) algorithm. Feed residue was measured using a feed residue detection module. Key feeding-related characteristics, including shrimp count, movement, entry frequency, and dwelling time, were then derived. The feed residue detection module achieved an overall correlation of 0.885, and the YOLOv9-c model achieved a mean average precision of 0.88. Additionally, environmental factors such as water temperature, salinity, and dissolved oxygen were monitored to analyze their correlations with the feeding-related characteristics. The analysis indicated that water temperature is positively correlated with shrimp activity level and that higher station pressure is positively correlated with shrimp entry frequency, suggesting that these factors play a significant role in feed intake efficiency. The proposed approach provides continuous, objective, and precise information on the feeding-related behaviors of shrimp, which may aid farmers in optimizing feed management and aquaculture practices. | en |
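As a rough illustration of the image-processing step described in the abstract, the following is a minimal Python/OpenCV sketch of a daytime feed-residue (FR) measurement of the kind the thesis outlines (grayscale conversion, dynamic thresholding, morphological operations; see Figure 6 in the table of contents below). The helper names (`feed_residue_area`, `fravi`), the threshold and kernel parameters, and the relative-change formula standing in for FRAVI are assumptions for illustration only; this record does not specify the actual implementation.

```python
import cv2
import numpy as np

def feed_residue_area(frame_bgr):
    """Estimate the feed residue (FR) area, in pixels, in one video frame.

    A minimal sketch of the daytime pipeline named in the thesis:
    grayscale -> dynamic (adaptive) thresholding -> morphological operations.
    Block size, constant C, kernel size, and the assumption that pellets
    appear darker than the local background are all illustrative choices.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Dynamic thresholding: binarize each pixel against its local mean.
    binary = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
        cv2.THRESH_BINARY_INV, blockSize=51, C=5)
    # Opening removes speckle noise; closing fills small holes in blobs.
    kernel = np.ones((5, 5), np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    return int(np.count_nonzero(binary))

def fravi(areas):
    """Hypothetical stand-in for FRAVI: relative decrease of the FR area
    over a feeding session. The actual FRAVI definition is not given in
    this record."""
    a0, at = areas[0], areas[-1]
    return (a0 - at) / a0 if a0 > 0 else 0.0
```

Under these assumptions, sampling `feed_residue_area` over frames of a sample-feeding session and summarizing how quickly the area shrinks would yield an appetite indicator in the spirit of FRAVI.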
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-08-16T17:33:23Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2024-08-16T17:33:23Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | TABLE OF CONTENTS
ACKNOWLEDGEMENTS
中文摘要 (ABSTRACT IN CHINESE)
ABSTRACT
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
CHAPTER 1. INTRODUCTION
1.1 Background
1.2 Objectives
1.3 Organization
CHAPTER 2. LITERATURE REVIEW
2.1 Image processing techniques for quantifying shrimp feeding behavior
2.2 Deep learning techniques in aquaculture monitoring
2.3 Integration of deep learning with other techniques for comprehensive analysis
CHAPTER 3. MATERIALS AND METHODS
3.1 Overview of the system
3.2 Experiment site
3.3 Video acquisition
3.4 Image collection and conditions
3.5 Image calibration
3.6 Image annotation
3.7 Shrimp detection and tracking
3.8 Feed residue detection
3.9 Feeding-related characteristics of shrimps
CHAPTER 4. RESULTS AND DISCUSSION
4.1 Performance of the shrimp detection model
4.2 Challenging scenarios in shrimp detection
4.3 Performance of shrimp tracking
4.4 Performance of the feed residue measurement algorithms
4.5 Comparative analysis of feeding-related characteristics in strong and weak conditions
4.6 Feeding-related characteristics in a long-term study
4.7 Comparison of feeding-related characteristics between daytime and nighttime
4.8 Correlation between feeding-related characteristics and environmental conditions
4.9 Shrimp count difference between the two UVMs
CHAPTER 5. CONCLUSIONS
5.1 Summary
5.2 Future work
REFERENCES
LIST OF FIGURES
Figure 1. Flowchart of the proposed system.
Figure 2. Configuration of the aquaculture pond. Photos of the (I) automatic feeder, (II) underwater video module, and (III) water quality monitoring unit.
Figure 3. (a) Daytime and (b) nighttime images.
Figure 4. (a) Original and (b) calibrated UVM images.
Figure 5. Annotations of measurable and visible shrimps in (a) daytime and (b) nighttime images.
Figure 6. Pipeline of FR area identification on daytime images: (a) original image, (b) grayscale image, (c) binarized image using dynamic thresholding, and (d) binarized image after morphological operations.
Figure 7. Pipeline of FR area identification on nighttime images: (a) original image, (b) gamma-corrected grayscale image, (c) CLAHE image, (d) binarized image, and (e) binarized image after geometric filtering.
Figure 8. Illustration of the predefined ROI (yellow bounding box).
Figure 9. Precision-recall curve of the trained shrimp detection model.
Figure 10. Challenging scenarios: (a) overly bright lighting, (b) dim lighting, (c) signal distortions, (d) uneven lighting conditions, and (e) partial occlusion of shrimp by feed residue. The figure comprises (I) original images and (II) Grad-CAMs.
Figure 11. FR area under (a) strong and (b) weak conditions during the trial feeding period.
Figure 12. Shrimp count under (a) strong and (b) weak conditions during the trial feeding period.
Figure 13. Frequency of shrimp entering the ROI under (a) strong and (b) weak conditions during the trial feeding period.
Figure 14. Duration of stay within the ROI under (a) strong and (b) weak conditions during the trial feeding period.
Figure 15. Shrimp movement under (a) strong and (b) weak conditions during the trial feeding period (represented by heat maps).
Figure 16. Illustration of (a) FRAVI, (b) shrimp count, (c) entry frequency, (d) duration of stay, and (e) movement for a single cultivation batch.
Figure 17. Boxplots illustrating the differences in (a) shrimp count, (b) duration of stay, (c) movement, and (d) entry frequency between daytime and nighttime conditions.
Figure 18. (a) Water quality data and (b) weather data during the cultivation period.
Figure 19. Correlation matrix between feeding-related characteristics and environmental conditions.
Figure 20. Comparison of shrimp counts from the two cameras during feeding sessions.
LIST OF TABLES
Table 1. Number of training and test images and labeled bounding boxes for the SFRCM
Table 2. Evaluation results of shrimp tracking
Table 3. Evaluation results of the FR measurement algorithms
Table 4. t-test of feeding-related characteristics between daytime and nighttime | - |
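Figure 7 in the list above names the nighttime FR steps: gamma-corrected grayscale, CLAHE, binarization, and geometric filtering. The sketch below strings those operations together in OpenCV under assumed parameter values (the gamma exponent, CLAHE clip limit, Otsu binarization, and blob-area bounds are illustrative choices, and `nighttime_fr_mask` is a hypothetical helper name); it shows the sequence rather than reproducing the thesis implementation.

```python
import cv2
import numpy as np

def nighttime_fr_mask(frame_bgr, gamma=0.5, min_area=20, max_area=2000):
    """Sketch of the nighttime FR pipeline (cf. Figure 7): gamma-corrected
    grayscale -> CLAHE -> binarization -> geometric filtering.
    All parameter values are assumptions, not taken from the thesis."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Gamma correction (gamma < 1 brightens dark frames) via a lookup table.
    lut = np.array([255.0 * (i / 255.0) ** gamma for i in range(256)],
                   dtype=np.uint8)
    gray = cv2.LUT(gray, lut)
    # CLAHE boosts local contrast without over-amplifying noise.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    # Binarization with Otsu's automatically chosen global threshold.
    _, binary = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Geometric filtering: keep only connected components whose pixel
    # area is plausible for feed residue.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    mask = np.zeros_like(binary)
    for i in range(1, n):  # label 0 is the background
        if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area:
            mask[labels == i] = 255
    return mask
```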
dc.language.iso | zh_TW | - |
dc.title | 利用深度學習量化白蝦進食相關特性 | zh_TW |
dc.title | Quantifying Feeding-related Characteristics of Shrimp Using Deep Learning | en |
dc.type | Thesis | - |
dc.date.schoolyear | 112-2 | - |
dc.description.degree | Master's | - |
dc.contributor.oralexamcommittee | 陳永耀;韓玉山;朱元南 | zh_TW |
dc.contributor.oralexamcommittee | Yung-Yao Chen;Yu-San Han;Yuan-Nan Chu | en |
dc.subject.keyword | 深度學習, 機器視覺, 蝦類養殖, 蝦類進食行為 | zh_TW |
dc.subject.keyword | Deep learning, shrimp behavior, computer vision, shrimp farming | en |
dc.relation.page | 33 | - |
dc.identifier.doi | 10.6342/NTU202403695 | - |
dc.rights.note | Authorized (open access worldwide) | - |
dc.date.accepted | 2024-08-13 | - |
dc.contributor.author-college | College of Bioresources and Agriculture | - |
dc.contributor.author-dept | Department of Biomechatronics Engineering | - |
Appears in collections: | Department of Biomechatronics Engineering
Files in this item:
File | Size | Format |
---|---|---|
ntu-112-2.pdf | 3.61 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.