
DSpace

The institutional repository DSpace preserves digital materials of all kinds (e.g., text, images, PDF) and makes them easily accessible.

Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73948
Full metadata record
DC field: value [language]
dc.contributor.advisor: 郭彥甫 (Yan-Fu Kuo)
dc.contributor.author: Chi-Hsuan Tseng [en]
dc.contributor.author: 曾啟軒 [zh_TW]
dc.date.accessioned: 2021-06-17T08:14:38Z
dc.date.available: 2024-08-22
dc.date.copyright: 2019-08-22
dc.date.issued: 2019
dc.date.submitted: 2019-08-14
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73948
dc.description.abstract: Catch statistics for harvested fish are a key factor in the sustainable use and management of marine resources. In recent years, many vessels have adopted electronic monitoring systems (EMS) to record fishing operations; observers at data centers then review the EMS videos and compile catch statistics. Manual review and recording is time-consuming and labor-intensive. This study therefore proposes a method that uses deep convolutional neural networks to automatically detect and count fish in videos and to measure fish body lengths. Mask region-based convolutional neural networks (Mask R-CNN) were used to detect and segment the fish in each video frame; fish were counted using time and distance thresholds; the class probabilities and masks predicted by Mask R-CNN were then used to identify fish types and measure body lengths. The Mask R-CNN model achieved a recall of 96.46% and a mean average precision of 93.51% in fish detection, the fish-counting method achieved a recall of 93.84% and a precision of 77.31%, and fish type identification in the videos achieved an accuracy of 98.06%. [zh_TW]
dc.description.abstract: The statistics of harvested fish are key indicators for marine resource management and sustainability. In recent years, electronic monitoring systems (EMS) have been used to record the fishing practices of vessels; the harvested fish in the EMS videos are later counted manually by operators in data centers. Manual collection is, however, time-consuming and labor-intensive. This study proposes to automatically detect harvested fish, identify fish types, and measure fish body lengths in EMS videos using deep learning. In the study, fish in the frames of the EMS videos were detected and segmented from the background at the pixel level using mask region-based convolutional neural networks (Mask R-CNN). Fish counts were then determined using time thresholding and distance thresholding. Subsequently, the types and body lengths of the fish were determined using the confidence scores and the masks, respectively, predicted by the Mask R-CNN model. The developed Mask R-CNN model reached a recall of 96.46% and a mean average precision of 93.51% in detection. The proposed method for fish counting reached a recall of 93.84% and a precision of 77.31%. The proposed method for fish type identification reached an accuracy of 98.06%. [en]
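As a rough illustration of the time- and distance-thresholding step described in the abstract, the sketch below links per-frame detections into tracks and counts one fish per track. The threshold values, the detection format, and the function name are assumptions for illustration, not the thesis's actual parameters.

```python
# Sketch of fish counting by time and distance thresholding.
# Detections are (frame_index, (x, y)) centroids produced by a
# per-frame detector such as Mask R-CNN. Thresholds are assumed.
import math

TIME_THRESH = 15    # max frame gap to link two detections (assumed)
DIST_THRESH = 80.0  # max centroid distance in pixels (assumed)

def count_fish(detections):
    """Group per-frame detections into individual fish tracks.

    detections: list of (frame_index, (x, y)), sorted by frame_index.
    Returns the number of distinct fish counted.
    """
    tracks = []  # each track stores its last (frame_index, (x, y))
    for frame, (x, y) in detections:
        best = None
        for t in tracks:
            last_frame, (lx, ly) = t
            if frame - last_frame > TIME_THRESH:
                continue  # too far apart in time to be the same fish
            d = math.hypot(x - lx, y - ly)
            if d <= DIST_THRESH and (best is None or d < best[1]):
                best = (t, d)
        if best is None:
            tracks.append((frame, (x, y)))  # new fish enters the scene
        else:
            idx = tracks.index(best[0])
            tracks[idx] = (frame, (x, y))   # extend the existing track
    return len(tracks)
```

A detection close in both time and space to an existing track is treated as the same fish; any other detection starts a new track and increments the count.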
dc.description.provenance: Made available in DSpace on 2021-06-17T08:14:38Z (GMT). No. of bitstreams: 1; ntu-108-R06631004-1.pdf: 2552562 bytes, checksum: 9148208b788162dce385c91d90b48b1c (MD5); previous issue date: 2019 [en]
dc.description.tableofcontents:
ACKNOWLEDGEMENTS i
摘要 (Chinese Abstract) ii
ABSTRACT iii
TABLE OF CONTENTS iv
LIST OF FIGURES vi
LIST OF TABLES viii
CHAPTER 1. INTRODUCTION 1
1.1 Background 1
1.2 Objectives 1
1.3 Organization 2
CHAPTER 2. LITERATURE REVIEW 3
2.1 Image-processing-based approaches for fish detection 3
2.2 Fish detection and counting using deep learning 4
CHAPTER 3. MATERIALS AND METHODS 5
3.1 Image collection and training data preprocessing 5
3.2 Architecture of Mask R-CNN and training methodology 6
3.3 Fish counting in the videos 9
3.4 Fish type identification and body length measurement 10
CHAPTER 4. RESULTS AND DISCUSSION 12
4.1 The training loss of the Mask R-CNN model 12
4.2 Feature maps of the developed Mask R-CNN model 12
4.3 The performance of fish and buoy detection 14
4.4 Failure case study of fish and buoy detection 15
4.5 The performance of the fish counting 17
4.6 The performance of fish type identification 19
4.7 The performance of fish body length estimation 21
CHAPTER 5. CONCLUSION 23
REFERENCES 24
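The body-length measurement outlined in Sections 3.4 and 4.7 can be illustrated with a minimal sketch: the length of a fish is taken as the extent of its predicted mask pixels along the mask's principal axis, converted to centimeters with a calibration factor. Both the `CM_PER_PIXEL` value and the principal-axis approach are illustrative assumptions; the thesis's actual procedure may differ.

```python
# Sketch: estimate fish body length from a predicted binary mask.
# Length is the extent of the mask pixels along their principal
# axis; the pixel-to-centimeter scale is an assumed calibration.
import numpy as np

CM_PER_PIXEL = 0.1  # assumed camera calibration factor

def body_length_cm(mask):
    """mask: 2D boolean/0-1 array from an instance segmentation model."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return 0.0
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)              # center the mask pixels
    # Principal axis = eigenvector of the covariance matrix with the
    # largest eigenvalue (the direction of greatest spread).
    cov = np.cov(pts.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]
    proj = pts @ axis                    # project pixels onto the axis
    return float(proj.max() - proj.min()) * CM_PER_PIXEL
```

For an elongated fish mask, the principal axis roughly follows the body from snout to tail, so the projected extent approximates body length in pixels before scaling.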
dc.language.iso: en
dc.subject: 卷積類神經網路 (convolutional neural networks) [zh_TW]
dc.subject: 魚體長 (fish body length) [zh_TW]
dc.subject: 漁業資源管理 (fisheries resource management) [zh_TW]
dc.subject: 物件偵測 (object detection) [zh_TW]
dc.subject: 實體切割 (instance segmentation) [zh_TW]
dc.subject: fish resource management [en]
dc.subject: instance segmentation [en]
dc.subject: convolutional neural networks [en]
dc.subject: fish body length [en]
dc.subject: object detection [en]
dc.title: 利用深度卷積類神經網路偵測及計算影片中魚體並測量魚體長 (Detecting and counting fish in videos and measuring fish body lengths using deep convolutional neural networks) [zh_TW]
dc.title: Detecting and Counting Harvested Fish and Measuring Fish Body Lengths in EMS Videos Using Deep Convolutional Neural Networks [en]
dc.type: Thesis
dc.date.schoolyear: 107-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 鄭文皇 (Wen-Huang Cheng), 花凱龍 (Kai-Lung Hua), 謝清祿 (Ching-Lu Hsieh)
dc.subject.keyword: 卷積類神經網路, 魚體長, 漁業資源管理, 物件偵測, 實體切割 (convolutional neural networks, fish body length, fisheries resource management, object detection, instance segmentation) [zh_TW]
dc.subject.keyword: convolutional neural networks, fish body length, fish resource management, object detection, instance segmentation [en]
dc.relation.page: 28
dc.identifier.doi: 10.6342/NTU201903433
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2019-08-15
dc.contributor.author-college: 生物資源暨農學院 (College of Bio-Resources and Agriculture) [zh_TW]
dc.contributor.author-dept: 生物產業機電工程學研究所 (Graduate Institute of Bio-Industrial Mechatronics Engineering) [zh_TW]
Appears in collections: 生物機電工程學系 (Department of Bio-Industrial Mechatronics Engineering)

Files in this item:
File | Size | Format
ntu-108-1.pdf (restricted, not publicly accessible) | 2.49 MB | Adobe PDF


Except where otherwise indicated by their licensing terms, items in this system are protected by copyright, with all rights reserved.

Contact
No. 1, Sec. 4, Roosevelt Rd., Da'an Dist., Taipei 10617, Taiwan (R.O.C.)
Tel: (02)33662353
Email: ntuetds@ntu.edu.tw
© NTU Library All Rights Reserved