NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73954
Full metadata record
(DC field: value [language])
dc.contributor.advisor: 郭彥甫
dc.contributor.author: Yi-Chin Lu [en]
dc.contributor.author: 呂易晉 [zh_TW]
dc.date.accessioned: 2021-06-17T08:14:49Z
dc.date.available: 2024-08-20
dc.date.copyright: 2019-08-20
dc.date.issued: 2019
dc.date.submitted: 2019-08-14
dc.identifier.citation: Bishop, C. M. (1995). Neural networks for pattern recognition. Oxford University Press.
Cao, Z., Principe, J. C., Ouyang, B., Dalgleish, F., and Vuorenkoski, A. (2015, October). Marine animal classification using combined CNN and hand-designed image features. In OCEANS 2015-MTS/IEEE Washington (pp. 1-6).
Dai, J., Li, Y., He, K., and Sun, J. (2016). R-FCN: Object detection via region-based fully convolutional networks. In Advances in neural information processing systems (pp. 379-387).
FAO. (2018). The State of World Fisheries and Aquaculture 2018 - Meeting the sustainable development goals. Rome: FAO.
Fu, J., Zheng, H., and Mei, T. (2017). Look closer to see better: Recurrent attention convolutional neural network for fine-grained image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4438-4446).
Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4), 193-202.
Glorot, X., Bordes, A., and Bengio, Y. (2011, June). Deep sparse rectifier neural networks. In Proceedings of the fourteenth international conference on artificial intelligence and statistics (pp. 315-323).
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861v1.
Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4700-4708).
Ioffe, S., and Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167v3.
Kingma, D. P., and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980v9.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).
Larsen, R., Olafsdottir, H., and Ersbøll, B. K. (2009, June). Shape and texture based classification of fish species. In Scandinavian Conference on Image Analysis (pp. 745-749). Springer, Berlin, Heidelberg.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
LeCun, Y. A., Bottou, L., Orr, G. B., and Müller, K. R. (2012). Efficient backprop. In Neural networks: Tricks of the trade (pp. 9-48). Springer, Berlin, Heidelberg.
Lin, M., Chen, Q., and Yan, S. (2013). Network in network. arXiv preprint arXiv:1312.4400v3.
Lin, T. Y., RoyChowdhury, A., and Maji, S. (2015). Bilinear CNN models for fine-grained visual recognition. In Proceedings of the IEEE international conference on computer vision (pp. 1449-1457).
Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., and Berg, A. C. (2016, October). SSD: Single shot multibox detector. In European conference on computer vision (pp. 21-37). Springer, Cham.
Lu, Y. C., Tung, C., and Kuo, Y. F. (2019). Identifying the species of harvested tuna and billfish using deep convolutional neural networks. ICES Journal of Marine Science.
Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. (2017). Automatic differentiation in PyTorch. In NIPS 2017 Autodiff Workshop.
Reithaug, A. (2018). Employing Deep Learning for Fish Recognition. (Master's thesis, Western Norway University of Applied Sciences). Retrieved from https://www.uru.no/content/Employing_Deep_Learning_for_Fish_Recognition.pdf
Ren, S., He, K., Girshick, R., and Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems (pp. 91-99).
Salman, A., Siddiqui, S. A., Shafait, F., Mian, A., Shortis, M. R., Khurshid, K., Ulges, A., and Schwanecke, U. (2019). Automatic fish detection in underwater videos by a deep neural network-based hybrid motion learning system. ICES Journal of Marine Science.
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L. C. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4510-4520).
Siddiqui, S. A., Salman, A., Malik, M. I., Shafait, F., Mian, A., Shortis, M. R., and Harvey, E.S. (2017). Automatic fish species classification in underwater videos: exploiting pre-trained deep neural network models to compensate for limited labelled data. ICES Journal of Marine Science, 75(1), 374-389.
Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556v6.
Strachan, N. J. C., Nesvadba, P., and Allen, A. R. (1990). Fish species recognition by shape analysis of images. Pattern Recognition, 23(5), 539-544.
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2016). Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2818-2826).
Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A. A. (2017). Inception-v4, inception-resnet and the impact of residual connections on learning. In Thirty-First AAAI Conference on Artificial Intelligence.
Wen, Y., Zhang, K., Li, Z., and Qiao, Y. (2016, October). A discriminative feature learning approach for deep face recognition. In European conference on computer vision (pp. 499-515). Springer, Cham.
Zion, B., Alchanatis, V., Ostrovsky, V., Barki, A., and Karplus, I. (2007). Real-time underwater sorting of edible fish species. Computers and Electronics in Agriculture, 56(1), 34-45.
Zoph, B., Vasudevan, V., Shlens, J., and Le, Q. V. (2018). Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 8697-8710).
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73954
dc.description.abstract: Fish catch statistics reported by vessels are essential information for the management of marine resources. These statistics were conventionally recorded by onboard observers or fishermen, but manual recording is time-consuming and can be subjective; thus, there is a demand for automatic collection and reporting of catch statistics. However, the decks of fishing vessels are usually full of miscellaneous items, making automatic reporting challenging. In recent years, convolutional neural networks (CNNs) have become increasingly popular and have been applied to complex machine vision tasks. This study used deep CNNs to automatically identify 11 species or types of fish harvested by longliners. The species included albacore (Thunnus alalunga), bigeye tuna (T. obesus), yellowfin tuna (T. albacares), southern bluefin tuna (T. maccoyii), blue marlin (Makaira nigricans), Indo-Pacific sailfish (Istiophorus platypterus), swordfish (Xiphias gladius), and dolphin fish (Coryphaena hippurus). Four deep CNNs modified from the VGG-16, ResNet-50, DenseNet-201, and MobileNetV2 architectures were trained to identify the species and types of fish in images collected on longliners. A center loss function was also applied during training to improve the performance of the CNNs. The best CNN reached an accuracy of 95.83%, and the fastest required a processing time of 1.75 ms per image on a GPU and 107.82 ms per image on a CPU. [en]
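The center loss mentioned in the abstract penalizes the distance between each deep feature vector and the running center of its class, pulling same-species features together so that visually similar species (e.g. the tunas) become easier to separate. A minimal pure-Python sketch of the idea follows; the function names and the simplified batch-mean center update are illustrative only, not the thesis's actual implementation (which trains the term jointly with softmax cross-entropy, L = L_softmax + λ·L_C):

```python
def center_loss(features, labels, centers):
    """Center loss term (Wen et al., 2016): half the mean squared Euclidean
    distance between each feature vector and its class center,
        L_C = (1 / 2N) * sum_i ||x_i - c_{y_i}||^2.
    features: list of feature vectors, labels: class index per vector,
    centers: one center vector per class.
    """
    total = 0.0
    for x, y in zip(features, labels):
        total += sum((xi - ci) ** 2 for xi, ci in zip(x, centers[y]))
    return 0.5 * total / len(features)


def update_centers(features, labels, centers, alpha=0.5):
    """Nudge each class center toward the mean of the features assigned to it
    in the current batch (simplified: the original paper uses a
    count-weighted delta instead of the plain batch mean)."""
    for c in range(len(centers)):
        members = [x for x, y in zip(features, labels) if y == c]
        if not members:
            continue  # class absent from this batch; leave its center alone
        mean = [sum(col) / len(members) for col in zip(*members)]
        centers[c] = [ci + alpha * (mi - ci) for ci, mi in zip(centers[c], mean)]
    return centers
```

During training, this term would be added to the classification loss with a weighting factor λ, and the centers updated alongside the network weights after each mini-batch.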
dc.description.provenance: Made available in DSpace on 2021-06-17T08:14:49Z (GMT). No. of bitstreams: 1. ntu-108-R06631011-1.pdf: 2004965 bytes, checksum: fc65a59c3aa024c7205751f47077bd94 (MD5). Previous issue date: 2019 [en]
dc.description.tableofcontents: ACKNOWLEDGEMENTS i
摘要 ii
ABSTRACT iii
TABLE OF CONTENTS iv
LIST OF FIGURES vi
LIST OF TABLES viii
CHAPTER 1. INTRODUCTION 1
1.1 General background information 1
1.2 Objectives 2
1.3 Organization 3
CHAPTER 2. LITERATURE REVIEW 4
2.1 Traditional image-based approaches for fish species identification 4
2.2 Convolutional Neural Network 4
2.3 Fine-grained classification 5
CHAPTER 3. MATERIALS AND METHODS 7
3.1 Fish image collection 7
3.2 Image preprocessing and manipulation 8
3.3 Transfer learning and pre-trained models 9
3.4 Model training details 15
CHAPTER 4. RESULTS AND DISCUSSION 17
4.1 Convergence of the models 17
4.2 Performance of the trained models 17
4.3 Challenging cases 20
4.4 Processing time of the models 20
4.5 Performance comparison with baseline model 21
CHAPTER 5. CONCLUSIONS 23
REFERENCES 24
dc.language.iso: en
dc.subject: Convolutional neural network [en]
dc.subject: Center loss [en]
dc.subject: Fish species identification [en]
dc.subject: Fine-grained classification [en]
dc.subject: Deep learning [en]
dc.title: 利用深度卷積神經網路辨識延繩釣漁獲中常見之魚種 [zh_TW]
dc.title: Identifying Species of Common Sea Fish Harvested by Longliner Using Deep Convolutional Neural Networks [en]
dc.type: Thesis
dc.date.schoolyear: 107-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 謝清祿, 花凱龍, 鄭文皇
dc.subject.keyword: Center loss, Convolutional neural network, Deep learning, Fine-grained classification, Fish species identification [en]
dc.relation.page: 28
dc.identifier.doi: 10.6342/NTU201903085
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2019-08-15
dc.contributor.author-college: 生物資源暨農學院 (College of Bioresources and Agriculture) [zh_TW]
dc.contributor.author-dept: 生物產業機電工程學研究所 (Graduate Institute of Bio-Industrial Mechatronics Engineering) [zh_TW]
Appears in collections: 生物機電工程學系 (Department of Bio-Industrial Mechatronics Engineering)

Files in this item:
ntu-108-1.pdf, 1.96 MB, Adobe PDF (restricted; not authorized for public access)

