Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74082
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 郭彥甫(Yan-Fu Kuo) | |
dc.contributor.author | Kuan-Ying Ho | en |
dc.contributor.author | 何冠穎 | zh_TW |
dc.date.accessioned | 2021-06-17T08:19:15Z | - |
dc.date.available | 2021-02-22 | |
dc.date.copyright | 2021-02-22 | |
dc.date.issued | 2021 | |
dc.date.submitted | 2021-02-02 | |
dc.identifier.citation | Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., ... Kudlur, M. (2016). TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16) (pp. 265-283).
Bewley, A., Ge, Z., Ott, L., Ramos, F., Upcroft, B. (2016). Simple online and realtime tracking. In 2016 IEEE International Conference on Image Processing (ICIP) (pp. 3464-3468). IEEE.
Cgvict. (2017). roLabelImg [Source code]. https://github.com/cgvict/roLabelImg
Cowton, J., Kyriazakis, I., Bacardit, J. (2019). Automated individual pig localisation, tracking and behaviour metric extraction using deep learning. IEEE Access, 7, 108049-108060.
Dai, J., Li, Y., He, K., Sun, J. (2016). R-FCN: Object detection via region-based fully convolutional networks. In Advances in Neural Information Processing Systems (pp. 379-387).
Danelljan, M., Bhat, G., Khan, F. S., Felsberg, M. (2017). ECO: Efficient convolution operators for tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 6931-6939).
Everingham, M., Winn, J. (2011). The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Development Kit. Pattern Analysis, Statistical Modelling and Computational Learning, Technical Report.
Fawcett, T. (2006). An introduction to ROC analysis. Pattern Recognition Letters, 27(8), 861-874.
Girshick, R. (2015). Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1440-1448).
He, K., Gkioxari, G., Dollár, P., Girshick, R. (2017). Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2961-2969).
He, K., Zhang, X., Ren, S., Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778).
Huang, C. T. (2015). The restructuring policy of agro-manpower and farmland in Taiwan, ROC. FFTC Agricultural Policy Platform. http://ap.fftc.agnet.org/ap_db.php
Huang, G. B., Zhu, Q. Y., Siew, C. K. (2004). Extreme learning machine: A new learning scheme of feedforward neural networks. Neural Networks, 2, 985-990.
Ioffe, S., Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1), 35-45.
Kashiha, M. A., Bahr, C., Ott, S., Moons, C. P., Niewold, T. A., Tuyttens, F., Berckmans, D. (2014). Automatic monitoring of pig locomotion using image analysis. Livestock Science, 159, 141-148.
Kingma, D. P., Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Krizhevsky, A., Sutskever, I., Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-1105).
Kuhn, H. W. (1955). The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2), 83-97.
Lin, T. Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S. (2017). Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2117-2125).
Lin, T. Y., Goyal, P., Girshick, R., He, K., Dollár, P. (2017). Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2980-2988).
Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., Berg, A. C. (2016). SSD: Single shot multibox detector. In European Conference on Computer Vision (pp. 21-37). Springer, Cham.
Manning, C. D., Schütze, H. (1999). Foundations of statistical natural language processing. MIT Press.
Milan, A., Leal-Taixé, L., Reid, I., Roth, S., Schindler, K. (2016). MOT16: A benchmark for multi-object tracking. arXiv preprint arXiv:1603.00831.
Nasirahmadi, A., Sturm, B., Edwards, S., Jeppsson, K. H., Olsson, A. C., Müller, S., Hensel, O. (2019). Deep learning and machine vision approaches for posture detection of individual pigs. Sensors, 19(17), 3738.
Neubeck, A., Van Gool, L. (2006). Efficient non-maximum suppression. In 18th International Conference on Pattern Recognition (ICPR'06) (Vol. 3, pp. 850-855). IEEE.
Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A. (2017). Automatic differentiation in PyTorch.
Redmon, J., Divvala, S., Girshick, R., Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 779-788).
Ren, S., He, K., Girshick, R., Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (pp. 91-99).
Ristani, E., Solera, F., Zou, R., Cucchiara, R., Tomasi, C. (2016). Performance measures and a data set for multi-target, multi-camera tracking. In ECCV Workshop on Benchmarking Multi-Target Tracking.
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L. C. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4510-4520).
Tzutalin. (2015). LabelImg [Source code]. https://github.com/tzutalin/labelImg
Van Rossum, G., Drake Jr, F. L. (1995). Python tutorial (p. 130). Amsterdam, The Netherlands: Centrum voor Wiskunde en Informatica.
Wang, Z., Bovik, A. C., Sheikh, H. R., Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600-612.
Wojke, N., Bewley, A., Paulus, D. (2017). Simple online and realtime tracking with a deep association metric. In 2017 IEEE International Conference on Image Processing (ICIP) (pp. 3645-3649). IEEE.
Yang, Q., Xiao, D., Lin, S. (2018). Feeding behavior recognition for group-housed pigs with the Faster R-CNN. Computers and Electronics in Agriculture, 155, 453-460.
Yang, X., Liu, Q., Yan, J., Li, A. (2019). R3Det: Refined single-stage detector with feature refinement for rotating object. arXiv preprint arXiv:1908.05612.
Zhang, L., Gray, H., Ye, X., Collins, L., Allinson, N. (2018). Automatic individual pig detection and tracking in surveillance videos. arXiv preprint arXiv:1812.04901.
Zheng, C., Zhu, X., Yang, X., Wang, L., Tu, S., Xue, Y. (2018). Automatic recognition of lactating sow postures from depth images by deep learning detector. Computers and Electronics in Agriculture, 147, 51-63. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74082 | - |
dc.description.abstract | 豬肉是台灣和世界許多國家的重要蛋白質來源。維持仔豬的斷奶率對於滿足豬肉不斷增長的需求至關重要。新生仔豬比較脆弱,因此需要更多照顧。然而,人工觀察費時且勞動密集,因此本研究的目的為開發一種自動方法,在影片中識別母豬的泌乳行為、定位仔豬、追蹤個別仔豬並量化仔豬的移動狀況。在研究中,影片以每秒五張的速率轉換為圖像,圖像並經過預處理。首先開發手機網路第二版(MobileNetV2)模型來識別泌乳行為,接著開發精細旋轉視網膜網路(R3Det)模型來定位仔豬,隨後將簡單即時追蹤(SORT)演算法應用於追蹤個別仔豬,最後通過頻率和距離閾值過濾量化仔豬運動的誤差。本研究利用圖形處理器(Graphics Processing Unit, GPU)運算的MobileNetV2在哺乳辨識上達到95.45%的準確率及13.9毫秒∕影像的辨識速度;利用GPU運算的R3Det在仔豬定位上的平均精度達到87.08%、精準度為93.52%、召回率為88.52%及10.2影像∕秒的辨識速度;利用CPU運算的SORT演算法在仔豬追蹤上的多物體追蹤準確率為97.53%、多物體追蹤精準率為96.97%、IDF1為97.89%及171.6影像∕秒的辨識速度。本研究所提出的方法可達到即時的性能表現。 | zh_TW |
dc.description.abstract | Pork is an essential source of protein in Taiwan and in many countries around the world. Maintaining the weaning rate of piglets is essential to meeting the increasing demand for pork. Newborn piglets are relatively fragile and need extra attention, but manual observation is time-consuming and labor-intensive. This study therefore aimed to develop an automatic approach for recognizing the lactating behavior of sows, localizing piglets, tracking individual piglets, and quantifying piglet movements in videos. In the proposed approach, a video was converted to images at a rate of 5 fps, and the images were preprocessed. A MobileNetV2 model was developed to recognize lactating behavior. A refined rotation RetinaNet (R3Det) model was then developed to localize piglets. Subsequently, the simple online and realtime tracking (SORT) algorithm was applied to track individual piglets. Finally, the quantified piglet movements were filtered by frequency and distance thresholding to remove errors. The developed MobileNetV2 reached an overall accuracy of 95.45% and a test time of 13.9 ms per image using a GPU. The developed R3Det reached an overall mAP of 87.08%, a precision of 93.52%, a recall of 88.52%, and a processing speed of 10.2 fps using a GPU. The SORT algorithm reached an overall MOTA of 97.53%, a MOTP of 96.97%, an IDF1 of 97.89%, and a processing speed of 171.6 fps using a CPU. The proposed approach achieves real-time performance. | en |
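The tracking step described in the abstract can be sketched in a few lines. The fragment below is a minimal, illustrative Python version of the IoU-based data association at the core of SORT (Bewley et al., 2016); the box format, the 0.3 threshold, and the greedy matcher (standing in for the Hungarian assignment and Kalman prediction of full SORT) are assumptions for illustration, not the thesis's actual implementation.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, iou_threshold=0.3):
    """Greedy stand-in for SORT's Hungarian step: repeatedly match the
    highest-IoU (track, detection) pair, discarding overlaps below the
    threshold. Returns a list of (track_index, detection_index) pairs."""
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_threshold or ti in used_t or di in used_d:
            continue  # below threshold, or track/detection already assigned
        matches.append((ti, di))
        used_t.add(ti)
        used_d.add(di)
    return matches
```

In a full tracker, unmatched detections would spawn new track identities and unmatched tracks would be coasted by the Kalman prediction for a few frames before deletion, which is how piglet identities persist through brief occlusions.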
dc.description.provenance | Made available in DSpace on 2021-06-17T08:19:15Z (GMT). No. of bitstreams: 1 U0001-2601202114392400.pdf: 3027307 bytes, checksum: cbf689d35fdc79913c45448d41c9a4df (MD5) Previous issue date: 2021 | en |
dc.description.tableofcontents | ACKNOWLEDGEMENTS i
摘要 ii
ABSTRACT iii
TABLE OF CONTENTS iv
LIST OF FIGURES vi
LIST OF TABLES ix
CHAPTER 1. INTRODUCTION 1
1.1 Background 1
1.2 Objectives 1
1.3 Organization 2
CHAPTER 2. LITERATURE REVIEW 3
2.1 Image-processing-based and machine-learning-based approaches for pig detection 3
2.2 Pig detection and tracking using deep learning 4
CHAPTER 3. MATERIALS AND METHODS 5
3.1 Farrowing crates 5
3.2 Embedded system for video collection 5
3.3 Image preprocessing 6
3.4 Image annotation 7
3.5 Lactating behavior recognition 8
3.6 Piglet localization 8
3.7 Piglet tracking and movement analysis 12
CHAPTER 4. RESULTS AND DISCUSSION 14
4.1 The training of the lactating recognition model 14
4.2 The performance of lactating behavior recognition 14
4.3 Training loss of the R3Det model 16
4.4 Performance of piglet localization 16
4.5 Failure case study of piglet detection 18
4.6 The performance of piglet tracking 19
4.7 The performance of movement quantification 24
CHAPTER 5. CONCLUSION 26
REFERENCES 27 | |
dc.language.iso | en | |
dc.title | 利用卷積神經網路自動辨識母豬哺乳行為及追蹤仔豬 | zh_TW |
dc.title | Automatic Recognizing Lactating Behaviors of Sows and Tracking of Piglets in Farrowing Houses Using Convolutional Neural Networks | en |
dc.type | Thesis | |
dc.date.schoolyear | 109-1 | |
dc.description.degree | 碩士 (Master's) | |
dc.contributor.oralexamcommittee | 林恩仲(En-Chung Lin),鄭文皇(Wen-Huang Cheng),花凱龍(Kai-Lung Hua) | |
dc.subject.keyword | 卷積類神經網路,多物件追蹤,物件偵測,仔豬活動力 | zh_TW |
dc.subject.keyword | Convolutional neural networks, Multi-object tracking, Object detection, Piglet activity | en |
dc.relation.page | 31 | |
dc.identifier.doi | 10.6342/NTU202100180 | |
dc.rights.note | 有償授權 (paid authorization) | |
dc.date.accepted | 2021-02-03 | |
dc.contributor.author-college | 生物資源暨農學院 (College of Bioresources and Agriculture) | zh_TW |
dc.contributor.author-dept | 生物機電工程學系 (Department of Biomechatronics Engineering) | zh_TW |
Appears in Collections: | 生物機電工程學系 (Department of Biomechatronics Engineering)
Files in This Item:
File | Size | Format |
---|---|---|---|
U0001-2601202114392400.pdf (currently not authorized for public access) | 2.96 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.