Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71190
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 丁肇隆 | |
dc.contributor.author | Jun-Hao Huang | en |
dc.contributor.author | 黃俊豪 | zh_TW |
dc.date.accessioned | 2021-06-17T04:57:43Z | - |
dc.date.available | 2023-08-06 | |
dc.date.copyright | 2018-08-06 | |
dc.date.issued | 2018 | |
dc.date.submitted | 2018-07-26 | |
dc.identifier.citation | [1] Sadhu, T., Albu, A. B., Hoeberechts, M., Wisernig, E., & Wyvill, B. (2016, June). Obstacle detection for image-guided surface water navigation. In 2016 13th Conference on Computer and Robot Vision (CRV) (pp. 45-52). IEEE.
[2] Fefilatyev, S., Goldgof, D., Shreve, M., & Lembke, C. (2012). Detection and tracking of ships in open sea with rapidly moving buoy-mounted camera system. Ocean Engineering, 54, 1-12.
[3] Yan, Y., Shin, B. S., Mou, X., Mou, W., & Wang, H. (2015). Efficient horizon detection on complex sea for sea surveillance. International Journal of Electrical, Electronics and Data Communication, 3(12), 49-52.
[4] Achanta, R., Hemami, S., Estrada, F., & Susstrunk, S. (2009, June). Frequency-tuned salient region detection. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1597-1604). IEEE.
[5] Kristan, M., Kenk, V. S., Kovačič, S., & Perš, J. (2016). Fast image-based obstacle detection from unmanned surface vehicles. IEEE Transactions on Cybernetics, 46(3), 641-654.
[6] Mou, X., & Wang, H. (2016). Image-based maritime obstacle detection using global sparsity potentials. Journal of Information and Communication Convergence Engineering, 14(2), 129-135.
[7] Sadhu, T., Albu, A. B., Hoeberechts, M., Wisernig, E., & Wyvill, B. (2016, June). Obstacle detection for image-guided surface water navigation. In 2016 13th Conference on Computer and Robot Vision (CRV) (pp. 45-52). IEEE.
[8] Fefilatyev, S. (2008). Detection of marine vehicles in images and video of open sea.
[9] Wang, H., Wei, Z., Wang, S., Ow, C. S., Ho, K. T., & Feng, B. (2011, September). A vision-based obstacle detection system for unmanned surface vehicle. In 2011 IEEE Conference on Robotics, Automation and Mechatronics (RAM) (pp. 364-369). IEEE.
[10] Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3431-3440).
[11] Yu, F., & Koltun, V. (2015). Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122.
[12] Chen, L. C., Papandreou, G., Kokkinos, I., Murphy, K., & Yuille, A. L. (2014). Semantic image segmentation with deep convolutional nets and fully connected CRFs. arXiv preprint arXiv:1412.7062.
[13] Chen, L. C., Papandreou, G., Kokkinos, I., Murphy, K., & Yuille, A. L. (2018). DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4), 834-848.
[14] Chen, L. C., Papandreou, G., Schroff, F., & Adam, H. (2017). Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587.
[15] Badrinarayanan, V., Kendall, A., & Cipolla, R. (2017). SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12), 2481-2495.
[16] Ronneberger, O., Fischer, P., & Brox, T. (2015, October). U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 234-241). Springer, Cham.
[17] Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2016). Region-based convolutional networks for accurate object detection and segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(1), 142-158.
[18] Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2016). Region-based convolutional networks for accurate object detection and segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(1), 142-158.
[19] Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (pp. 91-99).
[20] He, K., Gkioxari, G., Dollár, P., & Girshick, R. (2017, October). Mask R-CNN. In 2017 IEEE International Conference on Computer Vision (ICCV) (pp. 2980-2988). IEEE.
[21] Neural Networks and Deep Learning, http://neuralnetworksanddeeplearning.com/
[22] Pan, S. J., & Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345-1359.
[23] Everingham, M., Van Gool, L., Williams, C. K., Winn, J., & Zisserman, A. (2010). The PASCAL Visual Object Classes (VOC) challenge. International Journal of Computer Vision, 88(2), 303-338.
[24] Dice, L. R. (1945). Measures of the amount of ecologic association between species. Ecology, 26(3), 297-302.
[25] Saxena, A., Driemeyer, J., & Ng, A. Y. (2008). Robotic grasping of novel objects using vision. The International Journal of Robotics Research, 27(2), 157-173.
[26] Wachs, J. P., Kölsch, M., Stern, H., & Edan, Y. (2011). Vision-based hand-gesture applications. Communications of the ACM, 54(2), 60-71.
[27] Szabo, R., & Gontean, A. (2013, November). Controlling a robotic arm in the 3D space with stereo vision. In 2013 21st Telecommunications Forum (TELFOR) (pp. 916-919). IEEE.
[28] Kendoul, F., Nonami, K., Fantoni, I., & Lozano, R. (2009). An adaptive vision-based autopilot for mini flying machines guidance, navigation and control. Autonomous Robots, 27(3), 165.
[29] Schmid, K., Tomic, T., Ruess, F., Hirschmüller, H., & Suppa, M. (2013, November). Stereo vision based indoor/outdoor navigation for flying robots. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3955-3962). IEEE.
[30] McGuire, K., de Croon, G., De Wagter, C., Tuyls, K., & Kappen, H. (2017). Efficient optical flow and stereo vision for velocity estimation and obstacle avoidance on an autonomous pocket drone. IEEE Robotics and Automation Letters, 2(2), 1070-1076.
[31] LeCun, Y. (2015). LeNet-5, convolutional neural networks. URL: http://yann.lecun.com/exdb/lenet, 20.
[32] 李佳謙. (2017). License plate recognition and vehicle tracking based on machine vision. Master's thesis, Institute of Engineering Science and Ocean Engineering, National Taiwan University, 1-95. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71190 | - |
dc.description.abstract | 隨著深度學習(Deep Learning)的蓬勃發展,自動駕駛是近年來最為熱門的研究項目之一,無論是自動駕駛汽車,或是無人飛機都有非常傑出的研究成果,應用到生活中的實際產品也十分的成功。隨著自動駕駛成為了未來的趨勢,我們認為將此技術應用到自動駕駛船舶也能夠獲得很好的成果。現代船舶往往配備多種系統來輔助航行,例如:雷達、聲納等,除了航行外,船舶最為重要的功能是避碰,實時的避免障礙物能夠保證航行的安全,然而在某些情況下,例如:障礙物大小不足以被雷達偵測時,人類仍然需要依靠肉眼來判斷海面上的船隻或是障礙物,因此我們認為電腦視覺能夠成為船舶航行時的輔助系統。近年來,卷積神經網路(Convolutional Neural Network)技術在電腦視覺、影像辨識等應用上獲得非常大的進步與成果,它除了提供強大的計算能力,其深層的架構亦是提供了良好的檢測成果。在本研究中,我們提出了以全卷積神經網路(Fully Convolutional Network, FCN)實現海面障礙物偵測系統,並導入雙目視覺的方法估算出障礙物之距離與方位角,實現船舶避碰之輔助系統。 | zh_TW |
dc.description.abstract | With the rapid development of deep learning, autonomous driving has become one of the most popular research topics in recent years. Both self-driving cars and autonomous drones have produced excellent research results, and real-world products built on these techniques have been very successful. Since autonomous driving appears to be a future trend, we believe the same techniques can perform well in an assistant system for autonomous ships. Modern ships are often equipped with various systems to assist navigation, such as radar and sonar. Besides navigation itself, the most important task of a ship is collision avoidance at sea: detecting obstacles in real time keeps the voyage safe. However, when an object is too small to be detected by radar, crews still rely on their own eyes to spot ships or obstacles on the sea surface. We therefore believe computer vision can serve as a reliable navigational assistant. In recent years, the Convolutional Neural Network (CNN) has achieved great success in computer vision, image recognition, and related applications; it provides powerful computing ability, and its deep architecture yields reliable detection results. In this research, we propose a sea surface object detection system based on a Fully Convolutional Network (FCN) and incorporate stereo vision to estimate the distance and azimuth of each detected object, realizing an assistant system for ship collision avoidance. | en |
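The distance-and-azimuth step described in the abstract rests on standard stereo triangulation: for a calibrated, rectified camera pair, depth is Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the horizontal disparity. A minimal sketch follows; the function name, calibration values, and pixel coordinates are hypothetical illustrations, not taken from the thesis.

```python
import math

def estimate_distance_azimuth(x_left, x_right, focal_px, baseline_m, cx):
    """Estimate object distance (m) and azimuth (deg) from a rectified stereo pair.

    x_left, x_right: horizontal pixel coordinate of the object in each image
    focal_px: focal length in pixels; baseline_m: camera separation in metres
    cx: principal-point column of the left image
    """
    disparity = x_left - x_right  # pixels; positive for objects in front of the rig
    if disparity <= 0:
        raise ValueError("non-positive disparity: object at infinity or mismatched")
    depth = focal_px * baseline_m / disparity  # triangulation: Z = f * B / d
    # Bearing of the object relative to the left camera's optical axis.
    azimuth = math.degrees(math.atan2(x_left - cx, focal_px))
    return depth, azimuth

# Hypothetical example: f = 800 px, B = 0.5 m, disparity = 20 px -> Z = 20 m
d, a = estimate_distance_azimuth(420.0, 400.0, 800.0, 0.5, 320.0)
```

In practice the matched pixel coordinates would come from the FCN's detected object regions in the left and right images, and f, B, and cx from camera calibration.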
dc.description.provenance | Made available in DSpace on 2021-06-17T04:57:43Z (GMT). No. of bitstreams: 1 ntu-107-R05525103-1.pdf: 19313509 bytes, checksum: 0e827408812461cf9773f67c89ce5daf (MD5) Previous issue date: 2018 | en |
dc.description.tableofcontents | Oral Defense Committee Approval i
Acknowledgements ii
Abstract (Chinese) iii
ABSTRACT iv
Table of Contents v
List of Figures vii
List of Tables ix
Chapter 1. Introduction 1
1.1 Background and Motivation 1
1.2 Contributions 2
1.3 Thesis Organization 2
Chapter 2. Literature Review 4
2.1 Semantic Segmentation 4
2.2 Convolutional Neural Networks 5
2.2.1 Convolutional Layer 6
2.2.2 Pooling Layer 7
2.2.3 Fully Connected and Softmax Layers 8
2.3 Fully Convolutional Networks 8
2.4 Principles of Stereo Vision 10
2.4.1 Projective Geometry 11
2.4.2 Intrinsic Parameters 13
2.4.3 Extrinsic Parameters 14
2.5 Two-Camera Model 16
Chapter 3. Methodology 18
3.1 Hardware Setup 18
3.2 System Architecture 22
3.3 Horizon Detection Module 24
3.4 Sea Surface Object Detection Module 28
Chapter 4. Experimental Results and Discussion 35
4.1 Evaluation Methods and Parameter Settings 35
4.1.1 Evaluation Methods 35
4.1.2 Parameter Settings 37
4.2 Test Dataset 37
4.3 Analysis of Experimental Results 38
4.3.1 Horizon Detection Module 38
4.3.2 Sea Surface Object Detection Module 44
4.3.3 Stereo Vision Evaluation Methods and Results 50
4.4 System Detection Results 55
Chapter 5. Conclusions and Future Work 57
References 59 | |
dc.language.iso | zh-TW | |
dc.title | 海面物體偵測 | zh_TW |
dc.title | Sea Surface Object Detection | en |
dc.type | Thesis | |
dc.date.schoolyear | 106-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 張瑞益,張恆華,林宇銜 | |
dc.subject.keyword | 自動駕駛,卷積神經網路(CNN),全卷積神經網路(FCN),電腦視覺,雙目視覺 | zh_TW |
dc.subject.keyword | Convolutional Neural Network (CNN), Fully Convolutional Network (FCN), Computer Vision, Stereo Vision | en |
dc.relation.page | 63 | |
dc.identifier.doi | 10.6342/NTU201802014 | |
dc.rights.note | Paid authorization | |
dc.date.accepted | 2018-07-27 | |
dc.contributor.author-college | 工學院 | zh_TW |
dc.contributor.author-dept | 工程科學及海洋工程學研究所 | zh_TW |
Appears in Collections: | Department of Engineering Science and Ocean Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-107-1.pdf Currently not authorized for public access | 18.86 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated where specific license terms are named.