Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80934

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 李綱(Kang Li) | |
| dc.contributor.author | Chun-Ju Chen | en |
| dc.contributor.author | 陳俊儒 | zh_TW |
| dc.date.accessioned | 2022-11-24T03:22:35Z | - |
| dc.date.available | 2021-11-08 | |
| dc.date.available | 2022-11-24T03:22:35Z | - |
| dc.date.copyright | 2021-11-08 | |
| dc.date.issued | 2021 | |
| dc.date.submitted | 2021-09-21 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80934 | - |
| dc.description.abstract | To address the initial pose estimation and kidnapped-robot problems in robot localization, this study develops a deep learning model that predicts the 6-DoF pose within a map from a single RGB image and runs in real time on an embedded system. On top of a specially structured convolutional neural network, a geometric-constraint module and an auxiliary learning method are added to improve accuracy, and this model is used to predict the initial pose. Building on it, a localization-failure detection algorithm is developed that evaluates long-term and short-term changes in the cost of the primary localization system, achieving real-time operation, a low false-alarm rate, and a high failure-detection rate. Experiments show that the model's inference time of 4.7 ms is the fastest among current map coordinate regression models, reaching 19.15 fps on an embedded system, with mean errors of 1.51 m and 7.79° on the outdoor dataset and 0.40 m and 1.78° on the indoor dataset. The failure-detection algorithm achieves a precision of 0.98–1.00 and a recall of 0.78–1.00, demonstrating a low false-alarm rate and a high localization-failure detection rate. | zh_TW |
| dc.description.provenance | Made available in DSpace on 2022-11-24T03:22:35Z (GMT). No. of bitstreams: 1 U0001-1409202110324400.pdf: 4009491 bytes, checksum: 993f328351d6e8cc827e0d978f35847a (MD5) Previous issue date: 2021 | en |
| dc.description.tableofcontents | Thesis Committee Certification...# Acknowledgements...i Chinese Abstract...ii ABSTRACT...iii CONTENTS...iv LIST OF FIGURES...vi LIST OF TABLES...viii Chapter 1 Introduction...1 1.1 Research Background...1 1.2 Motivation and Objectives...2 1.3 Contributions...3 Chapter 2 Literature Review...4 2.1 Learning-Based Localization Methods...4 2.1.1 Visual Odometry...6 2.1.3 Map Coordinate Regression...7 2.2 Localization Failure Detection...9 Chapter 3 Methodology...11 3.1 Base Model Architecture...12 3.1.1 Depthwise Separable Convolution...12 3.1.2 Linear Bottlenecks and Inverted Residuals...15 3.1.3 Global Average Pooling Layer...17 3.1.4 Pose Regressor...18 3.2 Model Training Methods...18 3.2.1 Geometric-Constraint Module...19 3.2.2 Auxiliary Learning Architecture...21 3.2.3 Loss Function with Learnable Weights...24 3.3 Monte Carlo Uncertainty Estimation...26 3.4 Initial Pose Estimation and Localization Failure Detection Module...29 3.4.1 Initial Pose Estimation...29 3.4.2 Localization Failure Detection Algorithm...30 3.4.3 Failure-Detection Parameter Design...33 Chapter 4 Experimental Results and Discussion...38 4.1 Map Coordinate Regression Model Experiments...38 4.1.1 Model Training...38 4.1.2 Parameter Count and Computational Resource Comparison...39 4.1.3 Cambridge Landmarks Dataset Experiments...40 4.1.4 7-Scenes Dataset Experiments...42 4.2 Auxiliary Localization Experiments...45 4.2.1 Experimental Equipment and Environment...45 4.2.2 Initial Pose Estimation Experiments...48 4.2.3 Localization Failure Detection Experiments...51 Chapter 5 Conclusions and Future Work...61 5.1 Conclusions...61 5.2 Future Work...61 REFERENCE...62 | |
| dc.language.iso | zh-TW | |
| dc.subject | 卷積深度學習 | zh_TW |
| dc.subject | 視覺定位 | zh_TW |
| dc.subject | 定位失效偵測 | zh_TW |
| dc.subject | Deep Convolutional Neural Network | en |
| dc.subject | Visual Localization System | en |
| dc.subject | Localization Failure Detection | en |
| dc.title | 基於深度學習六維姿態迴歸之自主移動機器人初始姿態估測與定位失效偵測方法 | zh_TW |
| dc.title | Initial localization and localization failure detection for autonomous mobile robots using deep learning based 6D pose regression | en |
| dc.date.schoolyear | 109-2 | |
| dc.description.degree | Master | |
| dc.contributor.oralexamcommittee | 林沛群(Hsin-Tsai Liu),蘇偉儁(Chih-Yang Tseng) | |
| dc.subject.keyword | 視覺定位,定位失效偵測,卷積深度學習 | zh_TW |
| dc.subject.keyword | Visual Localization System, Localization Failure Detection, Deep Convolutional Neural Network | en |
| dc.relation.page | 68 | |
| dc.identifier.doi | 10.6342/NTU202103161 | |
| dc.rights.note | Authorized for release (campus access only) | |
| dc.date.accepted | 2021-09-23 | |
| dc.contributor.author-college | 工學院 | zh_TW |
| dc.contributor.author-dept | 機械工程學研究所 | zh_TW |
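The abstract above describes a failure-detection rule that compares long-term and short-term changes in the primary localization system's matching cost. As a minimal illustrative sketch of that idea only (the class name, window sizes, and ratio threshold below are assumptions for illustration, not the thesis's actual statistic or parameters, which are designed in section 3.4):

```python
from collections import deque


class FailureDetector:
    """Illustrative long-/short-term cost monitor.

    All names and thresholds here are hypothetical; the thesis's own
    parameter-design procedure is described in its section 3.4.3.
    """

    def __init__(self, short_win=10, long_win=100, ratio_thresh=2.0):
        self.short = deque(maxlen=short_win)   # recent costs
        self.long = deque(maxlen=long_win)     # long-term cost history
        self.ratio_thresh = ratio_thresh

    def update(self, cost):
        """Feed the current matching cost; return True when a failure
        is suspected, i.e. the short-term average cost rises well above
        its long-term level."""
        self.short.append(cost)
        self.long.append(cost)
        if len(self.long) < self.long.maxlen:
            return False  # not enough history to judge yet
        short_avg = sum(self.short) / len(self.short)
        long_avg = sum(self.long) / len(self.long)
        return short_avg > self.ratio_thresh * long_avg
```

Comparing a short window against a long window is what gives the low false-alarm rate reported in the abstract: a single noisy cost sample barely moves the short-term average, while a sustained rise (as after robot kidnapping) quickly pushes it past the long-term baseline.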
| Appears in Collections: | Department of Mechanical Engineering |
Files in This Item:
| File | Size | Format |
|---|---|---|
| U0001-1409202110324400.pdf (access restricted to NTU campus IPs; use the VPN service from off campus) | 3.92 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
