NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70718

Full metadata record
DC Field | Value | Language
dc.contributor.advisor: 洪一平
dc.contributor.author: Pin-Hsin Lin [en]
dc.contributor.author: 林品忻 [zh_TW]
dc.date.accessioned: 2021-06-17T04:36:00Z
dc.date.available: 2028-08-08
dc.date.copyright: 2018-08-14
dc.date.issued: 2018
dc.date.submitted: 2018-08-08
dc.identifier.citation[1] Bârsan, I. A., et al., “Robust Dense Mapping for Large-Scale Dynamic Environments.” In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). 2018.
[2] Girshick, R., et al., “Rich feature hierarchies for accurate object detection and semantic segmentation.” In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2014.
[3] Girshick, R., “Fast r-cnn.” In: Proceedings of the IEEE international conference on computer vision. 2015.
[4] Ren, S., et al., “Faster r-cnn: Towards real-time object detection with region proposal networks.” In: Advances in Neural Information Processing Systems (NIPS). 2015.
[5] Redmon, J., et al., “You only look once: Unified, real-time object detection.” In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (CVPR) 2016.
[6] Liu, W., et al., “Ssd: Single shot multibox detector.” In: European Conference on Computer Vision (ECCV). Springer. Cham. 2016.
[7] Dai, J., et al., “Instance-aware semantic segmentation via multi-task network cascades.” In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016.
[8] Pinheiro, P. O., et al., “Learning to refine object segments.” In: European Conference on Computer Vision (ECCV). Springer. Cham. 2016.
[9] Intel RealSense Depth Camera SR300. https://click.intel.com/intelrealsense-developer-kit-featuring-sr300.html
[10] Weeview Stereo Camera. https://www.esentra.com.tw/product/sid-3d-camera/
[11] Kähler, O., et al., “Very high frame rate volumetric integration of depth images on mobile devices.” IEEE Transactions on Visualization and Computer Graphics (TVCG). 2015.
[12] Pinheiro, P. O., et al., “Learning to segment object candidates.” In: Advances in Neural Information Processing Systems (NIPS). 2015.
[13] Geiger, A., et al., “Stereoscan: Dense 3d reconstruction in real-time.” In: IEEE Intelligent Vehicles Symposium (IV). 2011.
[14] Geiger, A., et al., “Efficient large-scale stereo matching.” Asian Conference on Computer Vision (ACCV). Springer. 2010.
[15] Lin, T. Y., et al., “Microsoft coco: Common objects in context.” In: European Conference on Computer Vision (ECCV). Springer. Cham. 2014.
[16] Wu, C., “Towards linear-time incremental structure from motion.” In: International Conference on 3D Vision (3DV). 2013.
[17] Mur-Artal, R., et al., “ORB-SLAM: a versatile and accurate monocular SLAM system.” IEEE Transactions on Robotics. 2015.
[18] Newcomben, R. A., et al., “KinectFusion: Real-time dense surface mapping and tracking.” In: IEEE International Symposium on Mixed and augmented reality (ISMAR). 2011.
[19] Curless, B., et al., “A volumetric method for building complex models from range images.” In: Proceedings of conference on Computer graphics and interactive techniques. ACM. 1996.
[20] GoPro, GoPro Hero 4, https://zh.shop.gopro.com/International/cameras.
[21] Hu, Y. T., et al., “Maskrnn: Instance level video object segmentation.” In: Advances in Neural Information Processing Systems (NIPS). 2017.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70718
dc.description.abstract: Simultaneous Localization and Mapping (SLAM) is a method for solving the robot self-localization problem, and it has attracted growing discussion and research in recent years. SLAM methods generally assume that localization is performed in a static environment; real-world environments, however, are dynamic, with pedestrians and vehicles moving about. Using feature points that lie on dynamic objects for localization degrades accuracy. We therefore propose a method that combines the advantages of two deep-learning approaches to remove the feature points belonging to dynamic objects and to localize using the remaining image content. In this thesis, we focus on emerging deep-learning object-segmentation methods and study how removing dynamic-object feature points affects localization. Our method increases the per-frame recall of detecting potentially moving objects and, after judging each object's motion state during localization, stably removes dynamic objects to improve localization accuracy. [zh_TW]
dc.description.abstract: Simultaneous Localization and Mapping (SLAM) is a solution to the robotic ego-positioning problem that has become increasingly popular. SLAM techniques normally assume a static environment; in practice, however, environments are often dynamic, containing moving vehicles or pedestrians. Using features on dynamic objects for SLAM degrades positioning accuracy. We therefore propose a method that segments and removes dynamic objects from the input images during SLAM by combining the advantages of two deep-learning-based segmentation methods. In this thesis, we investigate state-of-the-art deep-learning-based segmentation methods and the impact of dynamic object segmentation on SLAM. Our method increases the recall of detecting potentially moving objects in each frame and robustly neglects dynamic objects to improve positioning accuracy. [en]
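The core idea in the abstract — discarding feature points that fall on segmented dynamic objects before pose estimation — can be sketched as follows. This is a minimal illustration under assumed interfaces, not the thesis implementation: `filter_static_keypoints`, its array layout, and the binary dynamic-object mask are all assumptions for the sketch.

```python
import numpy as np

def filter_static_keypoints(keypoints, dynamic_mask):
    """Keep only keypoints that fall outside the dynamic-object mask.

    keypoints    : (N, 2) array of (x, y) pixel coordinates
    dynamic_mask : (H, W) boolean array, True where a potentially
                   moving object (e.g. a car or pedestrian) was segmented
    """
    # Round coordinates to pixel indices and clamp to the image bounds.
    xs = np.clip(keypoints[:, 0].astype(int), 0, dynamic_mask.shape[1] - 1)
    ys = np.clip(keypoints[:, 1].astype(int), 0, dynamic_mask.shape[0] - 1)
    # A keypoint is static if the mask is False at its pixel (row = y, col = x).
    static = ~dynamic_mask[ys, xs]
    return keypoints[static]

# Toy example: one keypoint lands on a masked (dynamic) pixel and is dropped.
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True                           # dynamic pixel at (x=0, y=0)
kps = np.array([[0.0, 0.0], [2.0, 2.0]])
print(filter_static_keypoints(kps, mask))   # only [2.0, 2.0] survives
```

In a full pipeline, only the surviving keypoints would feed the visual-odometry stage, which is what the abstract claims improves positioning accuracy in dynamic scenes.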
dc.description.provenance: Made available in DSpace on 2021-06-17T04:36:00Z (GMT). No. of bitstreams: 1; ntu-107-R05944008-1.pdf: 3273356 bytes, checksum: fe555024d31e43c61960e42ffc900f00 (MD5); Previous issue date: 2018 [en]
dc.description.tableofcontents:
口試委員會審定書 (Oral Examination Committee Certification) I
誌謝 (Acknowledgements) II
中文摘要 (Chinese Abstract) III
ABSTRACT IV
CONTENTS V
LIST OF FIGURES VII
LIST OF TABLES IX
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Deep-learning-based Object Segmentation 2
1.3 Type of Cameras in SLAM 4
Chapter 2 Related Work 6
2.1 SLAM System 6
2.2 Deep-learning-based Object Segmentation 8
Chapter 3 Dynamic SLAM Based on Deep Learning 10
3.1 System Overview 10
3.1.1 Deep-learning-based Segmentation Method Combination 10
3.1.2 Dynamic Object Segmentation 12
3.2 System Details 13
3.2.1 Deep-learning-based Segmentation Method Combination 13
3.2.2 Sparse Scene Flow and Masked Scene Flow 13
3.2.3 Robust Visual Odometry and Object Motion Detection 14
3.2.4 Depth Computation and Static Map Reconstruction 16
Chapter 4 Experiments 18
4.1 Deep-learning-based Segmentation 18
4.1.1 Experiment Purpose 18
4.1.2 Experiment Evaluation 19
4.1.3 Experiment Result 20
4.2 SLAM in Dynamic Environments 21
4.2.1 Experiment Equipment 21
4.2.2 Experiment Evaluation 22
4.2.3 Experiment Scenario 1: Environments with Many Small and Medium Dynamic Objects 24
4.2.4 Experiment Scenario 2: Environments with One Large Dynamic Object 29
4.3 Summary 34
Chapter 5 Conclusion 36
Chapter 6 Future Work 37
REFERENCE 39
dc.language.iso: en
dc.subject: 動態環境 [zh_TW]
dc.subject: 深度學習 [zh_TW]
dc.subject: 物體分割 [zh_TW]
dc.subject: 戶外環境 [zh_TW]
dc.subject: 即時定位與地圖構建 [zh_TW]
dc.subject: simultaneous localization and mapping [en]
dc.subject: outdoor environment [en]
dc.subject: deep learning [en]
dc.subject: dynamic environment [en]
dc.subject: object segmentation [en]
dc.title: 在動態環境中使用深度學習之基於物體切割的即時定位與地圖構建 [zh_TW]
dc.title: SLAM with Object Segmentation in Dynamic Environments Using Deep Learning [en]
dc.type: Thesis
dc.date.schoolyear: 106-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 莊榮宏, 陳冠文, 邱志義, 石勝文
dc.subject.keyword: 即時定位與地圖構建, 深度學習, 物體分割, 動態環境, 戶外環境 [zh_TW]
dc.subject.keyword: simultaneous localization and mapping, deep learning, object segmentation, dynamic environment, outdoor environment [en]
dc.relation.page: 40
dc.identifier.doi: 10.6342/NTU201802702
dc.rights.note: 有償授權 (licensed for a fee)
dc.date.accepted: 2018-08-09
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia) [zh_TW]
Appears in Collections: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)

Files in This Item:
File | Size | Format
ntu-107-1.pdf (Restricted Access) | 3.2 MB | Adobe PDF


Items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.
