Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/77292
Title: | PolarPillars: A 360 Object Detector for LiDAR Point Clouds |
Author: | Kai-Chun Wang (王凱群) |
Advisor: | Winston Hsu (徐宏民) |
Keywords: | 3D Object Detection, LiDAR Sensor, 3D Deep Learning, Autonomous Driving |
Publication Year: | 2019 |
Degree: | Master |
Abstract: | 3D object detection is a crucial problem in autonomous driving. Previous methods build 3D detection frameworks by transferring techniques that succeeded in 2D image detection. Most of them focus on extracting robust features from raw point clouds or on fusing point-cloud features with image features, while few study the characteristics of the point clouds that LiDAR sensors actually produce. In this thesis, we study the distribution of LiDAR point clouds and observe two priors. First, the observed part of an object depends on the object's direction relative to the LiDAR sensor. Second, the sparsity of the scanned point cloud depends on the object's distance from the sensor. We propose PolarPillars, a novel architecture that exploits these observations by learning point-cloud features in polar coordinates, making the network invariant to object direction. We evaluate our network on the KITTI 3D object detection benchmark and the nuScenes dataset. Experimental results show that our network achieves higher accuracy than state-of-the-art detectors and generalizes better when applied to 360-degree point clouds. |
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/77292 |
DOI: | 10.6342/NTU201902766 |
Full-text authorization: | Not authorized |
Appears in Collections: | Computer Science and Information Engineering |
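
The abstract describes learning point-cloud features in polar coordinates so that the network becomes invariant to an object's direction relative to the sensor. Below is a minimal sketch of that coordinate transform, assuming a cylindrical (range, azimuth, height) parameterization with the LiDAR sensor at the origin; the function name and exact feature layout are illustrative and not taken from the thesis.

```python
import numpy as np

def cartesian_to_polar(points):
    """Convert LiDAR points from Cartesian (x, y, z) to cylindrical
    polar coordinates (range, azimuth, z).

    points: (N, 3) array of x, y, z coordinates; the LiDAR sensor is
    assumed to sit at the origin. (Hypothetical helper for illustration.)
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)        # horizontal distance to the sensor
    theta = np.arctan2(y, x)  # azimuth angle around the sensor
    return np.stack([r, theta, z], axis=1)

# Two points at the same range but different azimuths map to the same
# (r, z) and differ only in theta -- the property that lets a detector
# treat objects the same regardless of their direction from the sensor.
pts = np.array([[10.0, 0.0, -1.5],
                [0.0, 10.0, -1.5]])
print(cartesian_to_polar(pts))
```

Grouping points into pillars along (r, theta) rather than (x, y), as the PolarPillars name suggests, would then align the pillar grid with the sensor's scanning geometry.
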
Files in This Item:
File | Size | Format
---|---|---
ntu-107-2.pdf (currently not authorized for public access) | 5.78 MB | Adobe PDF