Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90134
Title: | Fusion of Radar and LiDAR Using Associative Mechanism for Object Detection in Adverse Weather Conditions |
Author: | 陳柏維 Bo-Wei Chen |
Advisor: | 李明穗 Ming-Sui Lee |
Keywords: | deep learning, multimodal object detection, feature fusion based on attention mechanism |
Publication Year: | 2023 |
Degree: | Master's |
Abstract: | With the continuous development of deep learning technology, the accuracy of object detection has steadily improved, and Level 5 autonomous driving is within reach. In favorable weather conditions, the average precision of object detection can exceed 85 percent. However, the weather is not always ideal: rain, fog, and even snow can significantly reduce detection accuracy. Traditional sensors such as cameras and LiDAR are susceptible to harsh weather, so we adopt a fusion of RADAR and LiDAR for object detection. RADAR is unaffected by adverse environmental conditions but introduces many noisy points. We therefore use LiDAR as an auxiliary sensor: it provides accurate environmental point cloud information, which helps mitigate ghost detections. We employ an attention mechanism to fuse the features from LiDAR and RADAR. Additionally, we propose a Feature Selection Module to address the issue of attention weights in the attention mechanism, and an Associative Feature Fusion Module to fully utilize the features selected by the attention mechanism. Experiments demonstrate that our proposed model outperforms state-of-the-art RADAR and LiDAR models. |
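The thesis text itself is not reproduced in this record. As a rough illustration of the attention-based fusion the abstract describes, the sketch below uses scaled dot-product cross-attention with RADAR features as queries and LiDAR features as keys/values, so that LiDAR's cleaner geometry reweights the noisier RADAR features. All function names, shapes, and the residual combination are illustrative assumptions, not the author's actual Feature Selection or Associative Feature Fusion modules.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fusion(lidar_feats, radar_feats):
    """Fuse RADAR features with LiDAR context via scaled dot-product
    cross-attention: RADAR features are queries, LiDAR features are
    keys and values (hypothetical design, for illustration only)."""
    d = lidar_feats.shape[-1]
    scores = radar_feats @ lidar_feats.T / np.sqrt(d)  # (Nr, Nl) similarity
    weights = softmax(scores, axis=-1)                 # attention weights
    attended = weights @ lidar_feats                   # (Nr, d) LiDAR context
    # Residual combination keeps the original RADAR signal.
    return radar_feats + attended

rng = np.random.default_rng(0)
lidar = rng.standard_normal((128, 64))  # 128 LiDAR features, dim 64
radar = rng.standard_normal((32, 64))   # 32 RADAR features, dim 64
fused = cross_attention_fusion(lidar, radar)
print(fused.shape)  # (32, 64)
```

In a real detector these features would come from per-sensor backbones, and the attention would typically be multi-headed with learned projections; this sketch only shows the core reweighting step.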
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90134 |
DOI: | 10.6342/NTU202302360 |
Full-Text Authorization: | Authorized (publicly available worldwide) |
Appears in Collections: | Graduate Institute of Networking and Multimedia |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-111-2.pdf | 5.12 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.