Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71716
Title: | Based on Hybrid Features and Deep Learning Model to Handle Vehicle Tracking and Re-identification |
Author: | Chih-Wei Wu (吳治緯) |
Advisor: | Jian-Jiun Ding (丁建均) |
Keywords: | Deep Learning, Vehicle Tracking, Vehicle Re-identification, Convolutional Neural Network |
Publication Year: | 2020 |
Degree: | Master |
Abstract: | Vehicle tracking and re-identification have become increasingly popular research topics in recent years, and they are indispensable components of self-driving systems. Traditional object tracking and re-identification techniques mostly rely on low-level features, such as edges, to design the algorithm, but they often face two major problems: multi-viewpoint patterns and occlusion. To handle these two problems, researchers have turned to other methods. Thanks to improvements in hardware technology and the resulting increase in computing resources, deep learning has developed rapidly in recent years, and research applying deep learning to computer vision is abundant. In this thesis, we address these two problems by proposing two different algorithms, one based on traditional methods and one based on deep learning, to improve on the shortcomings of previous methods. The first method improves on conventional approaches. We use hybrid features, including local, global, salient-region, and location information, to process the original vehicle image, compute a similarity score from the results of matching these features, and use the score to distinguish different vehicles. The second method is designed on a deep learning architecture. Within this architecture, we design three sub-modules that extract important global, regional, and detailed feature information, respectively. Compared with state-of-the-art approaches, our method achieves strong performance on benchmark datasets. To compare the performance of the first method with current methods, we collected 10 sets of vehicle driving videos covering both daytime and nighttime conditions and accounting for multi-viewpoint phenomena and occlusion. On this collected dataset, our first method outperforms the other current approaches. |
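The first method described in the abstract fuses similarity scores computed from several feature types. As a minimal sketch of that idea, the snippet below combines per-feature cosine similarities with a weighted average and thresholds the result; the feature extractors, weights, and threshold here are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_similarity(feats_a, feats_b, weights=None):
    """Fuse per-feature similarities (e.g. global, local, salient, location)
    into one score via a weighted average. Equal weights are an assumption."""
    if weights is None:
        weights = {k: 1.0 / len(feats_a) for k in feats_a}
    return sum(weights[k] * cosine_similarity(feats_a[k], feats_b[k])
               for k in feats_a)

def same_vehicle(feats_a, feats_b, threshold=0.8):
    """Decide whether two detections show the same vehicle.
    The 0.8 threshold is a hypothetical choice for illustration."""
    return hybrid_similarity(feats_a, feats_b) >= threshold
```

In practice, each entry of the feature dictionary would come from a different extractor (a global descriptor, local patches, salient regions, and camera-location cues), so the fused score degrades gracefully when any single cue is corrupted by occlusion or viewpoint change.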
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71716 |
DOI: | 10.6342/NTU202004304 |
Full-text License: | Paid authorization |
Appears in Collections: | Graduate Institute of Communication Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
U0001-2310202014132500.pdf (currently not authorized for public access) | 3.77 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.