Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71674
Title: | A Generative Dual Model for Cross-Resolution Person Re-Identification |
Author: | Yu-Jhe Li (李宇哲) |
Advisor: | Yu-Chiang Wang (王鈺強) |
Keywords: | Person re-identification, Super-resolution, Deep learning, Machine learning, Representation learning, Computer vision |
Publication Year: | 2019 |
Degree: | Master's |
Abstract: | Person re-identification (re-ID) aims at matching images of the same identity across camera views. Due to varying distances between the cameras and the persons of interest, resolution mismatch can be expected, which degrades person re-ID performance in real-world scenarios. To overcome this problem, we propose a novel generative adversarial network to address cross-resolution person re-ID, allowing query images with varying resolutions. By advancing adversarial learning techniques, our proposed model learns resolution-invariant image representations while being able to recover missing details in low-resolution input images. The resulting features can thus be jointly applied for improved re-ID performance. Our experiments on three benchmark datasets confirm the effectiveness of our method and its superiority over state-of-the-art approaches, especially when the input resolutions are unseen during training. |
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71674 |
DOI: | 10.6342/NTU201900069 |
Full-Text License: | Paid access |
Appears in Collections: | Graduate Institute of Communication Engineering |
Files in This Item:
File | Size | Format |
---|---|---|
ntu-108-1.pdf (currently not authorized for public access) | 13.21 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.