Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/20646
Title: | Exploiting Contextual Information for Visual Search (利用情境資訊的影像搜尋) |
Authors: | Yin-Hsi Kuo 郭盈希 |
Advisor: | 徐宏民 |
Keyword: | Bag-of-Words (BoW), Vector of Locally Aggregated Descriptors (VLAD), Deep features, Contextual information, Visual search |
Publication Year: | 2017 |
Degree: | Doctoral |
Abstract: | With the prevalence of capture devices, people are accustomed to sharing their images and videos on social media (e.g., Flickr and Facebook). To provide relevant information (e.g., reviews, landmark names, products) for these uploaded media, the need for effective and efficient visual search (e.g., image retrieval, mobile visual search, product search) is emerging. It enables numerous applications such as recommendation, annotation, and advertisement. State-of-the-art approaches (visual features) usually suffer from low recall rates because small changes in lighting conditions, viewpoints, or occlusions can degrade performance significantly. We observe that large media collections come with rich contextual cues such as tags, geo-locations, descriptions, and timestamps. Hence, we propose to exploit different kinds of contextual information alongside state-of-the-art visual features to address these challenges, improving retrieval accuracy and providing diverse search results. |
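The fusion of visual features with contextual cues described in the abstract can be sketched as a simple late-fusion re-ranking step. This is a hypothetical illustration, not the thesis's actual method: the function names, the tag-based Jaccard context score, and the fixed weighting `alpha` are all assumptions made for the sketch.

```python
import math

def cosine(u, v):
    # Cosine similarity between two visual feature vectors (e.g., VLAD or deep features).
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def jaccard(a, b):
    # Contextual similarity from tag overlap (Jaccard index).
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def contextual_rerank(query_vec, query_tags, database, alpha=0.7):
    """Score each database image by a weighted sum of visual similarity
    and tag (contextual) similarity, then sort in descending order.
    `alpha` trades off visual vs. contextual evidence (an assumed scheme)."""
    scored = []
    for name, (vec, tags) in database.items():
        score = (alpha * cosine(query_vec, vec)
                 + (1 - alpha) * jaccard(query_tags, tags))
        scored.append((name, score))
    return sorted(scored, key=lambda s: s[1], reverse=True)

# Toy usage: two database images with feature vectors and tags.
db = {
    "tower.jpg": ([1.0, 0.0], ["tower", "paris"]),
    "beach.jpg": ([0.0, 1.0], ["beach", "sand"]),
}
results = contextual_rerank([1.0, 0.0], ["paris"], db)
```

Here the contextual tag overlap boosts images that are visually ambiguous but contextually consistent with the query, which is one plausible way context can recover matches that visual features alone would miss.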
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/20646 |
DOI: | 10.6342/NTU201702219 |
Fulltext Rights: | Not authorized for public access |
Appears in Collections: | Graduate Institute of Networking and Multimedia |
Files in This Item:
File | Size | Format
---|---|---
ntu-106-1.pdf (Restricted Access) | 19.1 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.