NTU Theses and Dissertations Repository
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/66922
Title: 利用跨視角學習與空間估計辨識四軸飛行器視角的建築物
Drone-View Building Identification by Cross-View Visual Learning and Spatial Estimation
Authors: Chun-Wei Chen
陳俊瑋
Advisor: 徐宏民 (Winston H. Hsu)
Keyword: 四軸飛行器, 四軸飛行器視角, 建築物辨識
Drone, Drone-View Image, Building Identification
Publication Year: 2017
Degree: Master's (碩士)
Abstract: 近年來,四軸飛行器日趨盛行,同時有別於傳統攝影機額外裝備了多種傳感器(可以同時得到影像與四軸的地理位置),同時為了實踐四軸相關應用,提供環境相關資訊(例如周遭建築物資訊)是非常重要的。因此我們定義了這樣的四軸飛行器視角建築物辨識的問題:給予一個地標還有他的若干張影像和地理位置,以及四軸的地理位置,在四軸視角影像中找出最有可能的建築物影像。然而雖然當前很少有標記的四軸影像,但我們有許多其他視角的影像可以利用像是地面上拍的,或者街景圖與高空圖。因此,我們提出了一個跨視角三重態神經網路去學習四軸視角與其他視角的視覺相似度。此外,我們進一步考慮了空間估計:每一個建築物影像的四軸角度與四軸長度,利用四軸與地標的地理位置空間資訊來改進這樣困難的跨視角視覺搜尋。除此之外,因為缺乏標記的四軸視角資料我們也收集了嶄新的四軸視角數據集(Drone-BR)。接著我們計算並實驗了不同的神經網路並研究出在不同情況下如何得到最好的表現。最後,我們提出的方法比最新卷積網路要好0.29 mAP,能讓四軸更深入地了解他的周遭環境。
Recently, drones have become increasingly popular and, unlike traditional cameras, are equipped with several types of sensors (providing both images and the drone's geo-location). To enable drone-based applications, it is essential to provide related information (e.g., building information) for understanding the environment around the drone. We frame drone-view building identification as a building retrieval problem: given a building (a multimodal query) with its images and geo-location, together with the drone's current location, retrieve the most likely proposal (building candidate) in a drone-view image. Although few annotated drone-view images exist to date, there are fortunately many images from other viewpoints, such as ground-level, street-view, and aerial images. Hence, we propose a cross-view triplet neural network to learn visual similarity between drone-view and other views. In addition, we further consider spatial estimation (drone-angle and drone-distance) for each building proposal, utilizing the drone's geo-location on the geographic map to tackle this challenging cross-view image retrieval problem. Moreover, owing to the lack of annotated drone-view datasets, we collect a new drone-view dataset (Drone-BR) on our own. We evaluate different neural networks and investigate how to achieve the best performance under various conditions. Finally, our method outperforms state-of-the-art approaches (CNN features) by 0.29 mAP, which indeed helps drones understand their surroundings more deeply.
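To make the approach described in the abstract more concrete, below is a minimal sketch, assuming PyTorch, of (1) a shared embedding network trained with a cross-view triplet loss that pulls a drone-view building proposal toward a matching image from another viewpoint, and (2) a geo-location-based distance/bearing helper standing in for the drone-distance and drone-angle cues. The backbone architecture, embedding size, margin, Earth-radius constant, and all names here are illustrative assumptions, not the thesis's released implementation.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossViewEmbedder(nn.Module):
    """Maps an image crop (drone-view or other-view) to an L2-normalized embedding.
    The tiny backbone here is a placeholder, not the CNN used in the thesis."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.backbone(x), dim=1)

def cross_view_triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet hinge loss: pull the drone-view proposal (anchor) toward a matching
    other-view image (positive) and push it away from a non-matching one (negative)."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

def drone_distance_and_angle(drone_latlon, building_latlon):
    """Approximate ground distance (meters) and bearing (degrees) from the drone to a
    building, computed from their geo-locations; a stand-in for the thesis's
    drone-distance / drone-angle spatial cues."""
    lat1, lon1 = map(math.radians, drone_latlon)
    lat2, lon2 = map(math.radians, building_latlon)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    dist_m = 2 * 6371000 * math.asin(math.sqrt(a))   # haversine distance
    bearing = math.degrees(math.atan2(
        math.sin(dlon) * math.cos(lat2),
        math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)))
    return dist_m, bearing % 360

if __name__ == "__main__":
    net = CrossViewEmbedder()
    drone_crop     = torch.randn(4, 3, 64, 64)   # drone-view building proposals
    same_building  = torch.randn(4, 3, 64, 64)   # e.g. street-view images of the same buildings
    other_building = torch.randn(4, 3, 64, 64)   # images of different buildings
    loss = cross_view_triplet_loss(net(drone_crop), net(same_building), net(other_building))
    loss.backward()
    print(float(loss), drone_distance_and_angle((25.0194, 121.5387), (25.0173, 121.5405)))

The sketch shows the two ingredients separately; how the thesis actually fuses the learned visual similarity with the spatial estimates during retrieval is not reproduced here.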
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/66922
DOI: 10.6342/NTU201703157
Fulltext Rights: Paid authorization (有償授權)
Appears in Collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in This Item:
File: ntu-106-1.pdf (Restricted Access)
Size: 14.06 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
