NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/47689
Full metadata record
DC Field: Value (Language)
dc.contributor.advisor: 莊永裕
dc.contributor.author: Ting-Tzu Chang (en)
dc.contributor.author: 張庭慈 (zh_TW)
dc.date.accessioned: 2021-06-15T06:12:49Z
dc.date.available: 2010-08-16
dc.date.copyright: 2010-08-16
dc.date.issued: 2010
dc.date.submitted: 2010-08-12
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/47689
dc.description.abstract: Google Street View now offers users an online street-browsing system that covers streets in most regions of the world. It uses omnidirectional images to build photorealistic 360-degree panoramic bubbles, giving users an immersive sense of virtually walking through the scene. However, because the user's actions within a bubble are restricted and movement between bubbles is a discontinuous jump, the system cannot provide a good visual summary of a longer street.
This thesis therefore proposes a new system for panoramic visualization of Google Street View. Given only the addresses of a starting point and a destination, the system automatically fetches Google Street View data, recovers a 3D model of the route through structure from motion (SFM), and generates a multi-viewpoint panorama from the sparse sequence of consecutive omnidirectional images by minimizing an objective function with graph cuts (Graph-Cut). We show that the result is useful: with a single glance, a user can quickly obtain a visual summary of a long route. (zh_TW)
dc.description.abstract: Google Street View provides users with an online street navigation system available in many areas of the world. The system creates a photorealistic sense of a virtual visit by constructing immersive 360° panoramas, or bubbles, from omnidirectional images. However, it does not provide a good visual summary of a long street, because the user's actions within a bubble are limited and movement between bubbles is a discrete jump.
We therefore present a system for panoramic visualization of Google Street View. Once the user inputs the addresses of the starting point and the destination, the system automatically fetches data from Google Street View, recovers 3D structure through a structure-from-motion (SFM) framework, and produces a multi-viewpoint panorama from these sparse consecutive omnidirectional images by minimizing an objective function with Graph-Cut. We show that the result lets a user easily and rapidly obtain a visual summary of a long scene at a glance. (en)
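The pipeline described in the abstract (fetch panoramas along a route, recover camera poses with SFM, then composite a multi-viewpoint panorama via graph cuts) can be summarized as a short sketch. The stub code below is illustrative only and is not the thesis implementation; every function name (fetch_street_view_panoramas, recover_camera_poses, stitch_multi_viewpoint_panorama) is a hypothetical placeholder.

    # Minimal illustrative sketch of the pipeline in the abstract (hypothetical
    # names, not the thesis code): fetch -> structure from motion -> stitching.

    def fetch_street_view_panoramas(start_address, goal_address):
        """Hypothetical stage 1: query a directions service for the route
        between the two addresses and download the omnidirectional panorama
        (bubble) nearest to each point along the route."""
        return []  # list of equirectangular images ordered along the route

    def recover_camera_poses(panoramas):
        """Hypothetical stage 2: structure from motion over consecutive
        omnidirectional images (feature matching, essential-matrix estimation
        with RANSAC, relative pose recovery from epipolar geometry)."""
        return []  # one camera pose per panorama

    def stitch_multi_viewpoint_panorama(panoramas, poses):
        """Hypothetical stage 3: choose a picture surface along the street,
        assign a source viewpoint to each output column by minimizing an
        objective with graph cuts, and composite the selected strips."""
        return None  # the long-scene multi-viewpoint panorama

    if __name__ == "__main__":
        panoramas = fetch_street_view_panoramas("start address", "goal address")
        poses = recover_camera_poses(panoramas)
        result = stitch_multi_viewpoint_panorama(panoramas, poses)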
dc.description.provenance: Made available in DSpace on 2021-06-15T06:12:49Z (GMT). No. of bitstreams: 1; ntu-99-R97922035-1.pdf: 34520656 bytes, checksum: bde58a55059d69289ed9f967711bb6a1 (MD5). Previous issue date: 2010 (en)
dc.description.tableofcontents:
Thesis Committee Approval Certificate
Acknowledgements
Abstract (Chinese)
Abstract
1 Introduction
1.1 Motivation
1.2 Problem Statement
1.3 Thesis Organization
2 Related Work
2.1 Omnidirectional Images
2.2 Multi-Viewpoint Panorama
3 System Overview
4 Google Street View Data Retrieval
4.1 Google Street View Introduction
4.2 Google Street View Images Online
4.3 Fetching Every Panorama on the Route
4.3.1 Google Direction Service
4.3.2 Traverse Along the Route
5 Structure From Motion
5.1 Feature Matching on Omnidirectional Images
5.1.1 Projecting Images
5.1.2 ASIFT Matching
5.2 Structure From Motion
5.2.1 Transformation of Coordinate System
5.2.2 Refining Matching Pairs and Finding the Essential Matrix Through RANSAC
5.2.3 Camera Pose Estimation by Epipolar Geometry
6 Image Based Rendering
6.1 Picture Surface Selection
6.1.1 User Defined Picture Surface
6.1.2 Automatically Defined Picture Surface
6.2 Viewpoint Selection
7 Experiment Results
7.1 Weights Variation
7.2 User and Automatic Picture Surface Selection
7.3 More Panorama Results
7.4 Failure Cases
7.4.1 Feature Matching Stage
7.4.2 Image Based Rendering Stage
8 Conclusion and Future Work
8.1 Conclusion
8.2 Future Work
Bibliography
dc.language.iso: en
dc.subject: Google Street View (zh_TW)
dc.subject: Graph Cut (zh_TW)
dc.subject: Structure from Motion (SFM) (zh_TW)
dc.subject: Omnidirectional Images (zh_TW)
dc.subject: Panorama (zh_TW)
dc.subject: Graph-Cut (en)
dc.subject: Google Street View (en)
dc.subject: Structure from Motion (SFM) (en)
dc.subject: Panorama (en)
dc.subject: Omnidirectional Images (en)
dc.title: Long-Scene Panoramic Visualization for Google Street View Images (zh_TW)
dc.title: Long-Scene Panoramic Visualization for Google Street View Images (en)
dc.type: Thesis
dc.date.schoolyear: 98-2
dc.description.degree: Master's
dc.contributor.oralexamcommittee: 陳文進, 周承復
dc.subject.keyword: Google Street View, Structure from Motion (SFM), Panorama, Omnidirectional Images, Graph Cut (zh_TW)
dc.subject.keyword: Google Street View, Structure from Motion (SFM), Panorama, Omnidirectional Images, Graph-Cut (en)
dc.relation.page: 41
dc.rights.note: Authorized with compensation
dc.date.accepted: 2010-08-13
dc.contributor.author-college: College of Electrical Engineering and Computer Science (zh_TW)
dc.contributor.author-dept: Graduate Institute of Computer Science and Information Engineering (zh_TW)
Appears in Collections: Department of Computer Science and Information Engineering

Files in This Item:
File | Size | Format | Access
ntu-99-1.pdf | 33.71 MB | Adobe PDF | Restricted (not authorized for public access)


All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
