Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/68064
Title: Stereo View Synthesis for VR from a Moving Monocular 360 Camera (從移動的單一視角360相機生成虛擬實境立體視角影像)
Author: Jing Gu (顧晶)
Advisor: Yung-Yu Chuang (莊永裕)
Keywords: Omnistereo, SfM, novel view synthesis, warping
Publication Year: 2018
Degree: Master
Abstract: In this thesis, we introduce a method for synthesizing stereo views for VR from a 360 image sequence captured by a moving monocular 360 camera. Following the idea of Omnistereo, we generate two 360 images, one for each eye, so that viewing them together yields a stereoscopic 360 image. The input sequence is captured with a Gear 360, and the three-dimensional scene points and camera poses are recovered with Structure from Motion (VisualSFM). These 3D points serve as image features, from which we generate the feature points of the novel views via multi-viewpoint circular projection. The feature correspondences then guide an image warp that synthesizes the novel views while maintaining stereoscopic properties and preserving image structure, producing the final left- and right-eye 360 images. Finally, we display the results with PsViewer (a mobile VR viewer app), assigning one generated image to each eye, and view them through Google Cardboard.
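The multi-viewpoint circular projection used above can be illustrated with the standard omnistereo tangent-ray model: each column of the left- or right-eye panorama corresponds to a ray tangent to a viewing circle whose radius is half the interpupillary distance. The sketch below is a minimal illustration of that model only, not the thesis' exact formulation; the coordinate frame (y up), the default radius, and the sign convention for which eye takes which tangent direction are all assumptions.

```python
import numpy as np

def omnistereo_project(point, r=0.032, eye="left"):
    """Map a 3D point to (longitude, latitude) in an omnistereo panorama.

    Every panorama column is a ray tangent to a viewing circle of radius r
    centred at the origin in the x-z plane; the left- and right-eye
    panoramas use the two opposite tangent directions.
    """
    x, y, z = point                          # y up, x-z horizontal plane
    d = np.hypot(x, z)                       # horizontal distance to centre
    if d <= r:
        raise ValueError("point lies inside the viewing circle")
    phi = np.arctan2(x, z)                   # azimuth of the point
    offset = np.arcsin(r / d)                # tangency: d * sin(phi - lon) = ±r
    lon = phi - offset if eye == "left" else phi + offset
    # elevation as seen from the tangent point on the circle
    lat = np.arctan2(y, np.sqrt(d * d - r * r))
    return lon, lat
```

Projecting each SfM point with both eye settings gives the paired feature positions in the two target panoramas; the displacement between the pairs shrinks with distance, which is what produces the stereo disparity.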
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/68064 |
DOI: | 10.6342/NTU201800009 |
Full-Text License: Authorized for a fee
Appears in Collections: Department of Computer Science and Information Engineering
Files in This Item:
File | Size | Format |
---|---|---|
ntu-107-1.pdf (currently not authorized for public access) | 2.55 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.