NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69366

Full metadata record

DC Field: Value (Language)
dc.contributor.advisor: 歐陽明 (Ming Ouhyoung)
dc.contributor.author: Tze-How Liew (en)
dc.contributor.author: 劉志豪 (zh_TW)
dc.date.accessioned: 2021-06-17T03:13:53Z
dc.date.available: 2018-07-19
dc.date.copyright: 2018-07-19
dc.date.issued: 2018
dc.date.submitted: 2018-07-11
dc.identifier.citation:
[1] OpenSfM. https://github.com/mapillary/OpenSfM. Dec 2014.
[2] R. Anderson, D. Gallup, J. T. Barron, J. Kontkanen, N. Snavely, C. H. Esteban, S. Agarwal, and S. M. Seitz. Jump: Virtual reality video. ACM Trans. Graph., 35(6), 2016.
[3] P. Chang and M. Hebert. Omni-directional structure from motion. In Proceedings IEEE Workshop on Omnidirectional Vision (Cat. No.PR00704), pages 127–133, 2000.
[4] Y. Furukawa and J. Ponce. Accurate, dense, and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8):1362–1376, Aug 2010.
[5] J. Huang, Z. Chen, D. Ceylan, and H. Jin. 6-DoF VR videos with a single 360-camera. In 2017 IEEE Virtual Reality (VR), pages 37–44, March 2017.
[6] H.-S. Lin, C.-C. Chang, H.-Y. Chang, Y.-Y. Chuang, T.-L. Lin, and M. Ouhyoung. A low-cost portable polycamera for stereoscopic 360 imaging. IEEE Transactions on Circuits and Systems for Video Technology, to appear.
[7] K. Matzen, M. F. Cohen, B. Evans, J. Kopf, and R. Szeliski. Low-cost 360 stereo photography and video capture. ACM Trans. Graph., 36(4):148:1–148:12, July 2017.
[8] B. Micusik and T. Pajdla. Structure from motion with wide circular field of view cameras. IEEE Trans. Pattern Anal. Mach. Intell., 28(7):1135–1149, July 2006.
[9] S. Peleg, M. Ben-Ezra, and Y. Pritch. Omnistereo: Panoramic stereo imaging. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(3):279–290, Mar 2001.
[10] E. Penner and L. Zhang. Soft 3D reconstruction for view synthesis. ACM Trans. Graph., 36(6), 2017.
[11] D. Scaramuzza, A. Martinelli, and R. Siegwart. A flexible technique for accurate omnidirectional camera calibration and structure from motion. In Fourth IEEE International Conference on Computer Vision Systems (ICVS'06), pages 45–45, Jan 2006.
[12] N. Snavely, S. M. Seitz, and R. Szeliski. Photo tourism: Exploring photo collections in 3D. In SIGGRAPH Conference Proceedings, pages 835–846, New York, NY, USA, 2006. ACM Press.
[13] X. Xia and B. Kulis. W-Net: A deep model for fully unsupervised image segmentation. CoRR, abs/1711.08506, 2017.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69366
dc.description.abstract (zh_TW, translated): This thesis proposes generating arbitrary user-chosen virtual viewpoints from only a small number of panoramic images, so that the content can be viewed on a virtual-reality device for an immersive experience. Recording a scene conventionally requires capturing a large number of images or a video, which is costly in capture time and storage space. We therefore propose a system that uses only a few images to produce user-oriented information sufficient to describe a scene, with acceptable results. The proposed pipeline consists of structure from motion, image rectification and depth estimation, 3D reconstruction, and view synthesis. Structure from motion first recovers the rotations and translations between the captured images; image rectification then computes disparities pairwise, which are converted into scene depth. The reconstruction step back-projects these depths into 3D points and connects them into triangle meshes. Finally, the reconstructed 3D information is used to render images from the virtual viewpoints.
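The pairwise disparity-to-depth conversion and 3D back-projection described in the abstract can be sketched for a rectified pinhole pair. This is a simplified illustration only: the thesis works with panoramic images, and all function names and parameter values below are hypothetical, not taken from the thesis.

```python
import numpy as np

def disparity_to_depth(disparity, focal_length, baseline):
    """Convert a disparity map (in pixels) from a rectified image pair
    into a depth map via depth = f * B / d. Pixels with no disparity
    (d <= 0) are mapped to infinity."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_length * baseline / disparity[valid]
    return depth

def backproject(depth, focal_length, cx, cy):
    """Back-project a depth map into 3D points in the camera frame,
    assuming a pinhole model: X = (u - cx) * Z / f, Y = (v - cy) * Z / f."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]                 # pixel coordinates
    x = (u - cx) * depth / focal_length
    y = (v - cy) * depth / focal_length
    return np.stack([x, y, depth], axis=-1)   # shape (h, w, 3)

# Toy example: a 2x2 disparity map, f = 500 px, baseline = 0.1 m.
depth = disparity_to_depth([[10.0, 25.0], [50.0, 0.0]], 500.0, 0.1)
# depth[0][0] = 500 * 0.1 / 10 = 5.0 m; the zero-disparity pixel becomes inf.
points = backproject(np.full((2, 2), 5.0), 500.0, 1.0, 1.0)
```

The resulting per-pixel 3D points are what a reconstruction step would then connect into triangle meshes.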
dc.description.abstract (en): This thesis presents a method for free-viewpoint synthesis from a sparse set of panoramic images. Traditionally, constructing a playback data set for navigating through a scene has required a particularly inefficient procedure: capturing pictures and videos with a pinhole camera model is costly in both capture time and memory. We propose a method that takes advantage of a less costly setup and improves the visual quality of the final images. It lets users choose the desired viewpoint, as well as whether the output is rendered as a panoramic or a perspective image. The procedure consists of four steps: structure from motion (SfM), image rectification and depth estimation, 3D reconstruction, and view synthesis. First, the extrinsic parameters of the cameras are recovered by structure from motion. Next, image pairs are rectified and their disparities computed, which are then converted into depth maps for 3D reconstruction. Finally, the reconstructed 3D triangle meshes are transformed into the coordinate frame of the target virtual camera, and the target image is generated by intersecting camera rays with the meshes.
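The final ray-mesh intersection step can be illustrated with a single ray-triangle test. The sketch below uses the Möller–Trumbore algorithm, a standard choice for this operation; the thesis does not specify which intersection routine it uses, so this is an assumption for illustration.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.
    Returns the distance t along the ray to the hit point,
    or None if the ray misses the triangle."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    v0, v1, v2 = np.asarray(v0, float), np.asarray(v1, float), np.asarray(v2, float)
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                 # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:           # outside first barycentric bound
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:       # outside second barycentric bound
        return None
    t = f * np.dot(e2, q)
    return t if t > eps else None    # hit must lie in front of the origin

# A ray down +z from the origin hits a triangle in the plane z = 2 at t = 2.
t = ray_triangle_intersect([0, 0, 0], [0, 0, 1],
                           [-1, -1, 2], [1, -1, 2], [0, 1, 2])
miss = ray_triangle_intersect([0, 0, 0], [0, 0, -1],
                              [-1, -1, 2], [1, -1, 2], [0, 1, 2])
```

In a full renderer, each pixel of the target virtual camera generates one ray, and the nearest intersection over all mesh triangles determines that pixel's color.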
dc.description.provenance: Made available in DSpace on 2021-06-17T03:13:53Z (GMT). No. of bitstreams: 1. ntu-107-R05922017-1.pdf: 24741899 bytes, checksum: be50393e3ee20ead31c79b2e9041dfe1 (MD5). Previous issue date: 2018. (en)
dc.description.tableofcontents:
Oral Defense Committee Certification i
Acknowledgements ii
Abstract (Chinese) iii
Abstract iv
1 Introduction 1
1.1 Background and Motivation 1
1.2 Proposed Method 2
1.3 Thesis Organization 3
2 Related Works 4
2.1 Structure from Motion 4
2.2 Omnistereo Panorama 5
2.3 View Synthesis 5
3 Proposed Method 7
3.1 Structure from Motion 7
3.2 Image Rectification and Depth Estimation 8
3.3 3D Reconstruction 11
3.4 View Synthesis 12
4 Result 14
5 Conclusion 19
5.1 Conclusion 19
5.2 Limitation 19
5.3 Future Work 20
Bibliography 21
dc.language.iso: en
dc.subject: 視角生成 (zh_TW)
dc.subject: 虛擬實境 (zh_TW)
dc.subject: 任意視角 (zh_TW)
dc.subject: 全景影像 (zh_TW)
dc.subject: Free-viewpoint (en)
dc.subject: View synthesis (en)
dc.subject: Virtual reality (en)
dc.subject: Panorama image (en)
dc.title: 以全景影像生成任意的虛擬視角 (zh_TW)
dc.title: Free-viewpoint Synthesis over Panoramic Images (en)
dc.type: Thesis
dc.date.schoolyear: 106-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 傅楸善 (Chiou-Shann Fuh), 葉正聖 (Jeng-Sheng Yeh)
dc.subject.keyword: 虛擬實境, 視角生成, 任意視角, 全景影像 (zh_TW)
dc.subject.keyword: Virtual reality, View synthesis, Free-viewpoint, Panorama image (en)
dc.relation.page: 22
dc.identifier.doi: 10.6342/NTU201801067
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2018-07-11
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) (zh_TW)
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) (zh_TW)
Appears in Collections: Department of Computer Science and Information Engineering

Files in This Item:
File: ntu-107-1.pdf (access restricted: not authorized for public access)
Size: 24.16 MB
Format: Adobe PDF


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.

Contact Information
No.1 Sec.4, Roosevelt Rd., Taipei, Taiwan, R.O.C. 106
Tel: (02)33662353
Email: ntuetds@ntu.edu.tw
© NTU Library All Rights Reserved