NTU Theses and Dissertations Repository » College of Electrical Engineering and Computer Science » Department of Computer Science and Information Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/15985
Full metadata record (DC field: value [language])
dc.contributor.advisor: 莊永裕
dc.contributor.author: Yu-Hsiang Huang [en]
dc.contributor.author: 黃昱翔 [zh_TW]
dc.date.accessioned: 2021-06-07T17:57:13Z
dc.date.copyright: 2012-08-16
dc.date.issued: 2012
dc.date.submitted: 2012-08-13
dc.identifier.citation:
[1] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool. Speeded-up robust features (SURF). CVIU, 110(3):346–359, June 2008.
[2] C.-H. Chang, C.-K. Liang, and Y.-Y. Chuang. Content-aware display adaptation and interactive editing for stereoscopic images. IEEE Transactions on Multimedia, 13(4):589–601, August 2011.
[3] M. Farre, O. Wang, M. Lang, N. Stefanoski, A. Hornung, and A. Smolic. Automatic content creation for multiview autostereoscopic displays using image domain warping. In Multimedia and Expo (ICME), 2011 IEEE International Conference on, pages 1–6, July 2011.
[4] C. Fehn. A 3D-TV approach using depth-image-based rendering. Proceedings of 3rd IASTED Conference on Visualization, Imaging, and Image Processing, 3:482–487, 2003.
[5] C. Fehn. Depth-image-based rendering (DIBR), compression, and transmission for a new approach on 3D-TV. Stereoscopic Displays and Virtual Reality Systems XI. Proceedings of the SPIE, 5291:93–104, 2004.
[6] Y.-H. Huang, T.-K. Huang, Y.-H. Huang, W.-C. Chen, and Y.-Y. Chuang. Warping-based novel view synthesis from a binocular image for autostereoscopic displays. In Multimedia and Expo (ICME), 2012 IEEE International Conference on, pages 302–307, July 2012.
[7] ISO/IEC JTC1/SC29/WG11. View synthesis reference software, May 2009. version 3.0.
[8] ISO/IEC JTC1/SC29/WG11. Depth estimation reference software, July 2010. version 5.0.
[9] M. Lang, A. Hornung, O. Wang, S. Poulakos, A. Smolic, and M. Gross. Nonlinear disparity mapping for stereoscopic 3D. ACM Transactions on Graphics, 29(4):75:1–75:10, 2010.
[10] F. Liu, M. Gleicher, H. Jin, and A. Agarwala. Content-preserving warps for 3D video stabilization. ACM Transactions on Graphics, 28(3):44:1–44:9, 2009.
[11] D. Lowe. Object recognition from local scale-invariant features. In Proceedings of ICCV, volume 2, pages 1150–1157, 1999.
[12] B. D. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th international joint conference on Artificial intelligence - Volume 2, IJCAI’81, pages 674–679, San Francisco, CA, USA, 1981. Morgan Kaufmann Publishers Inc.
[13] E. Rosten and T. Drummond. Fusing points and lines for high performance tracking. In IEEE International Conference on Computer Vision, volume 2, pages 1508–1511, October 2005.
[14] E. Rosten and T. Drummond. Machine learning for high-speed corner detection. In European Conference on Computer Vision, volume 1, pages 430–443, May 2006.
[15] B. Smith, L. Zhang, and H. Jin. Stereo matching with nonparametric smoothness priors in feature space. In Proceedings of CVPR, pages 485–492, June 2009.
[16] A. Smolic, P. Kauff, S. Knorr, A. Hornung, M. Kunter, M. Müller, and M. Lang. Three-dimensional video postproduction and processing. Proceedings of the IEEE, 99(4):607–625, April 2011.
[17] C. Tomasi and T. Kanade. Detection and tracking of point features. Technical report, International Journal of Computer Vision, 1991.
[18] T. Tuytelaars. Dense interest points. In Proceedings of CVPR, pages 2281–2288, June 2010.
[19] R. von Gioi, J. Jakubowicz, J.-M. Morel, and G. Randall. LSD: A fast line segment detector with a false detection control. IEEE TPAMI, 32(4):722–732, April 2010.
[20] H. Wang, M. Sun, and R. Yang. Space-time light field rendering. IEEE Transactions on Visualization and Computer Graphics, 13(4):697–710, July 2007.
[21] Y.-S. Wang, C.-L. Tai, O. Sorkine, and T.-Y. Lee. Optimized scale-and-stretch for image resizing. ACM Transactions on Graphics, 27(5):118:1–118:8, 2008.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/15985
dc.description.abstract: 本論文提出一個以雙眼立體影像為輸入的基於形變的新視角影像生成方法。裸眼立體顯示器需要多視角影像(包含圖片及影片)作為輸入,但現今大多立體相機只能擷取兩個視角(左右眼)的影像。近年來較為流行的新視角影像生成方法,如深度影像繪圖法,大多很依賴準確的深度圖以取得好結果,然而準確的深度圖目前仍是難以取得的。我們所提出的方法不需要深度圖,也不需要使用者的介入。對於生成多視角圖片的部分,首先我們偵測密集且可靠的特徵點,接著利用特徵點的對應關係引導圖片的形變以生成新視角影像,並且在形變的過程中保持立體性質及維持內容結構。基於相同的架構,我們修改了針對圖片的方法使之能處理影片,加強了對於時間軸上的對應關係並且加快生成的速度。相較於深度影像繪圖法,我們提出的方法可以高效率地產生高品質的多視角影像,而且可以免除惱人的參數設定。此方法可以直接將立體相機所拍攝的雙眼影像轉換成能在裸眼立體顯示器上播放的多視角立體影像。 [zh_TW]
dc.description.abstract: This thesis presents a warping-based novel view synthesis framework for both binocular stereoscopic images and videos. Autostereoscopic displays require multiple views, while most stereoscopic cameras can only capture two. Popular novel view synthesis methods, such as depth image based rendering (DIBR), often rely heavily on accurate depth maps, which are still difficult to obtain. The proposed framework requires neither depth maps nor user intervention. To synthesize multi-view images, it first extracts dense and reliable features. Next, feature correspondences guide image warping to synthesize novel views while simultaneously maintaining stereoscopic properties and preserving image structures. Based on the same framework, a modified method for binocular videos is proposed to better maintain temporal coherence and accelerate processing. Compared to DIBR, the proposed framework produces higher-quality multi-view images and videos more efficiently, without tedious parameter tuning. The method can convert stereoscopic images and videos taken by binocular cameras directly into multi-view images and videos ready to be displayed on autostereoscopic displays. [en]
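The virtual-view generation step described in the abstract can be illustrated with a minimal sketch. This is a hypothetical illustration, not the thesis code: it assumes matched feature points between the left and right views are already available, and simply places each feature in an intermediate view by linearly blending the matched coordinates (the blend weight playing the role of the virtual camera position); the actual method then uses such guided positions to drive a structure-preserving image warp.

```python
import numpy as np

def virtual_view_points(pts_left, pts_right, alpha):
    """Interpolate matched feature positions for a virtual view.

    pts_left, pts_right: (N, 2) arrays of matched (x, y) feature
    coordinates in the left and right views.
    alpha: virtual camera position; 0.0 reproduces the left view,
    1.0 the right view, intermediate values an in-between view.
    Returns an (N, 2) array of feature positions in the virtual view.
    """
    pts_left = np.asarray(pts_left, dtype=float)
    pts_right = np.asarray(pts_right, dtype=float)
    # Linear blend of corresponding positions: for rectified stereo
    # this amounts to scaling each feature's disparity by alpha.
    return (1.0 - alpha) * pts_left + alpha * pts_right
```

For example, a feature at x = 0 in the left view matched to x = 10 in the right view lands at x = 5 in the halfway view (alpha = 0.5). In the full framework these sparse target positions act as soft constraints on a mesh warp rather than being rendered directly.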
dc.description.provenance: Made available in DSpace on 2021-06-07T17:57:13Z (GMT). No. of bitstreams: 1; ntu-101-R99922020-1.pdf: 6095707 bytes, checksum: 1aa2f03f679ab71f7380dacedfe452d8 (MD5). Previous issue date: 2012 [en]
dc.description.tableofcontents:
Acknowledgements i
Chinese Abstract ii
Abstract iii
1 Introduction 1
2 Multi-View Image Synthesis 5
2.1 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.1 Dense Interest Points . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.2 Content-Preserving Warps . . . . . . . . . . . . . . . . . . . . . 6
2.1.3 Line Bending . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.1 Semi-Dense Stereo Correspondence . . . . . . . . . . . . . . . . 8
2.2.2 Virtual View Generation . . . . . . . . . . . . . . . . . . . . . . 9
2.2.3 Modified Content-Preserving Warps . . . . . . . . . . . . . . . . 10
2.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3 Multi-View Video Synthesis 20
3.1 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2.1 Semi-Dense Stereo Correspondences . . . . . . . . . . . . . . . 22
3.2.2 Virtual View Generation . . . . . . . . . . . . . . . . . . . . . . 24
3.2.3 Triangular Mesh Warps . . . . . . . . . . . . . . . . . . . . . . . 24
3.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4 Conclusion and Future Work 27
Bibliography 29
dc.language.iso: en
dc.subject: 形變 [zh_TW]
dc.subject: 新視角影像生成 [zh_TW]
dc.subject: 裸眼立體視覺 [zh_TW]
dc.subject: warping [en]
dc.subject: novel view synthesis [en]
dc.subject: autostereoscopy [en]
dc.title: 基於型變的雙眼影像新視角生成快速演算法 [zh_TW]
dc.title: Fast Warping-Based Novel View Synthesis from Binocular Image/Video for Autostereoscopic Displays [en]
dc.type: Thesis
dc.date.schoolyear: 100-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 陳維超, 鄭文皇, 胡敏君
dc.subject.keyword: 新視角影像生成, 裸眼立體視覺, 形變 [zh_TW]
dc.subject.keyword: novel view synthesis, autostereoscopy, warping [en]
dc.relation.page: 32
dc.rights.note: 未授權 (not authorized for public access)
dc.date.accepted: 2012-08-14
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
Appears in Collections: Department of Computer Science and Information Engineering

Files in this item:
ntu-101-1.pdf — 5.95 MB, Adobe PDF (not authorized for public access)


All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
