Please use this Handle URI to cite this document:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/21485
Title: | 基於位移導向之立體影像視角合成應用於單一或多視角彩色深度相機 Displacement-oriented View Synthesis for Single/Multiple RGBD Cameras |
Author: | Yu-Sheng Hsu 徐佑昇 |
Advisor: | 陳良基 (Liang-Gee Chen) |
Keywords: | camera, RGBD, image processing, view synthesis, displacement, depth, multiview |
Publication Year: | 2019 |
Degree: | Master |
Abstract: | With the rapid improvement of technology, RGB-D cameras have become increasingly popular. The depth map plays an important role in many 3D applications for human–computer interaction systems. There are several technical challenges in producing a high-quality synthesized view, and generating a comfortable novel view has become the bottleneck of current research.
Multimedia applications such as 3DTV and virtual reality provide viewers with a 3D experience by presenting videos from different viewpoints to each eye. Building rich 3D maps of environments is an important task for mobile robotics, with applications in navigation, manipulation, semantic mapping, and telepresence. However, frame-to-frame alignment of 3D point clouds and dense 3D reconstruction demand high bandwidth, memory, and computation because of costly iterative operations, so working directly on the original point cloud is too expensive for real-time implementation. View synthesis, which projects 3D information onto a 2D image, is an efficient alternative for everyday applications. The most popular view synthesis system, adopted by the ITU/MPEG standard, uses depth-image-based rendering (DIBR) techniques. In this thesis, we tackle the artifacts, pinholes, and disocclusions of RGB-D multiview images when synthesizing new views of a scene by changing its viewpoint. We first examine where ghost contours come from, why the disocclusion region cannot be seen in the original view but is exposed in the virtual view, and why pinholes/cracks appear in the derived frame for surfaces whose normal has rotated towards the viewer. 2D image information is transformed into 3D to build the connection between the reference view and the novel view. We analyze 3D warping techniques: background erosion is proposed to remove wrongly warped boundaries, and forward warping writes a single derived pixel for each warped reference pixel. We then define a displacement vector as the corresponding feature points' "movement" between different views, which enables backward warping. Furthermore, we point out that the disocclusion region is the area between the displaced foreground and background edges, and we combine a conventional inpainting technique with our growing guidance to improve hole filling. Large holes remain hard to fill with acceptable subjective quality.
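The forward-warping step described above (one derived pixel written per warped reference pixel, with z-buffering to resolve collisions) can be sketched as follows. This is a minimal illustration assuming a purely horizontal camera shift and hypothetical `baseline`/`focal` parameters, not the thesis's actual implementation:

```python
import numpy as np

def forward_warp(color, depth, baseline, focal):
    """Forward warping with z-buffering (sketch).

    Each reference pixel is shifted horizontally by its disparity
    (focal * baseline / depth) and written into the derived frame.
    When two source pixels land on the same target, the nearer one
    (smaller depth) wins. Unwritten targets remain holes.
    """
    h, w, _ = color.shape
    novel = np.zeros_like(color)       # unwritten pixels stay black: the holes
    zbuf = np.full((h, w), np.inf)     # depth buffer; keep the nearest pixel
    for y in range(h):
        for x in range(w):
            d = depth[y, x]
            if d <= 0:                 # invalid depth sample
                continue
            disparity = focal * baseline / d   # horizontal displacement
            xn = int(round(x + disparity))
            if 0 <= xn < w and d < zbuf[y, xn]:
                zbuf[y, xn] = d
                novel[y, xn] = color[y, x]
    return novel, zbuf
```

The holes left by this step (disocclusions, pinholes) are what the inpainting and growing-guidance stages then fill.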
The situation can be relieved by using multiple views. In addition to z-buffering, we propose view weighting based on the distance between the reference and novel views, as well as a winner-take-all method. To generate free viewpoints, we exploit quaternion rotation for inter-view interpolation and analyze synthesis quality versus viewpoint. Finally, the proposed methods provide high-quality, comfortable virtual view synthesis. |
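Quaternion-based inter-view interpolation of camera orientation is commonly realized with spherical linear interpolation (slerp). The sketch below is a generic illustration under that assumption, not the thesis's exact formulation:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions.

    Interpolates camera orientation from q0 (t=0) to q1 (t=1) along
    the shortest great-circle arc, giving a constant angular velocity
    between the two reference viewpoints.
    """
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:              # q and -q are the same rotation: take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:           # nearly parallel: linear interpolation is stable
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)     # angle between the two orientations
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)
```

Sweeping `t` over [0, 1] yields the free-viewpoint camera path whose synthesis quality can then be evaluated against viewpoint position.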
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/21485 |
DOI: | 10.6342/NTU201902139 |
Full-text license: | Not authorized |
Appears in collections: | Graduate Institute of Electronics Engineering |
Files in this item:
File | Size | Format | |
---|---|---|---|
ntu-108-1.pdf (currently not authorized for public access) | 25.17 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.