Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/18522
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳少傑(Sao-Jie Chen) | |
dc.contributor.author | Po-Jui Tseng | en |
dc.contributor.author | 曾柏叡 | zh_TW |
dc.date.accessioned | 2021-06-08T01:09:40Z | - |
dc.date.copyright | 2014-08-25 | |
dc.date.issued | 2014 | |
dc.date.submitted | 2014-08-18 | |
dc.identifier.citation |
[1] K. Khoshelham, “Accuracy Analysis of Kinect Depth Data,” International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS), Aug. 2011, pp. 133-138.
[2] J.J. Fu, D. Miao, W.R. Yu, S.Q. Wang, Y. Lu, and S.P. Li, “Kinect-Like Depth Data Compression,” IEEE Transactions on Multimedia, vol. 15, no. 6, Oct. 2013, pp. 1340-1352.
[3] K. Essmaeel, L. Gallo, E. Damiani, G. De Pietro, and A. Dipanda, “Temporal Denoising of Kinect Depth Data,” The 8th International Conference on Signal Image Technology & Internet Based Systems (SITIS), Nov. 2012, pp. 47-52.
[4] S. Izadi, D. Kim, O. Hilliges, D. Molyneaux, R. A. Newcombe, P. Kohli, J. Shotton, S. Hodges, D. Freeman, A. J. Davison, and A. Fitzgibbon, “KinectFusion: Real-time 3D Reconstruction and Interaction using a Moving Depth Camera,” The 24th Annual ACM Symposium on User Interface Software and Technology (UIST), Oct. 2011, pp. 559-568.
[5] F. Prada and L. Velho, “Grabcut+d,” VISGRAF Project, 2011, http://www.impa.br/~faprada/courses/procImagenes/.
[6] L. Cruz, D. Lucio, and L. Velho, “Kinect and RGBD Images: Challenges and Applications,” The 25th SIBGRAPI Conference on Graphics, Patterns and Images Tutorials, Aug. 2012, pp. 36-49.
[7] A. Maimone and H. Fuchs, “Enhanced Personal Autostereoscopic Telepresence System using Commodity Depth Cameras,” Computers & Graphics, vol. 36, no. 7, Nov. 2012, pp. 791-807.
[8] A. Buades, B. Coll, and J.-M. Morel, “A Non-Local Algorithm for Image Denoising,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, June 2005, pp. 60-65.
[9] http://www.ifixit.com/Teardown/Microsoft-Kinect-Teardown/4066/.
[10] http://www.asus.com/Multimedia/Xtion_PRO_LIVE/#specifications.
[11] http://viml.nchc.org.tw/blog/paper_info.php?CLASS_ID=1&SUB_ID=1&PAPER_ID=278.
[12] http://en.wikipedia.org/wiki/File:Gaussian_Filter.svg.
[13] F. Durand and J. Dorsey, “Fast Bilateral Filtering for the Display of High-Dynamic-Range Images,” ACM Transactions on Graphics, vol. 21, no. 3, July 2002, pp. 257-266.
[14] J. Fu, S. Wang, Y. Lu, S. Li, and W. Zeng, “Kinect-Like Depth Denoising,” IEEE International Symposium on Circuits and Systems (ISCAS), May 2012, pp. 512-515.
[15] S. Matyunin, D. Vatolin, Y. Berdnikov, and M. Smirnov, “Temporal Filtering for Depth Maps Generated by Kinect Depth Camera,” 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), May 2011, pp. 1-4.
[16] A. Criminisi, P. Pérez, and K. Toyama, “Object Removal by Exemplar-Based Inpainting,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, June 2003, pp. II-721-II-728.
[17] http://www.cvlab.cs.tsukuba.ac.jp/dataset/tsukubastereo.php | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/18522 | - |
dc.description.abstract | 本研究改良了在 RGB-D 攝影機上所獲得的深度影像之品質。隨著新型的結構光源攝影機的推出,例如:Microsoft Kinect 和 Asus Xtion PRO LIVE,深度影像的獲得變得更便利且快速。這種深度影像可以應用在許多領域上,如:虛擬實境、影像處理、3D 列印等等諸多應用。不過,這種深度影像的產生通常都會伴隨著種種雜訊,像是不合法的影像深度值、錯誤的影像深度值、影像深度值在時間上所受到的擾動,這些雜訊會大幅度降低深度影像的應用推廣。為了在未來能夠有更廣泛的應用,解決雜訊的干擾是提升影像品質、增加應用效果的必要工作。因此,我們提出了一套有效的演算法,可以成功解決以上所提及之雜訊干擾。這套演算法改良了 exemplar-based 的影像修補方法 [16];該演算法原本應用於填補彩色影像上所消失區域之像素值,我們將之改良並應用在深度影像之雜訊的填補,進而提升 RGB-D 攝影機所獲取之深度影像的影像品質。在實驗最後的結果評估方面,我們將演算法實驗在日本筑波大學的立體影像資料庫 (Tsukuba Stereo Dataset) 和 Asus Xtion PRO LIVE 上所拍攝的深度影像,並且採用了峰值信噪比 (Peak Signal-to-Noise Ratio) 和計算時間的量化數據作比較,來證明我們所提出的演算法能夠大幅度提升深度影像的品質,讓深度影像在未來各種應用場合能有顯著的效果提升。 | zh_TW |
dc.description.abstract | This work presents a refinement procedure for depth maps acquired by RGB-D (RGB plus Depth) cameras. With the release of new structured-light RGB-D cameras, such as the Microsoft Kinect and the Asus Xtion PRO LIVE, acquiring high-resolution depth maps has become convenient and consumer-accessible. This 3D depth information can be applied in many fields, such as augmented reality, image processing, and 3D printing. However, RGB-D cameras suffer from problems such as undesired occlusions, inaccurate depth values, and temporal variation. To broaden their applications, it is crucial to solve these problems. Thus, this work proposes a novel algorithm based on the exemplar-based inpainting method to cope with the artifacts in RGB-D cameras’ depth maps. Exemplar-based inpainting was originally used to repair the missing information in an image after object removal, a procedure similar to padding the occlusions in RGB-D cameras’ depth data. The proposed method therefore enhances and adjusts the inpainting method to refine the quality of RGB-D depth maps. For evaluation, the proposed method is tested on the Tsukuba Stereo Dataset, which provides a 3D video with ground-truth depth maps, occlusion maps, and RGB images, using PSNR and computational time as evaluation metrics. Moreover, a set of self-captured RGB-D depth maps and their refined results are also shown to demonstrate the improvement over the original occluded depth maps. | en |
dc.description.provenance | Made available in DSpace on 2021-06-08T01:09:40Z (GMT). No. of bitstreams: 1 ntu-103-R01943155-1.pdf: 6855655 bytes, checksum: b012d0e3f95e4d3bdd5bdeef9cc75ec6 (MD5) Previous issue date: 2014 | en |
dc.description.tableofcontents |
ABSTRACT
LIST OF FIGURES
LIST OF TABLES
CHAPTER 1 INTRODUCTION
1.1 Development and Application
1.1.1 3D Reconstruction
1.1.2 Augmented Reality
1.1.3 Image Processing
1.2 Motivation
1.3 Thesis Organization
CHAPTER 2 BACKGROUND
2.1 Kinect-like Depth Camera Characteristics
2.1.1 Triangulation
2.1.2 Error Sources
2.1.2.1 Sensor
2.1.2.2 Measurement Setup
2.1.2.3 Properties of Object Surface
2.2 Related Works on Filtering
2.2.1 Convolution
2.2.2 Gaussian Filter
2.2.3 Bilateral Filter
2.2.4 Kinect-like Denoising Algorithm
2.2.4.1 Spatial-Temporal Depth Denoising
2.2.4.2 Temporal Denoising Algorithm
2.2.4.3 Temporal Filtering
CHAPTER 3 METHODOLOGIES
3.1 Exemplar-based Inpainting
3.2 System Architecture
3.3.1 Edge Marking
3.3.2 Middleware Platform
3.3.3 Hole Padding
3.3.3.1 3-Step Search
3.3.3.2 Modified 3-Step Search
3.3.4 Updating Priority Value
3.3.5 Temporal Filtering
CHAPTER 4 EXPERIMENTAL RESULTS
4.1 Experiments on Tsukuba Stereo Dataset
4.2 Experiments on Real-World Scene
4.3 Discussion
CHAPTER 5 CONCLUSION
REFERENCE | |
dc.language.iso | zh-TW | |
dc.title | 深度圖像之時間與空間上的雜訊過濾 | zh_TW |
dc.title | Temporal and Spatial Denoising of Depth Maps | en |
dc.type | Thesis | |
dc.date.schoolyear | 102-2 | |
dc.description.degree | Master | |
dc.contributor.coadvisor | 林伯星(Bor-Shing Lin) | |
dc.contributor.oralexamcommittee | 吳安宇(An-Yeu Wu),游竹(Chu Yu) | |
dc.subject.keyword | 深度影像,Asus Xtion PRO LIVE,Kinect,RGB-D 攝影機,影像修補, | zh_TW |
dc.subject.keyword | Depth Map,Asus Xtion PRO LIVE,Kinect,RGB-D Camera,Inpainting, | en |
dc.relation.page | 58 | |
dc.rights.note | Not authorized | |
dc.date.accepted | 2014-08-18 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電子工程學研究所 | zh_TW |
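The abstract names PSNR (Peak Signal-to-Noise Ratio) and computational time as the evaluation metrics used against the Tsukuba Stereo Dataset's ground truth. As background only, the standard PSNR computation between a ground-truth depth map and a refined depth map can be sketched as follows; the function name, array shapes, and the 8-bit peak value are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def psnr(ground_truth, refined, max_value=255.0):
    """Peak Signal-to-Noise Ratio (in dB) between a ground-truth depth
    map and a refined depth map, given as same-shape arrays."""
    gt = np.asarray(ground_truth, dtype=np.float64)
    out = np.asarray(refined, dtype=np.float64)
    mse = np.mean((gt - out) ** 2)  # mean squared error per pixel
    if mse == 0:
        return float("inf")  # identical images: noise-free by definition
    return 10.0 * np.log10((max_value ** 2) / mse)
```

A higher PSNR indicates the refined depth map is closer to the ground truth; depth maps stored at other bit depths would use a different `max_value`.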
Appears in collections: | Graduate Institute of Electronics Engineering |
Files in this item:
File | Size | Format | |
---|---|---|---|
ntu-103-1.pdf (currently not authorized for public access) | 6.69 MB | Adobe PDF |
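The table of contents lists a "3-Step Search" (Section 3.3.3.1) as part of the hole-padding step. For orientation only, here is a sketch of the classic three-step block-matching search on which that section's name is based; this is the generic algorithm, not the thesis's modified variant, and every name and parameter below is our own assumption:

```python
import numpy as np

def sad(block_a, block_b):
    # Sum of absolute differences between two equally sized blocks.
    return np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum()

def three_step_search(ref, target, top, left, block=8, step=4):
    """Classic 3-step search: probe the 9 candidate positions around the
    current center, move to the cheapest one, halve the step, and repeat
    until the step reaches 1. Returns the (dy, dx) offset into `ref` that
    minimizes SAD against the block of `target` at (top, left)."""
    h, w = ref.shape
    patch = target[top:top + block, left:left + block]
    cy, cx = top, left
    while step >= 1:
        best = (sad(ref[cy:cy + block, cx:cx + block], patch), cy, cx)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if 0 <= y <= h - block and 0 <= x <= w - block:
                    cost = sad(ref[y:y + block, x:x + block], patch)
                    if cost < best[0]:
                        best = (cost, y, x)
        _, cy, cx = best
        step //= 2
    return cy - top, cx - left
```

With an initial step of 4 the search covers offsets up to ±7 pixels while evaluating far fewer candidates than an exhaustive scan, which is why such searches are popular for block matching.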
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.