Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/59684

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 連豊力(Feng-Li Lian) | |
| dc.contributor.author | Wei-Che Chien | en |
| dc.contributor.author | 簡偉哲 | zh_TW |
| dc.date.accessioned | 2021-06-16T09:33:05Z | - |
| dc.date.available | 2021-02-20 | |
| dc.date.copyright | 2017-02-20 | |
| dc.date.issued | 2017 | |
| dc.date.submitted | 2017-02-14 | |
| dc.identifier.citation | [1: Yang et al. 2012] J. Yang, X. Ye, K. Li, and C. Hou, "Depth recovery using an adaptive color-guided auto-regressive model," European Conference on Computer Vision, Firenze, Italy, pp. 158-171, Oct. 7-13, 2012.
[2: Han et al. 2013] J. Han, L. Shao, D. Xu, and J. Shotton, "Enhanced Computer Vision with Microsoft Kinect Sensor: A Review," IEEE Transactions on Cybernetics, Vol. 43, No. 5, Oct. 2013.
[3: Henry et al. 2012] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, "RGB-D Mapping: Using Kinect-style Depth Cameras for Dense 3D Modeling of Indoor Environments," The International Journal of Robotics Research, Vol. 31, No. 5, pp. 647-663, Feb. 2012.
[4: Wang et al. 2015] Z. Wang, J. Hu, S. Wang, and T. Lu, "Trilateral Constrained Sparse Representation for Kinect Depth Hole Filling," Pattern Recognition Letters, Vol. 65, pp. 95-102, Aug. 2015.
[5: Yang et al. 2014] J. Yang, X. Ye, K. Li, C. Hou, and Y. Wang, "Color-Guided Depth Recovery From RGB-D Data Using an Adaptive Autoregressive Model," IEEE Transactions on Image Processing, Vol. 23, No. 8, pp. 3443-3458, 2014.
[6: Lowe 2004] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110, Nov. 2004.
[7: Yu & Morel 2011] G. Yu and J.-M. Morel, "ASIFT: An Algorithm for Fully Affine Invariant Comparison," Image Processing On Line (IPOL), Feb. 24, 2011.
[8: Tanskanen et al. 2013] P. Tanskanen, K. Kolev, L. Meier, F. Camposeco, O. Saurer, and M. Pollefeys, "Live Metric 3D Reconstruction on Mobile Phones," IEEE International Conference on Computer Vision, Sydney, Australia, pp. 65-72, Dec. 1-8, 2013.
[9: Fischler & Bolles 1981] M. A. Fischler and R. C. Bolles, "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography," Communications of the ACM, Vol. 24, No. 6, pp. 381-395, Jun. 1981.
[10: Szeliski et al. 2008] R. Szeliski, R. Zabih, D. Scharstein, O. Veksler, V. Kolmogorov, A. Agarwala, M. Tappen, and C. Rother, "A Comparative Study of Energy Minimization Methods for Markov Random Fields with Smoothness-Based Priors," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 30, No. 6, pp. 1068-1080, Jun. 2008.
[11: Scharstein & Szeliski 2002] D. Scharstein and R. Szeliski, "A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms," International Journal of Computer Vision, Vol. 47, No. 1-3, pp. 7-42, 2002.
[12: Thanusutiyabhorn et al. 2011] P. Thanusutiyabhorn, P. Kanongchaiyos, and W. Mohammed, "Image-Based 3D Laser Scanner," in Proceedings of the International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, pp. 975-978, 2011.
[13: Kolb et al. 2010] A. Kolb, E. Barth, R. Koch, and R. Larsen, "Time-of-Flight Cameras in Computer Graphics," Computer Graphics Forum, Vol. 29, No. 1, pp. 141-159, 2010.
[14: He et al. 2013] K. He, J. Sun, and X. Tang, "Guided Image Filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 35, No. 6, pp. 1397-1409, Jun. 2013.
[15: Oliveira et al. 2001] M. M. Oliveira, B. Bowen, R. McKenna, and Y.-S. Chang, "Fast Digital Image Inpainting," in Proceedings of the International Conference on Visualization, Imaging and Image Processing, pp. 261-266, 2001.
[16: Chen et al. 2013] C. Chen, J. Cai, J. Zheng, T.-J. Cham, and G. Shi, "A Color-Guided, Region-Adaptive and Depth-Selective Unified Framework for Kinect Depth Recovery," in Proceedings of the IEEE International Workshop on Multimedia Signal Processing, pp. 7-12, 2013.
[17: Kwon & Ha 2010] O. S. Kwon and Y. H. Ha, "Panoramic Video Using Scale-Invariant Feature Transform with Embedded Color-Invariant Values," IEEE Transactions on Consumer Electronics, Vol. 56, No. 2, pp. 792-798, May 2010.
[18: Kopf et al. 2007] J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, "Joint Bilateral Upsampling," ACM Transactions on Graphics, Vol. 26, No. 3, Article 96, Jul. 2007.
[19: Random Sample Consensus from Wikipedia 2013] RANSAC. (2013, May 13). In Wikipedia. Retrieved November 26, 2016, from http://en.wikipedia.org/wiki/RANSAC
[20: Online SIFT & ASIFT demo 2014] Online Demo: Analysis of the SIFT and ASIFT Methods (2014). In Image Processing On Line (IPOL). Retrieved November 26, 2016, from http://www.ipol.im
[21: Kinect for Windows Sensor Components and Specifications] Kinect for Windows Sensor Components and Specifications. In Microsoft Developer Network. Retrieved November 26, 2016, from http://msdn.microsoft.com/en-us/library/jj131033.aspx
[22: The Middlebury Datasets] The Middlebury Stereo Datasets. Retrieved January 26, 2017, from http://vision.middlebury.edu/stereo/ | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/59684 | - |
| dc.description.abstract | With the rapid development of three-dimensional sensors, 3D techniques are being applied in many areas, and for 3D data the quality of the depth map strongly affects how well these applications perform. Unlike a laser scanner, which provides only spatial information, or a single camera, which provides only color information, the Microsoft Kinect sensor provides color and spatial information simultaneously and can therefore describe the surrounding environment more completely. The acquisition of spatial information, however, is not as mature as that of color information, for which the hardware is well developed. One way to acquire spatial information is to estimate scene depth from the travel time of reflected light; occlusions that keep the probing light from reaching the target, as well as overly smooth or reflective surfaces, can corrupt the acquired spatial information. This thesis proposes a method for recovering depth information: from four-dimensional (RGB-D) data sets captured at different positions, depth information acquired later is used to fill the regions of the earlier depth image that lack depth values. An image-registration algorithm uses feature points in the color images to find the common part of the two images, assuming a transformation through which the later image can be registered onto the earlier one. The matched feature points and a random sampling algorithm (RANSAC) are used to compute the transformation matrix with the largest number of inliers; this matrix is then applied to the depth images, transforming the later depth image into the coordinate frame of the earlier one, so that broken (hole) pixels in the earlier depth image can be filled with depth values from the later one. By moving the sensor to collect RGB-D data sets at multiple positions, a depth image that is as complete as possible can be built. | zh_TW |
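The registration step described in this abstract can be made concrete with a short sketch. The following is a minimal, hypothetical illustration assuming OpenCV (`cv2`) and a planar homography as the transformation model; the thesis does not name a library, and the function names and the 0.75 ratio threshold below are illustrative assumptions, not details taken from the thesis.

```python
# Sketch: match SIFT keypoints between the two color images, then let
# RANSAC pick the transformation with the largest inlier set.
# Assumes OpenCV >= 4.4 (cv2.SIFT_create) and a homography as the model.
import cv2
import numpy as np

def estimate_transform(former_rgb, latter_rgb, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(former_rgb, None)
    kp2, des2 = sift.detectAndCompute(latter_rgb, None)

    # Keep only unambiguous matches (Lowe's ratio test [6: Lowe 2004]).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des2, des1, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    # Matched coordinates, mapping the later image onto the earlier one.
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC returns the matrix supported by the maximum number of inliers.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inlier_mask
```

The 3.0-pixel reprojection threshold is a conventional RANSAC setting; the thesis's actual parameters and transformation model may differ.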
| dc.description.abstract | With the rapid development of three-dimensional scanners, 3D techniques are used in many applications, and the quality of the depth image significantly influences how well 3D applications perform. Unlike a monocular camera, which provides only color information, or a laser range finder, which provides only spatial information, an RGB-D sensor such as the Microsoft Kinect provides color and spatial information simultaneously and can describe the surrounding environment more completely. One method of acquiring spatial information estimates the surrounding depth from the known speed of light, measuring the time of flight of a light signal between the sensor and the objects for each point of the image. Another method, the Light Coding technique, uses infrared speckles to mark objects, each speckle having a unique shape that encodes spatial information. Because of physical offset or interference from shiny surfaces, smooth surfaces, and object boundaries, the acquired depth information may be incorrect.
This thesis proposes a depth-information recovery method that uses a later depth image to fill hole regions in an earlier depth image, based on RGB-D data captured from different locations. From the feature points of the two color images, the algorithm finds their overlapping areas and the transformation that registers one image onto the other. The matched feature points and Random Sample Consensus (RANSAC) are used to estimate the transformation matrix with the maximum number of inliers, and this matrix is then applied to the depth images. The hole regions of the earlier depth image are finally filled with depth values from the transferred later depth image. | en |
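Continuing the sketch after the Chinese abstract above, the estimated matrix can be applied to the later depth map and used to fill invalid pixels of the earlier one. Treating unmeasured Kinect depth as zero and reusing the color-image homography for the depth map are assumptions of this illustration, not details confirmed by the abstract.

```python
# Sketch: warp the later depth map into the earlier camera's image
# coordinates with the matrix H estimated from the color images, then
# fill hole (zero-depth) pixels of the earlier map from the warped one.
import cv2
import numpy as np

def fill_depth_holes(former_depth, latter_depth, H):
    h, w = former_depth.shape
    # Nearest-neighbor interpolation so depths are never averaged across
    # object boundaries, which would fabricate in-between depth values.
    warped = cv2.warpPerspective(latter_depth, H, (w, h),
                                 flags=cv2.INTER_NEAREST)

    holes = former_depth == 0          # pixels with no valid measurement
    usable = holes & (warped > 0)      # holes the warped view can explain
    filled = former_depth.copy()
    filled[usable] = warped[usable]
    return filled
```

Repeating this for RGB-D pairs captured at several sensor positions progressively completes the earlier depth map, as the abstracts describe.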
| dc.description.provenance | Made available in DSpace on 2021-06-16T09:33:05Z (GMT). No. of bitstreams: 1 ntu-106-R02921017-1.pdf: 4801571 bytes, checksum: 208bd01e0025c85cef397c06f7bd6d4a (MD5) Previous issue date: 2017 | en |
| dc.description.tableofcontents | Abstract (Chinese) i
ABSTRACT iii
CONTENTS v
LIST OF FIGURES vii
LIST OF TABLES ix
Chapter 1 Introduction 1
1.1 Motivation 2
1.2 Problem Formulation 4
1.3 Contributions 5
1.4 Organization of the Thesis 6
Chapter 2 Background and Literature Survey 7
2.1 Obtainment of Depth Information 7
2.2 Defects of the Depth Map and Recovery 8
Chapter 3 Related Algorithms 10
3.1 Scale Invariant Feature Transform 10
3.2 Random Sample Consensus (RANSAC) 13
Chapter 4 Depth Map Recovery 17
4.1 Feature Detection and Keypoint Matching 18
4.1.1 Feature Detection 18
4.1.2 Keypoint Matching 20
4.2 Matching-Based Depth Map Hole Filling 26
Chapter 5 Experimental Results and Analysis 30
5.1 Hardware: Microsoft Kinect Sensor 30
5.2 Adjustment of the Depth Map and Color Image 33
5.2.1 Alignment of the Depth Map and Color Image 33
5.2.2 Depth Hole Region Detection 36
5.3 Depth Map Hole Filling by Image Matching 38
5.3.1 Setting of the Experimental Environment 39
5.3.2 Feature Detection and Matching 43
5.3.3 Comparison of RANSAC Results 54
5.3.4 Depth Holes Filled by the Correlated Color Image 67
5.4 Method Comparison on Synthetically Degraded Datasets 76
Chapter 6 Conclusions and Future Work 85
6.1 Conclusions 85
6.2 Future Work 86
REFERENCES 87 | |
| dc.language.iso | en | |
| dc.subject | 深度修補 | zh_TW |
| dc.subject | 隨機抽樣演算法 | zh_TW |
| dc.subject | 特徵點 | zh_TW |
| dc.subject | 破碎點填補 | zh_TW |
| dc.subject | RGB-D感測器 | zh_TW |
| dc.subject | Depth recovery | en |
| dc.subject | RANSAC | en |
| dc.subject | Feature | en |
| dc.subject | Hole filling | en |
| dc.subject | RGB-D sensor | en |
| dc.title | 利用相對應的彩色影像結構與特徵點匹配進行深度圖復原 | zh_TW |
| dc.title | Depth Image Recovery Based on Correlative Color Image Structure and Feature Matching | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 105-1 | |
| dc.description.degree | Master's (碩士) | |
| dc.contributor.oralexamcommittee | 簡忠漢 (Zhong-Han Jian), 李後燦 (Hou-Tsan Lee), 黃正民 (Cheng-Ming Huang) | |
| dc.subject.keyword | RGB-D感測器, 深度修補, 破碎點填補, 特徵點, 隨機抽樣演算法 | zh_TW |
| dc.subject.keyword | RGB-D sensor, Depth recovery, Hole filling, Feature, RANSAC | en |
| dc.relation.page | 89 | |
| dc.identifier.doi | 10.6342/NTU201700139 | |
| dc.rights.note | Paid authorization (有償授權) | |
| dc.date.accepted | 2017-02-14 | |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science (電機資訊學院) | zh_TW |
| dc.contributor.author-dept | Graduate Institute of Electrical Engineering (電機工程學研究所) | zh_TW |
| Appears in Collections: | Department of Electrical Engineering | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-106-1.pdf (restricted; not authorized for public access) | 4.69 MB | Adobe PDF | |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
