Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/5894

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 連豊力 | |
| dc.contributor.author | Chung-Yi Hung | en |
| dc.contributor.author | 洪中易 | zh_TW |
| dc.date.accessioned | 2021-05-16T16:18:06Z | - |
| dc.date.available | 2018-08-22 | |
| dc.date.available | 2021-05-16T16:18:06Z | - |
| dc.date.copyright | 2013-08-22 | |
| dc.date.issued | 2013 | |
| dc.date.submitted | 2013-08-16 | |
| dc.identifier.citation | [1: Henry et al. 2012] Peter Henry, Michael Krainin, Evan Herbst, Xiaofeng Ren and Dieter Fox, “RGB-D Mapping: Using Kinect-style Depth Cameras for Dense 3D Modeling of Indoor Environments,” International Journal of Robotics Research, vol. 31, no. 5, pp. 647-663, April 2012.
[2: Marcincin et al. 2012] J. Novak-Marcincin, J. Torok, J. Barna, M. Janak, L. Novakova-Marcincinova and V. Fecova, “Realization of 3D Models for Virtual Reality by Use of Advanced Scanning Methods,” in Proceedings of IEEE International Conference on Cognitive Infocommunications, pp. 787-790, December 2-5, 2012.
[3: Park et al. 2012] DongRyeol Park, Joon-Kee Cho and Yeon-Ho Kim, “A Visual Guidance System for Minimal Invasive Surgery Using 3D Ultrasonic and Stereo Endoscopic Images,” in Proceedings of IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Roma, Italy, pp. 872-877, June 24-27, 2012.
[4: Noonan et al. 2009] David P. Noonan, Peter Mountney, Daniel S. Elson, Ara Darzi and Guang-Zhong Yang, “A Stereoscopic Fibroscope for Camera Motion and 3D Depth Recovery during Minimally Invasive Surgery,” in Proceedings of IEEE Conference on Robotics and Automation, Kobe, Japan, pp. 4463-4468, May 12-17, 2009.
[5: Zeisl et al. 2012] Bernhard Zeisl, Kevin Koser and Marc Pollefeys, “Viewpoint Invariant Matching via Developable Surfaces,” in Proceedings of the 12th International Conference on Computer Vision, pp. 62-71, 2012.
[6: Suarez et al. 2012] Jesus Suarez and Robin R. Murphy, “Using the Kinect for Search and Rescue Robotics,” in Proceedings of the 2012 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), pp. 1-2, November 5-8, 2012.
[7: Hu et al. 2012] Gibson Hu, Shoudong Huang, Liang Zhao, Alen Alempijevic and Gamini Dissanayake, “A robust RGB-D SLAM algorithm,” in Proceedings of IEEE International Conference on Intelligent Robots and Systems, Vilamoura, pp. 1714-1719, October 7-12, 2012.
[8: Murray et al. 2005] Don Murray and James J. Little, “Patchlets: Representing Stereo Vision Data with Surface Elements,” in Proceedings of IEEE International Workshop on Applications of Computer Vision, Breckenridge, CO, vol. 1, pp. 192-199, January 5-7, 2005.
[9: Besl et al. 1992] Paul J. Besl and Neil D. McKay, “A Method for Registration of 3D Shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239-256, February 1992.
[10: Turk et al. 1994] Greg Turk and Marc Levoy, “Zippered Polygon Meshes from Range Images,” in Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, New York, USA, pp. 311-318, July 24-29, 1994.
[11: Chen et al. 1991] Yang Chen and Gerard Medioni, “Object Modeling by Registration of Multiple Range Images,” in Proceedings of IEEE International Conference on Robotics and Automation, Sacramento, CA, vol. 3, pp. 2724-2729, April 9-11, 1991.
[12: Johnson et al. 1997] Andrew Edie Johnson and Sing Bing Kang, “Registration and Integration of Textured 3D Data,” in Proceedings of IEEE International Conference on Recent Advances in 3-D Digital Imaging and Modeling, Ottawa, Canada, pp. 234-241, May 12-15, 1997.
[13: Men et al. 2011] Hao Men, Biruk Gebre and Kishore Pochiraju, “Color Point Cloud Registration with 4D ICP Algorithm,” in Proceedings of IEEE International Conference on Robotics and Automation (ICRA), Shanghai, pp. 1511-1516, May 9-13, 2011.
[14: Makadia et al. 2006] Ameesh Makadia, Alexander Patterson and Kostas Daniilidis, “Fully Automatic Registration of 3D Point Clouds,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 1297-1304, June 17-22, 2006.
[15: Arun et al. 1987] K. S. Arun, T. S. Huang and S. D. Blostein, “Least-squares Fitting of Two 3-D Point Sets,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 9, no. 5, pp. 698-700, September 1987.
[16: Scaramuzza et al. 2011] Davide Scaramuzza and Friedrich Fraundorfer, “Visual Odometry [Tutorial],” IEEE Robotics and Automation Magazine, vol. 18, no. 4, pp. 80-92, December 2011.
[17: Nister et al. 2004] David Nister, Oleg Naroditsky and James Bergen, “Visual Odometry,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 652-659, June 27-July 2, 2004.
[18: Kitt et al. 2010] B. Kitt, A. Geiger and H. Lategahn, “Visual Odometry Based on Stereo Image Sequences with RANSAC-based Outlier Rejection Scheme,” in Proceedings of IEEE Intelligent Vehicles Symposium, San Diego, USA, pp. 486-492, June 21-24, 2010.
[19: Jachalsky et al. 2010] Jorn Jachalsky, Markus Schlosser and Dirk Gandolph, “Confidence Evaluation for Robust, Fast-Converging Disparity Map Refinement,” in Proceedings of IEEE International Conference on Multimedia and Expo (ICME), Suntec City, pp. 1399-1040, July 19-23, 2010.
[20: Lowe 2004] David G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, January 2004.
[21: Dalal et al. 2005] Navneet Dalal and Bill Triggs, “Histograms of Oriented Gradients for Human Detection,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, vol. 1, pp. 886-893, June 25, 2005.
[22: Saravanakumar et al. 2010] S. Saravanakumar, A. Vadivel and C. G. Saneem Ahmed, “Multiple Human Object Tracking using Background Subtraction and Shadow Removal Techniques,” in Proceedings of IEEE International Conference on Signal and Image Processing (ICSIP), Chennai, pp. 79-84, December 15-17, 2010.
[23: Lee et al. 2003] Dar-Shyang Lee, Jonathan J. Hull and Berna Erol, “A Bayesian Framework for Gaussian Mixture Background Modeling,” in Proceedings of IEEE International Conference on Image Processing, vol. 3, pp. 973-976, September 14-17, 2003.
[24: Barnich et al. 2011] Olivier Barnich and Marc Van Droogenbroeck, “ViBe: A Universal Background Subtraction Algorithm for Video Sequences,” IEEE Transactions on Image Processing, vol. 20, no. 6, pp. 1709-1724, June 2011.
[25: Enzweiler et al. 2009] Markus Enzweiler and Dariu M. Gavrila, “Monocular Pedestrian Detection: Survey and Experiments,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2179-2195, December 2009.
[26: Tang et al. 2008] Feng Tang, Michael Harville, Hai Tao and Ian N. Robinson, “Fusion of Local Appearance with Stereo Depth for Object Tracking,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage, AK, pp. 1-8, June 23-28, 2008.
[27: Labayrade et al. 2002] Raphael Labayrade, Didier Aubert and Jean-Philippe Tarel, “Real Time Obstacle Detection in Stereovision on Non Flat Road Geometry Through ‘V-disparity’ Representation,” in Proceedings of IEEE Intelligent Vehicle Symposium, vol. 2, pp. 646-651, June 17-21, 2002.
[28: Hu et al. 2005] Zhencheng Hu, Francisco Lamosa and Keiichi Uchimura, “A Complete U-V-Disparity Study for Stereovision Based 3-D Driving Environment Analysis,” in Proceedings of IEEE International Conference on 3-D Digital Imaging and Modeling, pp. 204-211, June 13-16, 2005.
[29: Perrollaz et al. 2012] Mathias Perrollaz, John-David Yoder, Amaury Negre, Anne Spalanzani and Christian Laugier, “A Visibility-Based Approach for Occupancy Grid Computation in Disparity Space,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 3, pp. 1383-1393, September 2012.
[30: Perrollaz et al. 2010] Mathias Perrollaz, Anne Spalanzani and Didier Aubert, “Probabilistic Representation of the Uncertainty of Stereo-Vision and Application to Obstacle Detection,” in Proceedings of IEEE Intelligent Vehicles Symposium, San Diego, USA, pp. 313-318, June 21-24, 2010.
[31: Oniga et al. 2010] Florin Oniga and Sergiu Nedevschi, “Processing Dense Stereo Data Using Elevation Maps: Road Surface, Traffic Isle, and Obstacle Detection,” IEEE Transactions on Vehicular Technology, vol. 59, no. 3, pp. 1172-1182, March 2010.
[32: Viola et al. 2003] Paul Viola, Michael J. Jones and Daniel Snow, “Detecting Pedestrians Using Patterns of Motion and Appearance,” in Proceedings of IEEE International Conference on Computer Vision (ICCV), Nice, France, vol. 2, pp. 734-741, October 13-16, 2003.
[33: Enzweiler et al. 2008] M. Enzweiler, P. Kanter and M. Gavrila, “Monocular Pedestrian Recognition Using Motion Parallax,” in Proceedings of IEEE Intelligent Vehicles Symposium, Eindhoven, Netherlands, pp. 792-797, June 4-6, 2008.
[34: Wolf et al. 2004] Denis Wolf and Gaurav S. Sukhatme, “Online Simultaneous Localization and Mapping in Dynamic Environments,” in Proceedings of the IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, vol. 2, pp. 1301-1307, April 26-May 1, 2004.
[35: Danescu et al. 2012] Radu Danescu, Cosmin Pantilie, Florin Oniga and Sergiu Nedevschi, “Particle Grid Tracking System Stereovision Based Obstacle Perception in Driving Environments,” IEEE Intelligent Transportation Systems Magazine, vol. 4, no. 1, pp. 6-20, January 26, 2012.
[36: Barth et al. 2009] Alexander Barth and Uwe Franke, “Estimating the Driving State of Oncoming Vehicles From a Moving Platform Using Stereo Vision,” IEEE Transactions on Intelligent Transportation Systems, vol. 10, no. 4, pp. 560-571, December 2009.
[37: Nedevschi et al. 2009] Sergiu Nedevschi, Corneliu Tomiuc and Silviu Bota, “Stereo-Based Pedestrian Detection for Collision Avoidance Applications,” IEEE Transactions on Intelligent Transportation Systems, vol. 10, no. 3, pp. 380-391, September 2009.
[38: Krotosky et al. 2007] Stephen J. Krotosky and M. M. Trivedi, “On Color-, Infrared-, and Multimodal-Stereo Approaches to Pedestrian Detection,” IEEE Transactions on Intelligent Transportation Systems, vol. 8, no. 4, pp. 619-629, December 2007.
[39: Li et al. 2009] Liyuan Li, Jerry Kah Eng Hoe, Shuicheng Yan and Xinguo Yu, “ML-Fusion based Multi-Model Human Detection and Tracking for Robust Human-Robot Interfaces,” in Proceedings of IEEE International Workshop on Applications of Computer Vision (WACV), Snowbird, UT, pp. 1-8, December 7-8, 2009.
[40: Zitnick et al. 2000] C. Lawrence Zitnick and Takeo Kanade, “A Cooperative Algorithm for Stereo Matching and Occlusion Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 7, pp. 675-684, July 2000.
[41: Comaniciu et al. 2003] Dorin Comaniciu, Visvanathan Ramesh and Peter Meer, “Kernel-Based Object Tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 5, pp. 564-577, May 2003.
[42: Steder et al. 2011] Bastian Steder, Radu Bogdan Rusu, Kurt Konolige and Wolfram Burgard, “Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries,” in Proceedings of IEEE International Conference on Robotics and Automation (ICRA), Shanghai, pp. 2601-2608, May 9-13, 2011.
Websites:
[43: SIFT Keypoint Detector from David Lowe 2013] SIFT Keypoint Detector. (2005, July). In David Lowe Personal Page. Retrieved April 3, 2013, from http://www.cs.ubc.ca/~lowe/keypoints/
[44: Kalman Filter from website 2013] Kalman Filter Toolbox for MATLAB. (2004, June 7). In UBC. Retrieved June 6, 2013, from http://www.cs.ubc.ca/~murphyk/Software/Kalman/kalman.html
[45: HSL and HSV from wiki 2013] HSL and HSV. (2013, June 6). In Wikipedia. Retrieved June 6, 2013, from http://en.wikipedia.org/wiki/HSL_and_HSV
[46: Connected-Component Labeling from wiki 2013] Connected-Component Labeling. (2013, June 6). In Wikipedia. Retrieved June 6, 2013, from http://en.wikipedia.org/wiki/Connected-component_labeling
[47: Random Sample Consensus from wiki 2013] RANSAC. (2013, May 13). In Wikipedia. Retrieved May 3, 2013, from http://en.wikipedia.org/wiki/RANSAC
[48: Interpolation from wiki 2013] Interpolation. (2013, May 31). In Wikipedia. Retrieved May 31, 2013, from http://en.wikipedia.org/wiki/Interpolation
[49: Accuracy For Stereo Vision from PointGrey 2010] Article 63: How is depth determined from a disparity image? (2010, July 19). In PointGrey Official Knowledge Base. Retrieved May 31, 2013, from http://www.ptgrey.com/support/kb/index.asp?a=4&q=85
[50: UTE120 Combo ExpressCard from Uptech 2013] UTE120 Combo ExpressCard. (2013, July). In Uptech Website. Retrieved July 30, 2013, from http://www.uptech.tw/product_detail.php?prod_id=488
[51: BumbleBee2 Product Datasheet from PointGrey 2013] BumbleBee2 Documents - Product Datasheet. (2012, June). In PointGrey Official Website. Retrieved July 12, 2013, from http://www.ptgrey.com/products/bumblebee2/bumblebee2_xb3_datasheet.pdf
[52: URG-04LX-UG01 from Hokuyo] Hokuyo URG-04LX-UG01 Documents - Product Datasheet. (2013, July 30). In Hokuyo Official Website. Retrieved July 30, 2013, from http://www.hokuyo-aut.jp/02sensor/07scanner/urg_04lx_ug01.html
[53: SICK LMS100 from SICK] SICK LMS100 Datasheet. (2013, July 30). In SICK Official Website. Retrieved July 30, 2013, from http://www.sick-automation.ru/images/File/pdf/DIV05/LMS100_manual.pdf
[54: Point Cloud Library from PCL Website 2013] Point Cloud Library. (2013, May 31). In PCL Website. Retrieved May 31, 2013, from http://pointclouds.org/
[55: NARF feature from PCL 2013] NARF feature from Point Cloud Library. (2013, July 30). In PCL Website. Retrieved July 30, 2013, from http://pointclouds.org/documentation/tutorials/narf_feature_extraction.php
[56: Rigid body from wiki 2013] Rigid body. (2013, June 27). In Wikipedia. Retrieved June 27, 2013, from http://en.wikipedia.org/wiki/Rigid_body
[57: OpenCV from OpenCV official website 2013] Open Source Computer Vision Library. (2013, June 27). In OpenCV official website. Retrieved June 27, 2013, from http://opencv.org/
Books:
[58: Gonzalez & Woods 2008] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd adapted ed., Editor: S. G. Miaou, Taiwan: Pearson, June 2008.
[59: Laganiere 2011] Robert Laganiere, OpenCV 2 Computer Vision Application Programming Cookbook, 1st ed., Editor: Neha Shetty, Packt Publishing Ltd., May 2011.
[60: Spong 2005] Mark W. Spong, Seth Hutchinson and M. Vidyasagar, Robot Modeling and Control, 1st ed., John Wiley & Sons, Inc., November 18, 2005.
[61: Szeliski 2010] Richard Szeliski, Computer Vision: Algorithms and Applications, 2011 ed., Springer, November 24, 2010.
[62: Thrun 2005] S. Thrun, W. Burgard and D. Fox, Probabilistic Robotics, Editor: R. Arkin, London: The MIT Press, 2005.
[63: Buhmann 2003] M. D. Buhmann, Radial Basis Functions: Theory and Implementations, Cambridge University Press, 2003.
[64: Bradski et al. 2008] Gary Bradski and Adrian Kaehler, Learning OpenCV, Editor: Mike Loukides, O’Reilly Media, Inc., 2008. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/5894 | - |
| dc.description.abstract | 三維環境重建是目前一項熱門且應用廣泛的議題,諸如室內環境導覽、虛擬實境以及微創手術之影像導覽系統。立體攝影機同時提供色彩及空間資訊,相較於雷射僅提供空間資訊或單一攝影機提供色彩資訊,更能完整描述環境狀態,提供充足的資訊於三維重建任務上。若能精確地將每一時刻攝影機的相對轉換關係估算出來,立體攝影機量測點便能夠放置在正確的世界座標上,進而建立出三維環境模型。因此首要的任務是利用連續影像上相同特徵點達成立體攝影機的定位。然而,由於立體攝影機的不確定性及錯誤特徵點匹配,不將離群匹配點剔除直接估測攝影機相對姿態將導致定位不精確或是錯誤估測。因此,隨機抽樣一致演算法在此論文中用來作為離群匹配對的剔除。另一方面,由於立體攝影機為被動式感測器,在許多情況如低紋理及光滑材質下,視差影像將產生許多破碎區域,影響三維重建所需的資訊量。因此本論文將提出一個資料前處理的方法,降低量測破碎,進而提高空間重建的品質。
此外,考量到動態環境下建置靜態地圖時,必須將動態物偵測出並將其濾除。因此本論文提出了一套物體偵測及追蹤演算法,以機率形式建立佔據網格地圖擷取出候選物體。接著,候選物體利用HSV色彩模型中的色相及飽和度分佈相似性對應到正確的資料庫物體,以解決資料關聯性問題。最後,物體狀態的更新以本論文所提出的更新策略搭配卡爾曼濾波器來達成。實驗結果顯示此系統能夠同時追蹤多重物體,即使物體在一段時間超出攝影機視野或是被遮擋後再被偵測,仍能夠準確追蹤。 | zh_TW |
| dc.description.abstract | Three-dimensional environment reconstruction is a key technology that has been widely researched over the last decade and has many applications, such as indoor environment navigation, virtual reality, and visual guidance systems for minimally invasive surgery. A stereo camera provides both color and spatial information and is therefore better suited to the 3D environment reconstruction task than sensors such as a laser range finder, which provides only spatial information, or a monocular camera, which provides only color information. Once the camera's relative pose at each time step is estimated precisely, the measurement points provided by the stereo camera can be placed at the correct positions in the global coordinate frame to reconstruct the 3D environment model. Thus, the most important task is to localize the camera pose by using the same feature points in consecutive frames. However, because of the uncertainty caused by stereo camera noise and feature point mismatching, estimating the camera pose directly without eliminating outliers could lead to an inaccurate or wrong result. Therefore, the Random Sample Consensus (RANSAC) algorithm is applied in this thesis to solve the outlier problem. On the other hand, because a stereo camera is a passive sensor, the disparity map contains many missing-data areas in situations such as measuring objects with low texture or glossy surfaces, which may degrade the quality of the reconstructed 3D model. Thus, a data preprocessing method is proposed to enhance the 3D reconstruction quality by reducing the missing-data areas.
In addition, when reconstructing a static 3D model of a dynamic scene, moving objects need to be detected and removed. Therefore, an object detection and tracking method is proposed: object candidates are extracted by constructing an occupancy grid map in a probabilistic representation. The distributions of hue and saturation in the HSV color space are then used to link each candidate to the corresponding database object, thereby solving the data association problem. Finally, the proposed update strategy with a Kalman filter is used to update the object states (minimal illustrative sketches of the localization and data-association steps are given after the metadata table below). The experimental results demonstrate that the system can track multiple objects simultaneously, and even when an object leaves the field of view for a while or is occluded and later re-detected, it can still be tracked correctly. | en |
| dc.description.provenance | Made available in DSpace on 2021-05-16T16:18:06Z (GMT). No. of bitstreams: 1 ntu-102-R00921014-1.pdf: 10554428 bytes, checksum: e2b85e22f1606d3ea9a715bf77dc7d7b (MD5) Previous issue date: 2013 | en |
| dc.description.tableofcontents | 中文摘要 i
ABSTRACT iii
Contents vi
List of Figures viii
List of Tables xii
Chapter 1 1
1.1 Motivation 1
1.2 Problem Formulation 4
1.3 Contribution 6
1.4 Organization of the Thesis 8
Chapter 2 9
2.1 Three-Dimensional Environment Reconstruction 9
2.2 Object Detection and Tracking 13
Chapter 3 18
3.1 Pin-hole Camera Model 18
3.2 Random Sample Consensus 20
3.3 Image Processing and Description 23
3.3.1 HSV Color Space 23
3.3.2 Morphological Image Processing 23
3.3.3 Connected-Component Labeling 28
3.4 Radial Basis Function 30
Chapter 4 33
4.1 Stereo Camera Localization and Mapping 34
4.1.1 Feature Point Extraction 39
4.1.2 Feature Point Matching in Two Consecutive Frames 43
4.1.3 Estimate the relative transformation matrix of rigid body by Least-Squares method using SVD 46
4.1.4 Camera Pose Estimation with RANSAC Outlier Rejection 51
4.2 Stereo Vision Refinement 58
4.2.1 Forbidden Area Detection and Elimination 59
4.2.2 Holes Detection 66
4.2.3 Dual Orthogonal Linear Interpolation 68
4.2.4 Radial Basis Function 71
Chapter 5 74
5.1 System Architecture 74
5.2 Object Detection 76
5.2.1 Visibility-Based U-Disparity Occupancy Grid 77
5.2.2 Post-processing 85
5.2.3 Object Candidates Bounding Box Extraction 89
5.3 Object Tracking 91
5.3.1 Remove Background Pixels in Bounding Box 91
5.3.2 Registration between Candidates and Objects using Feature Vectors 93
5.3.3 Update Strategy with Kalman Filter 102
Chapter 6 107
6.1 Experimental Hardware 107
6.2 Stereo Camera Localization and Mapping 112
6.2.1 Experimental Scenario Setup 112
6.2.2 The Effect of RANSAC Outlier Rejection Algorithm 116
6.2.3 Relation between Localization Accuracy and Mapping Quality 118
6.2.4 The Accuracy of Feature-based Localization Algorithm 122
6.2.5 Three-Dimensional Reconstruction 125
6.2.6 Mapping Quality and the Proposed Stereo Refinement Algorithm Evaluation 129
6.2.7 Evaluate the Proposed Stereo Refinement in Spatial Aspect 135
6.3 Object Detection and Tracking 148
6.3.1 Object Detection 150
6.3.2 Object Tracking 156
Chapter 7 167
7.1 Conclusion 167
7.2 Future Work 168
References 171 | |
| dc.language.iso | en | |
| dc.subject | 基於可視度之佔據網格 | zh_TW |
| dc.subject | 立體攝影機 | zh_TW |
| dc.subject | RGB-D定位 | zh_TW |
| dc.subject | 三維環境重建 | zh_TW |
| dc.subject | 物體追蹤 | zh_TW |
| dc.subject | Stereo camera | en |
| dc.subject | RGB-D localization | en |
| dc.subject | 3D environment reconstruction | en |
| dc.subject | object tracking | en |
| dc.subject | visibility-based occupancy grid | en |
| dc.title | 利用立體攝影機進行色彩與深度感測以達成三維環境重建及物體追蹤 | zh_TW |
| dc.title | Three-Dimensional Environment Reconstruction and Object Tracking Using RGB-D Sensing of Stereo Camera | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 101-2 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 簡忠漢,李後燦,黃正民 | |
| dc.subject.keyword | 立體攝影機,RGB-D定位,三維環境重建,物體追蹤,基於可視度之佔據網格 | zh_TW |
| dc.subject.keyword | Stereo camera,RGB-D localization,3D environment reconstruction,object tracking,visibility-based occupancy grid | en |
| dc.relation.page | 184 | |
| dc.rights.note | 同意授權(全球公開) | |
| dc.date.accepted | 2013-08-16 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
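
The two processing steps referenced in the abstract can be illustrated with short sketches. These are not the thesis implementation; they only restate the ideas described above (least-squares rigid-body fitting with SVD in the style of Arun et al. [15], wrapped in RANSAC [47], and hue-saturation histogram matching for data association). All function names, thresholds, bin counts, and the choice of Bhattacharyya distance are illustrative assumptions.

A minimal sketch of stereo camera localization from matched 3-D feature points in two consecutive frames, assuming the matches are given as two N x 3 arrays:

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q (both N x 3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                      # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def ransac_pose(P, Q, iters=200, inlier_thresh=0.05, seed=0):
    """RANSAC over minimal 3-point samples; refit the transform on the largest inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = rigid_transform_svd(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)   # residual per matched pair
        inliers = err < inlier_thresh                    # threshold in metres (assumed value)
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = rigid_transform_svd(P[best_inliers], Q[best_inliers])
    return R, t, best_inliers
```

Chaining the per-frame transforms gives the camera trajectory, and the stereo measurement points can then be transformed into the global frame for mapping.

A sketch of the data-association idea: describe each detected candidate by its hue-saturation distribution in HSV space and match it to the stored object with the most similar histogram. OpenCV's histogram utilities are used here for brevity; the distance threshold is an assumed value:

```python
import cv2
import numpy as np

def hs_histogram(bgr_patch, mask=None, bins=(30, 32)):
    """Normalized 2-D hue-saturation histogram of an object patch."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], mask, list(bins), [0, 180, 0, 256])
    cv2.normalize(hist, hist, 1.0, 0.0, cv2.NORM_L1)    # turn counts into a distribution
    return hist

def associate(candidate_hist, object_hists, max_dist=0.4):
    """Index of the best-matching database object, or None if every match is too dissimilar."""
    dists = [cv2.compareHist(candidate_hist, h, cv2.HISTCMP_BHATTACHARYYA)
             for h in object_hists]
    best = int(np.argmin(dists))
    return best if dists[best] < max_dist else None
```

In the thesis the matched object's state is then updated with the proposed update strategy and a Kalman filter; any standard constant-velocity Kalman filter could follow the association step in a sketch like this one.
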
| Appears in Collections: | 電機工程學系 | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-102-1.pdf | 10.31 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
