Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/56451

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 連豊力(Feng-Li Lian) | |
| dc.contributor.author | Yung-Cheng Huang | en |
| dc.contributor.author | 黃詠政 | zh_TW |
| dc.date.accessioned | 2021-06-16T05:29:15Z | - |
| dc.date.available | 2015-08-17 | |
| dc.date.copyright | 2014-08-17 | |
| dc.date.issued | 2014 | |
| dc.date.submitted | 2014-08-14 | |
| dc.identifier.citation | [1: Han et al. 2013]
Jungong Han, Ling Shao, Dong Xu, and Jamie Shotton, “Enhanced Computer Vision with Microsoft Kinect Sensor: A Review,” IEEE Transactions on Cybernetics, vol. 43, no. 5, Oct. 2013. [2: Bailey & Durrant-Whyte 2006] Tim Bailey and Hugh Durrant-Whyte, “Simultaneous Localization and Mapping (SLAM): Part II State of the Art,” IEEE Robotics & Automation Magazine, vol. 13, no. 3, pp. 108–117, September 2006. [3: Henry et al. 2012] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, “RGB-D mapping - Using Kinect-style depth cameras for dense 3D modeling of indoor environments,” The International Journal of Robotics Research, vol. 31, no. 5, pp. 647-663, February 10, 2012. [4: Biswas & Veloso 2012] J. Biswas and M. Veloso, “Depth camera based indoor mobile robot localization and navigation,” in Proceedings of the 2012 IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, Minnesota, pp.1697–1702, May 14-18, 2012. [5: Gedik & Alatan 2013] O. Serdar Gedik and A. Aydın Alatan, “3-D Rigid Body Tracking Using Vision and Depth Sensors,” IEEE Transactions on Cybernetics, vol. 43, no. 5, October 2013. [6: Almeida et al. 2013]. Joao Emilio Almeida, Rosaldo J. F. Rossetti, and Antonio Leca Coelho, “Mapping 3D Character Location for Tracking Players’ Behaviour,” in Proceedings of 2013 8th Iberian Conference on Information Systems and Technologies (CISTI), Lisboa, Portugal, pp. 1-5, 19-22 June 2013. [7: Lowe 2004] D. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 2, no. 60, pp. 91-110, 2004. [8: Berkmann & Caelli 1994] Jens Berkmann and Terry Caelli. “Computation of Surface Geometry and Segmentation Using Covariance Techniques.” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 16, no. 11, pp. 1114–1116, November 1994. [9: Scaramuzza & Fraundorfer 2011] Davide Scaramuzza and Friedrich Fraundorfer, “Visual Odometry [Tutorial],” IEEE Robotics and Automation Magazine, vol. 
18, no. 4, pp. 80-92, December 2011. [10: Hsu et al. 2014] Chih-Ming Hsu, Feng-Li Lian, Cheng-Ming Huang, “A Systematic Spatiotemporal Modeling Framework for Characterizing Traffic Dynamics Using Hierarchical Gaussian Mixture Modeling and Entropy Analysis,” IEEE Systems Journal, May 2014. [11: Davison et al. 2007] A. Davison, I. Reid, N. D. Molton, and O. Stasse, “MonoSLAM: Real-time single camera SLAM,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, June 2007. [12: Nister et al. 2004] David Nister, Oleg Naroditsky and James Bergen, “Visual Odometry,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, vol.1, pp. 652-659, June 27-July 2, 2004. [13: Henry et al. 2010] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, “RGB-D mapping: Using depth cameras for dense 3-D modeling of indoor environments,” in Proceedings of the 12th International Symposium on Experimental Robotics (ISER), Delhi, India, 18-21 December 2010. [14: Engelhard et al. 2011] N. Engelhard, F. Endres, J. Hess, J. Sturm, and W. Burgard, “Realtime 3-D visual SLAM with A hand-held camera,” in Proceedings of the RGB-D Workshop on 3D Perception in Robotics at the European Robotics Forum, Vasteras, Sweden, 2011. [15: Rosten et al. 2010] E. Rosten, R. Porter, and T. Drummond, “Faster and Better: A machine learning approach to corner detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 105–119, Jan. 2010. [16: Pathak et al. 2010] K. Pathak, A. Birk, N. Vaškevičius, and J. Poppinga, “Fast Registration Based on Noisy Planes With Unknown Correspondences for 3-D Mapping,” IEEE Transactions on Robotics, vol. 26, no. 3,pp. 424-441, June 2010. [17: Newcombe et al. 2011] R. Newcombe, A. Davison, S. Izadi, P. Kohli, O. Hilliges, J. Shotton, D. Molyneaux, S. Hodges, D. Kim, and A. 
Fitzgibbon, “KinectFusion: Real-time dense surface mapping and tracking,” in Proceedings of the 2011 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Washington, DC, USA, pp. 127–136, 26-29 Oct. 2011. [18: Izadi et al. 2011] S. Izadi, D. Kim, O. Hilliges, D. Molyneaux, R. Newcombe, P. Kohli, J. Shotton, S. Hodges, D. Freeman, A. Davison, and A. Fitzgibbon, “KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera,” in Proceedings of the 24th annual ACM symposium on User interface software and technology, New York, NY, USA, pp. 559–568, Oct. 2011. [19: Sandhu et al. 2010] Romeil Sandhu, Samuel Dambreville, and Allen Tannenbaum, “Point Set Registration via Particle Filtering and Stochastic Dynamics,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 8, August 2010. [20: Bay et al. 2008] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, “Speeded-Up Robust Features (SURF),” Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346-359, June 2008. [21: Calonder et al. 2010] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, “BRIEF: Binary Robust Independent Elementary Features,” in Proceedings of the 11th European Conference on Computer Vision (ECCV), Heraklion, Greece, pp. 778–792, 5-11 September 2010. [22: Rublee et al. 2011] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to SIFT or SURF,” in Proceedings of 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, pp. 2564–2571, 6-13 Nov. 2011. [23: Leutenegger et al. 2011] S. Leutenegger, M. Chli, and R. Y. Siegwart, “BRISK: Binary Robust Invariant Scalable Keypoints,” in Proceedings of 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, pp. 2548–2555, 6-13 Nov. 2011. [24: Alahi et al. 2012] A. Alahi, R. Ortiz, and P. 
Vandergheynst, “FREAK: Fast Retina Keypoint,” in Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, Rhode Island, USA, pp. 510–517, 16-21 June 2012. [25: Johnson & Hebert 1999] A. Johnson and M. Hebert, “Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 5, pp. 433 – 449, May 1999. [26: Frome et al. 2004] A. Frome, D. Huber, R. Kolluri, T. Bulow, and J. Malik, “Recognizing Objects in Range Data Using Regional Point Descriptors,” in Proceedings of the 8th European Conference on Computer Vision (ECCV), Prague, Czech Republic, May 11-14, 2004. [27: Tombari et al. 2010] F. Tombari, S. Salti, and L. Di Stefano. “Unique Signatures of Histograms for Local Surface Description.” in Proceedings of the 11th European Conference on Computer Vision (ECCV), Heraklion, Greece, pp. 356–369, 5-11 September 2010. [28: Rusu et al. 2009] R. B. Rusu, N. Blodow, and M. Beetz, “Fast Point Feature Histograms (FPFH) for 3D Registration,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, pp. 3212–3217, May 12-17, 2009. [29: Hwang et al. 2012] Hyoseok Hwang, Seungyong Hyung, Sukjune Yoon, and Kyungshik Roh, “Robust Descriptors for 3D Point Clouds using Geometric and Photometric Local Feature,” in Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, pp. 4027-4033, Oct. 7-12, 2012. [30: Nister et al. 2004] David Nister, Oleg Naroditsky, and James Bergen, “Visual Odometry,” in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, vol. 1, pp. 652-659, June 27-July 2, 2004. [31: Gionis et al. 1999] A. Gionis, P. Indyk, and R. 
Motwani, “Similarity Search in High Dimensions via Hashing,” in Proceedings of the 25th International Conference on Very Large Data Bases (VLDB), Edinburgh, Scotland, pp. 518-529, September 7-10, 1999. [32: Golub & Reinsch 1970] G. H. Golub and C. Reinsch, “Singular Value Decomposition and Least Squares Solutions,” Numerische Mathematik, vol. 14, no. 5, pp. 403-420, 1970. [33: Besl & McKay 1992] P. J. Besl and N. D. McKay, “A Method for Registration of 3-D Shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239-256, Feb. 1992. [34: Chen & Medioni 1992] Y. Chen and G. Medioni, “Object Modeling by Registration of Multiple Range Images,” Image and Vision Computing, vol. 10, no. 3, pp. 145–155, 1992. [35: Magnusson et al. 2007] M. Magnusson, A. Lilienthal, and T. Duckett, “Scan registration for autonomous mining vehicles using 3D-NDT,” Journal of Field Robotics, vol. 24, no. 10, pp. 803–827, 2007. [36: Laganiere 2011] Robert Laganiere, “OpenCV 2 Computer Vision Application Programming Cookbook,” 1st ed., Packt Publishing Ltd., May 2011. [37: Shakarji 1998] Craig Shakarji, “Least-Squares Fitting Algorithms of the NIST Algorithm Testing System,” Journal of Research of the National Institute of Standards and Technology, vol. 103, no. 6, pp. 633–641, November-December 1998. [38: Berger et al. 2011] Kai Berger, Kai Ruhl, Christian Bruemmer, Yannic Schroeder, Alexander Scholz, and Marcus Magnor, “Markerless Motion Capture Using Multiple Color-Depth Sensors,” in Proceedings of the 16th International Workshop on Vision, Modeling and Visualization (VMV), Berlin, Germany, pp. 317–324, October 2011. [39: Smisek et al. 2011] Jan Smisek, Michal Jancosek, and Tomas Pajdla, “3D with Kinect,” in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, pp. 1154-1160, 6-13 Nov. 2011. [40: Li et al. 
2011] Hongsheng Li, Tian Shen, Xiaolei Huang, “Approximately Global Optimization for Robust Alignment of Generalized Shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 6, pp. 1116-1131, June 2011. [41: Pomerleau et al. 2012] F. Pomerleau, M. Liu, F. Colas, and R. Siegwart, “Challenging Data Sets for Point Cloud Registration Algorithms,” International Journal of Robotic Research, vol. 31, no. 14, pp. 1705–1711, Dec. 2012. [42: Konolige & Agrawal 2008] K. Konolige, and M. Agrawal, “FrameSLAM: From Bundle Adjustment to Real-Time Visual Mapping,” IEEE Transactions on Robotics, vol. 24, no. 5, pp. 1066–1077, Oct. 2008. [43: Myronenko & Song 2010] Andriy Myronenko and Xubo Song, “Point Set Registration: Coherent Point Drift,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, December 2010. [44: Low 2004] Kok-Lim Low, “Linear Least-Squares Optimization for Point-to-Plane ICP Surface Registration,” Technical Report TR04004, Department of Computer Science, University of North Carolina at Chapel Hill, February 2004. [45: Bulow & Birk 2013] Heiko Bulow and Andreas Birk, “Spectral 6 DOF Registration of Noisy 3D Range Data with Partial Overlap,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 4, pp. 954-969, April 2013. [46: Kim et al. 2009] Y. M. Kim, C. Theobalt, J. Diebel, J. Kosecka, B. Micusik, and S. Thrun, “Multi-View Image and ToF Sensor Fusion for Dense 3D Reconstruction,” in Proceedings of IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), Kyoto, Japan, pp. 1542–1549, Sept. 27 2009- Oct. 4 2009. [47: Paz et al. 2008] L. Paz, P. Pinies, J. Tardos, and J. Neira, “Large-Scale 6-DOF SLAM with Stereo-in-Hand,” IEEE Transactions on Robotics, vol. 24, no. 5, pp. 946–957, Oct. 2008. [48: Men et al. 
2011] Hao Men, Biruk Gebre, and Kishore Pochiraju, “Color Point Cloud Registration with 4D ICP Algorithm,” in Proceedings of IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, pp. 1511-1516, May 9-13, 2011. [49: Tam et al. 2013] G. K. L. Tam, Zhi-Quan Cheng, Yu-Kun Lai, F. C. Langbein, YongHuai Liu, D. Marshall, R. R. Martin, Xian-Fan Sun, and P. L. Rosin, “Registration of 3D Point Clouds and Meshes: A Survey from Rigid to Nonrigid,” IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 7, July 2013. [50: Wang et al. 2001] C. Wang, H. Tanahashi, H. Hirayu, Y. Niwa, and K. Yamamoto, “Comparison of Local Plane Fitting Methods for Range Data,” in Proceedings of 2001 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Kauai, HI, USA, vol. 1, pp. 663–669, 8-14 December 2001. [51: Jung & Ho 2012] Jae-Il Jung and Yo-Sung Ho, “Depth Image Interpolation Using Confidence-based Markov Random Field,” IEEE Transactions on Consumer Electronics, vol. 58, no. 4, pp. 1399-1402, November 2012. [52: Rusu 2009] Radu Bogdan Rusu, “Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments,” Ph.D. dissertation, Computer Science Department, Technische Universitaet Muenchen, Germany, October 2009. [53: Camera Calibration Toolbox for Matlab 2013] Camera Calibration Toolbox for Matlab. (2013, December 15). In Jean-Yves Bouguet's personal page. Retrieved December 15, 2013, from http://www.vision.caltech.edu/bouguetj/calib_doc/ [54: Kinect for Windows Sensor Components and Specifications] Kinect for Windows Sensor Components and Specifications. In Microsoft Developer Network Website. Retrieved June 05, 2013, from http://msdn.microsoft.com/en-us/library/jj131033.aspx [55: URG-04LX-UG01 from Hokuyo] Hokuyo URG-04LX-UG01 Product Datasheet. (2013, July 30). In Hokuyo Official Website. 
Retrieved July 30, 2013, from http://www.hokuyo-aut.jp/02sensor/07scanner/urg_04lx_ug01.html [56: Iterative Closest Point from MATLAB CENTRAL 2014] Iterative Closest Point. (2014, February 12). In MATLAB CENTRAL. Retrieved February 12, 2014, from http://www.mathworks.com/matlabcentral/fileexchange/27804-iterative-closest-point [57: Point Cloud Library website] Point Cloud Library. (2014, June 9). In PCL website. Retrieved June 6, 2014, from http://pointclouds.org/ [58: Estimating Surface Normals in a Point Cloud] Estimating Surface Normals. In PCL website. Retrieved June 6, 2014, from http://pointclouds.org/documentation/tutorials/normal_estimation.php#normal-estimation [59: SIFT Keypoint Detector from David Lowe 2013] SIFT Keypoint Detector. (2005, July). In David Lowe's personal page. Retrieved April 3, 2013, from http://www.cs.ubc.ca/~lowe/keypoints/ [60: The CloudViewer] The CloudViewer. In PCL website. Retrieved June 6, 2014, from http://pointclouds.org/documentation/tutorials/cloud_viewer.php#cloud-viewer [61: ASL Dataset: Apartment] ASL Dataset: Apartment. (2014, June 18). In the Autonomous Systems Lab Website. Retrieved June 18, 2014, from https://github.com/ethz-asl/libpointmatcher/blob/master/doc/ICPIntro.md | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/56451 | - |
| dc.description.abstract | 隨著低成本 RGB-D 感測器的發展以及二維即時定位與地圖建構的成功,環境的三維表面重建成了熱門的研究議題,大多數的重建方法著重在資料的轉換,藉由這些資料的重複區域來計算它們相對的位置,將不同時間所擷取到不同視角的四維資料集(RGB-D dataset)整合在同一坐標系下。三維的地圖可以應用在機器人視覺、虛擬擴增實境以及娛樂上,因為地圖提供了完整的色彩資訊以及幾何資訊,對於人類或者是機器人在認知環境有著相當大的幫助,例如微創手術。一般來說,三維環境重建的方法可以分為三個階段來討論,分別是特徵估測、去除離群匹配點以及最後的相對位置估算。特徵估測是利用找尋環境中具有代表性的特徵資料點來取代原本的全部資料點,並且將錯誤的特徵資料匹配對濾除,最後,再將剩餘正確的特徵點帶入疊代最近點演算法中,計算其相對位置,並利用計算出的相對位置將不同時擷取到不同視角的資料轉換至同一坐標系下。
在本文中,我們所提出的方法是利用結構亂度的特徵來描述環境中幾何資訊的變化,再來,利用亂度影像的配對來濾除錯誤的特徵資料對,亂度影像的配對方式是利用尋找兩張影像共同區域中最大亂度的積,此方法不僅可濾除錯誤匹配對,並且可以提供粗略的初始位置來當作疊代最近點演算法的起始位置,再由疊代最近點演算法計算出更精確的相對位置,最後,將每個位置所擷取的點雲利用座標轉換,轉至同一座標系中並畫在三維空間上形成一個三維地圖。 在本文的實驗結果裡展示了兩組資料集,分別是網狀系統控制實驗室的資料集以及Autonomous Systems Lab 所提供的資料集。實驗的結果顯示我們所提出的方法的準確度優於傳統的疊代最近點演算法,並且相對於色彩定位演算法,我們提出的方法不受色彩變化的影響甚至在黑暗的環境中也可進行。 | zh_TW |
| dc.description.abstract | With the development of low-cost RGB-D sensors, which capture high-resolution depth and visual information synchronously, and the success of two-dimensional simultaneous localization and mapping, three-dimensional surface reconstruction of environments has become a popular research topic. Most 3D environment reconstruction approaches rely on data registration: three-dimensional datasets scanned from different viewpoints are transformed into the same coordinate system by aligning the overlapping components of these sets. The constructed 3D map can be used in robot vision, virtual and augmented reality, and entertainment. By providing both color and spatial information, such maps help humans or robots easily perceive their environments, for example in minimally invasive surgery. In general, the task of three-dimensional environment reconstruction can be divided into three stages: feature descriptor estimation, outlier rejection, and transformation estimation. First, feature descriptor estimation finds distinct features with special characteristics. Second, outlier removal discards the incorrect corresponding pairs between two consecutive frames. Third, transformation estimation uses the correct corresponding pairs to find the transformation matrix that transfers frames from different viewpoints into the global coordinate system.
In this thesis, the proposed method uses a structure-entropy based feature to describe geometric variation in the environment. Regions of spatial structural change can be extracted based on their structure-entropy energy. Then, a new outlier-removal method, called entropy image matching, is presented. By finding the maximum entropy energy of the overlapping area, the relative pose between two consecutive frames can be estimated roughly, which serves as a good initial guess for transformation estimation. In the final step, transformation estimation applies the iterative closest point (ICP) algorithm to the remaining regions to determine a rigid transformation matrix. With that transformation matrix, all frames can be transformed into the global coordinate system and plotted as a 3D virtual map in point-cloud format. Experimental results are demonstrated on two datasets: the Networked Control Systems Laboratory (NCSLab) dataset and the Autonomous Systems Lab (ASL) dataset repository: Apartment. The results show that the proposed method is more accurate than the traditional ICP algorithm and, compared to RGB-based feature mapping, is unaffected by color changes and works even in complete darkness. A 3D mapping system that generates 3D maps of indoor environments from spatial-information features is presented. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-16T05:29:15Z (GMT). No. of bitstreams: 1 ntu-103-R01921063-1.pdf: 10378210 bytes, checksum: e20dc57300389fa7bd0cce0489c63f74 (MD5) Previous issue date: 2014 | en |
| dc.description.tableofcontents | 摘要; ABSTRACT; CONTENTS; LIST OF FIGURES; LIST OF TABLES;
Chapter 1 Introduction: 1.1 Motivation; 1.2 Problem Formulation; 1.3 Contributions; 1.4 Organization of the Thesis;
Chapter 2 Background and Literature Survey: 2.1 Three-Dimensional Environment Reconstruction; 2.2 Feature Descriptor Estimation; 2.3 Outlier Rejection; 2.4 Transformation Estimation;
Chapter 3 Related Algorithms: 3.1 Pin-hole Model; 3.2 Point Cloud; 3.3 Surface Normal Vector Calculation; 3.4 Entropy Estimation; 3.5 Iterative Closest Point;
Chapter 4 3D Environment Reconstruction: 4.1 Camera Calibration; 4.2 Feature Descriptor Calculation; 4.3 Entropy Image Matching; 4.4 Geometric Relationship Estimation;
Chapter 5 Experimental Results and Analysis: 5.1 Hardware (5.1.1 Microsoft Kinect Sensor; 5.1.2 Laser Scanner; 5.1.3 Digital Single Lens Reflex Camera); 5.2 Properties of 3D Environment Reconstruction Algorithms (5.2.1 Implementation of Dense Point-to-Point ICP Algorithm; 5.2.2 Implementation of Entropy-based Feature with ICP Algorithm; 5.2.3 Implementation of RGB-based Feature Mapping Algorithm; 5.2.4 Implementation of Camera Localization; 5.2.5 Implementation of PSO-ICP Algorithm; 5.2.6 Parameters of Algorithms Implementation); 5.3 NCSLab Scene Dataset (5.3.1 Experimental Setup; 5.3.2 Position Estimation Accuracy; 5.3.3 Analysis of Localization Error; 5.3.4 3D Environment Reconstruction; 5.3.5 NCSLab Dark Scene Dataset); 5.4 ASL Datasets Repository: Apartment (5.4.1 Experimental Results; 5.4.2 Overlapping Task Performance);
Chapter 6 Conclusions and Future Works: 6.1 Conclusions; 6.2 Future Works;
APPENDIX A; REFERENCES | |
| dc.language.iso | en | |
| dc.subject | 座標轉換 | zh_TW |
| dc.subject | RGB-D 感測器 | zh_TW |
| dc.subject | 三維環境重建 | zh_TW |
| dc.subject | 結構亂度 | zh_TW |
| dc.subject | 點雲 | zh_TW |
| dc.subject | RGB-D sensor | en |
| dc.subject | rigid transformation | en |
| dc.subject | point cloud | en |
| dc.subject | structure entropy | en |
| dc.subject | 3D environment reconstruction | en |
| dc.title | 利用結構亂度特徵進行室內環境三維表面重建 | zh_TW |
| dc.title | Three-Dimensional Surface Reconstruction of Indoor Environment Based on Structure-Entropy Feature | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 102-2 | |
| dc.description.degree | 碩士 (Master) | |
| dc.contributor.oralexamcommittee | 簡忠漢,李後燦,黃正民 | |
| dc.subject.keyword | RGB-D 感測器, 三維環境重建, 結構亂度, 點雲, 座標轉換 | zh_TW |
| dc.subject.keyword | RGB-D sensor, 3D environment reconstruction, structure entropy, point cloud, rigid transformation | en |
| dc.relation.page | 129 | |
| dc.rights.note | 有償授權 (paid-access authorization) | |
| dc.date.accepted | 2014-08-14 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
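The abstract above describes a three-stage pipeline: a structure-entropy feature computed from local surface geometry, entropy-image matching for outlier rejection, and ICP for rigid transformation estimation. As a rough illustration only (not the thesis's actual implementation), the sketch below shows two computable pieces of that pipeline in Python with NumPy: the Shannon entropy of a local surface-normal direction histogram, and the closed-form SVD-based least-squares alignment (refs [32], [33]) that forms the inner step of each ICP iteration. The function names, the azimuth/elevation binning scheme, and the NumPy dependency are all assumptions made for this example.

```python
import numpy as np

def structure_entropy(normals, bins=8):
    """Shannon entropy of a surface-normal direction histogram.

    `normals` is an (N, 3) array of unit vectors from a local
    neighborhood. The 2D azimuth/elevation binning is an
    illustrative choice, not the thesis's exact formulation.
    """
    az = np.arctan2(normals[:, 1], normals[:, 0])        # azimuth angle
    el = np.arcsin(np.clip(normals[:, 2], -1.0, 1.0))    # elevation angle
    hist, _, _ = np.histogram2d(az, el, bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]                                         # skip empty bins
    return float(-(p * np.log2(p)).sum())

def rigid_transform(P, Q):
    """Closed-form least-squares rigid alignment via SVD (refs [32], [33]).

    P, Q are (N, 3) arrays of corresponding points; returns (R, t)
    such that Q ~= P @ R.T + t. This is the step solved inside each
    ICP iteration once correspondences are fixed.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))               # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

In a full pipeline along the lines the abstract sketches, the entropy would be evaluated over local windows of the depth frame to form the entropy image used for matching, while the rigid alignment would be called repeatedly inside the ICP loop after each nearest-neighbor correspondence search.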
| Appears in Collections: | 電機工程學系 | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-103-1.pdf (Restricted Access) | 10.13 MB | Adobe PDF | |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
