Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/50775
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 連豊力(Feng-Li Lian) | |
dc.contributor.author | Yu-Min Wang | en |
dc.contributor.author | 王喻民 | zh_TW |
dc.date.accessioned | 2021-06-15T12:57:40Z | - |
dc.date.available | 2021-02-20 | |
dc.date.copyright | 2021-02-20 | |
dc.date.issued | 2021 | |
dc.date.submitted | 2021-02-05 | |
dc.identifier.citation | [1: Pérez et al. 2016] Luis Pérez, Íñigo Rodríguez, Nuria Rodríguez, Rubén Usamentiaga, and Daniel F. García, “Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review,” Sensors, Vol. 16, No. 3, March 2016. [2: Buchholz 2015] Dirk Buchholz, “Bin-Picking: New Approaches for a Classical Problem,” 1st ed., vol. 44, Switzerland: Springer, 2015. [3: Du et al. 2019] Guoguang Du, Kai Wang, and Shiguo Lian, “Vision-based Robotic Grasping from Object Localization, Pose Estimation, Grasp Detection to Motion Planning: A Review,” in arXiv: 1905.06658v1, May. 16, 2019. [4: Fujita et al. 2019] M. Fujita, Y. Domae, A. Noda, G. A. Garcia Ricardez, T. Nagatani, A. Zeng, S. Song, A. Rodriguez, A. Causo, I. M. Chen, and T. Ogasawara, “What are the important technologies for bin picking? Technology analysis of robots in competitions based on a set of performance metrics,” Journal of Advanced Robotics, pp. 1-15, December 2019. [5: Levine et al. 2017] Sergey Levine, Peter Pastor, Alex Krizhevsky, Julian Ibarz, and Deirdre Quillen, “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” The International Journal of Robotics Research, Vol. 37, No. 4-5, pp. 421-436, June 2017. [6: Ikeuchi et al. 1983] Katsushi Ikeuchi, Berthold K.P. Horn, Shigemi Nagata, Tom Callahan, and Oded Feirigold, “Picking up an object from a pile of objects,” in Proceedings of the First International Symposium on Robotics Research, Vol. 2, pp. 139-166, May 1983. [7: Dessimoz et al. 1984] Jean-Daniel Dessimoz, John R. Birk, Robert B. Kelley, Henrique A. S. Martins, and Chi Lin, “Matched Filters for Bin Picking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-6, No. 6, pp. 686-697, November 1984. [8: Bolles Horaud 1986] Robert C. Bolles and Patrice Horaud, “3DPO: A Three-Dimensional Part Orientation System,” The International Journal of Robotics Research, Vol. 5, No. 3, pp. 
3-26, September 1986. [9: Choi et al. 2012] Changhyun Choi, Yuichi Taguchi, Oncel Tuzel, Ming-Yu Liu, and Srikumar Ramalingam, “Voting-Based Pose Estimation for Robotic Assembly Using a 3D Sensor,” IEEE International Conference on Robotics and Automation, pp. 1724-1731, May 2012. [10: Nieuwenhuisen et al. 2013] Matthias Nieuwenhuisen, David Droeschel, Dirk Holz, Jörg Stückler, Alexander Berner, Jun Li, Reinhard Klein, and Sven Behnke, “Mobile Bin Picking with an Anthropomorphic Service Robot,” IEEE International Conference on Robotics and Automation, pp. 2327-2334, May 2013. [11: Domae et al. 2014] Yukiyasu Domae, Haruhisa Okuda, Yuichi Taguchi, Kazuhiko Sumi, and Takashi Hirai, “Fast Graspability Evaluation on Single Depth Maps for Bin Picking with General Grippers,” IEEE International Conference on Robotics and Automation, pp. 1997-2004, May 31- June 7, 2014. [12: Schwarz et al. 2017] Max Schwarz, Anton Milan, Arul Selvam Periyasamy, and Sven Behnke, “RGB-D object detection and semantic segmentation for autonomous manipulation in clutter,” The International Journal of Robotics Research, Vol. 37, No. 4-5, pp. 437-451, June 2017. [13: Lowe 1999] David G. Lowe, “Object Recognition from Local Scale-Invariant Features,” in Proceedings of the Seventh IEEE International Conference on Computer Vision, pp. 1150-1157, September 1999. [14: Bay et al. 2006] Herbert Bay, Tinne Tuytelaars, and Luc Van Gool, “SURF: Speeded Up Robust Features,” in Proceedings of European Conference on Computer Vision, pp. 404-417, May 2006. [15: Rublee et al. 2011] Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski, “ORB: an efficient alternative to SIFT or SURF,” International Conference on Computer Vision, pp. 2564-2571, November 2011. [16: Lepetit et al. 2009] Vincent Lepetit, Francesc Moreno-Noguer, and Pascal Fua, “EPnP: An Accurate O(n) Solution to the PnP Problem,” International Journal of Computer Vision, vol. 81, no. 2, pp. 155-166, February 2009. [17: Rusu et al. 
2009] Radu Bogdan Rusu, Nico Blodow, and Michael Beetz, “Fast Point Feature Histograms (FPFH) for 3D Registration,” IEEE International Conference on Robotics and Automation, pp. 3212-3217, May 2009. [18: Salti et al. 2014] Samuele Salti, Federico Tombari, and Luigi Di Stefano, “SHOT: Unique signatures of histograms for surface and texture description,” Computer Vision and Image Understanding, Vol. 125, pp. 251-264, August 2014. [19: Besl McKay 1992] Paul J. Besl and Neil D. McKay, “A Method for Registration of 3-D Shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 2, pp. 239-256, February 1992. [20: Mian et al. 2006] Ajmal S. Mian, Mohammed Bennamoun, and Robyn Owens, “Three-Dimensional Model-Based Object Recognition and Segmentation in Cluttered Scenes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 28, No. 10, pp. 1584-1601, October 2006. [21: Liebelt et al. 2008] Joerg Liebelt, Cordelia Schmid, and Klaus Schertler, “Viewpoint-Independent Object Class Detection using 3D Feature Maps,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, June 2008. [22: Drost et al. 2010] Bertram Drost, Markus Ulrich, Nassir Navab, and Slobodan Ilic, “Model Globally, Match Locally: Efficient and Robust 3D Object Recognition,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 998-1005, June 2010. [23: Sahbani et al. 2012] Anis Sahbani, Sahar El-Khoury, and Philippe Bidaud, “An overview of 3D object grasp synthesis algorithms,” Robotics and Autonomous Systems, Vol. 60, No. 3, pp. 326-336, March 2012. [24: Chinellato et al. 2003] Eris Chinellato, Robert B. Fisher, Antonio Morales, and Angel P. del Pobil, “Ranking Planar Grasp Configurations For A Three-Finger Hand,” IEEE International Conference on Robotics and Automation, pp. 1133-1138, September 2003. [25: Miller et al. 2003] Andrew T. Miller, Steffen Knoop, Henrik I. Christensen, and Peter K. 
Allen, “Automatic Grasp Planning Using Shape Primitives,” IEEE International Conference on Robotics and Automation, pp. 1824-1829, September 2003. [26: Chinellato et al. 2005] Eris Chinellato, Antonio Morales, Robert B. Fisher, and Angel P. del Pobil, “Visual Quality Measures for Characterizing Planar Robot Grasps,” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), Vol. 35, No. 1, pp. 30-41, February 2005. [27: Goldfeder et al. 2007] Corey Goldfeder, Peter K. Allen, Claire Lackner, and Raphael Pelossof, “Grasp Planning via Decomposition Trees,” IEEE International Conference on Robotics and Automation, pp. 4679-4684, April 2007. [28: Klingbeil et al. 2011] Ellen Klingbeil, Deepak Rao, Blake Carpenter, Varun Ganapathi, Andrew Y. Ng, and Oussama Khatib, “Grasping with Application to an Autonomous Checkout Robot,” IEEE International Conference on Robotics and Automation, pp. 2837-2844, May 2011. [29: Kim et al. 2013] Junggon Kim, Kunihiro Iwamoto, James J. Kuffner, Yasuhiro Ota, and Nancy S. Pollard, “Physically Based Grasp Quality Evaluation Under Pose Uncertainty,” IEEE Transactions on Robotics, Vol. 29, No. 6, pp. 1424-1439, December 2013. [30: Pelossof et al. 2004] Raphael Pelossof, Andrew Miller, Peter Allen, and Tony Jebara, “An SVM Learning Approach to Robotic Grasping,” IEEE International Conference on Robotics and Automation, pp. 3512-3518, April 26- May 1, 2004. [31: Detry et al. 2013] Renaud Detry, Carl Henrik Ek, Marianna Madry, and Danica Kragic, “Learning a Dictionary of Prototypical Grasp-predicting Parts from Grasping Experience,” IEEE International Conference on Robotics and Automation, pp. 601-608, May 2013. [32: Lenz et al. 2015] Ian Lenz, Honglak Lee, and Ashutosh Saxena, “Deep learning for detecting robotic grasps,” The International Journal of Robotics Research, Vol. 34, No. 4-5, pp. 705-724, March 2015. [33: Kappler et al. 
2015] Daniel Kappler, Jeannette Bohg, and Stefan Schaal, “Leveraging Big Data for Grasp Planning,” IEEE International Conference on Robotics and Automation, pp. 4304-4311, May 2015. [34: Pinto Gupta 2016] Lerrel Pinto and Abhinav Gupta, “Supersizing Self-supervision: Learning to Grasp from 50K Tries and 700 Robot Hours,” IEEE International Conference on Robotics and Automation, pp. 3406-3413, May 2016. [35: Mahler et al. 2017] Jeffrey Mahler, Jacky Liang, Sherdil Niyaz, Michael Laskey, Richard Doan, Xinyu Liu, Juan Aparicio, and Ken Goldberg, “Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics,” in Proceedings of Robotics: Science and Systems, July 2017. [36: Laganière 2011] Robert Laganière, “OpenCV 2 Computer Vision Application Programming Cookbook,” 1st ed., Birmingham, UK, Packt, 2011. [37: Zhang 2000] Zhengyou Zhang, “A Flexible New Technique for Camera Calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 11, pp. 1330-1334, November 2000. [38: Li et al. 2010] Aiguo Li, Lin Wang, and Defeng Wu, “Simultaneous robot-world and hand-eye calibration using dual-quaternions and Kronecker product,” International Journal of the Physical Sciences, Vol. 5, No. 10, pp. 1530-1536, September 2010. [39: Zhuang et al. 1994] Hanqi Zhuang, Zvi S. Roth, and R. Sudhakar, “Simultaneous Robot/World and Tool/Flange Calibration by Solving Homogeneous Transformation Equations of the form AX=YB,” IEEE Transactions on Robotics and Automation, Vol. 10, No. 4, pp. 549-554, August 1994. [40: Rusu 2009] Radu Bogdan Rusu, “Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments,” Ph.D. dissertation, Technische Universität München, Germany, 2009. [41: Berkmann Caelli 1994] Jens Berkmann and Terry Caelli, “Computation of Surface Geometry and Segmentation Using Covariance Techniques,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 16, No. 11, pp.
1114-1116, November 1994. [42: Bentley 1975] Jon Louis Bentley, “Multidimensional Binary Search Trees Used for Associative Searching,” Communications of the ACM, Vol. 18, No. 9, pp. 509-517, September 1975. [43: Holzer et al. 2012] Stefan Holzer, Radu Bogdan Rusu, Michael Dixon, Suat Gedikli, and Nassir Navab, “Adaptive Neighborhood Selection for Real-Time Surface Normal Estimation from Organized Point Cloud Data Using Integral Images,” IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2684-2689, October 2012. [44: Trevor et al. 2013] Alexander J. B. Trevor, Suat Gedikli, Radu B. Rusu, and Henrik I. Christensen, “Efficient Organized Point Cloud Segmentation with Connected Components,” in 3rd Workshop on Semantic Perception, Mapping and Exploration, May 2013. [45: Slabaugh 1999] Gregory G. Slabaugh, “Computing Euler angles from a rotation matrix,” Retrieved August, Vol. 6, No. 2000, pp. 39-63, August 1999. [46: Squizzato 2012] Stefano Squizzato, “Robot bin picking: 3D pose retrieval based on Point Cloud Library,” University of Padova, December 2012. [47: ten Pas Platt 2015] Andreas ten Pas and Robert Platt, “Using Geometry to Detect Grasp Poses in 3D Point Clouds,” in Proceedings of the International Symposium on Robotics Research, Vol. 1, pp. 307-324, September 2015. [48: Gualtieri et al. 2016] Marcus Gualtieri, Andreas ten Pas, Kate Saenko, and Robert Platt, “High precision grasp pose detection in dense clutter,” IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 598-605, October 2016. [49: Chen Burdick 1993] I-Ming Chen and Joel W. Burdick, “Finding Antipodal Point Grasps on Irregularly Shaped Objects,” IEEE Transactions on Robotics and Automation, Vol. 9, No. 4, pp. 507-512, August 1993. [50: Nguyen 1988] Van-Duc Nguyen, “Constructing Force-Closure Grasps,” The International Journal of Robotics Research, Vol. 7, No. 3, pp. 3-16, June 1988. [51: Zapata-Impata et al. 
2019] Brayan S Zapata-Impata, Pablo Gil, Jorge Pomares, and Fernando Torres, “Fast geometry-based computation of grasping points on three-dimensional point clouds,” International Journal of Advanced Robotic Systems, Vol. 16, No. 1, pp. 1-18, January 2019. [52: Tabb Yousef 2017] Amy Tabb and Khalil M. Ahmad Yousef, “Solving the robot-world hand-eye(s) calibration problem with iterative methods,” Machine Vision and Applications, Vol. 28, No. 5-6, pp. 569-590, August 2017. [53: Garrido-Jurado et al. 2014] S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez, “Automatic generation and detection of highly reliable fiducial markers under occlusion,” Pattern Recognition, Vol. 47, No. 6, pp. 2280-2292, June 2014. [54: Li et al. 2018] Jianhua Li, Siyuan Dong, and Edward Adelson, “Slip Detection with Combined Tactile and Visual Information,” IEEE International Conference on Robotics and Automation, pp. 7772-7777, May 2018. [55: Yu et al. 2018] Kuan-Ting Yu and Alberto Rodriguez, “Realtime State Estimation with Tactile and Visual Sensing. Application to Planar Manipulation.,” IEEE International Conference on Robotics and Automation, pp. 7778-7785, May 2018. [56: Ackerman 2016] Evan Ackerman (2016, Mar.). How Google Wants to Solve Robotic Grasping by Letting Robots Learn for Themselves. IEEE Spectrum [Online]. Available: https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/google-large-scale-robotic-grasping-project [57: Maths - Rotation conversions] [Online]. Available: https://www.euclideanspace.com/maths/geometry/rotations/conversions/ [58: Intel, Inc.] Intel, Inc. Intel Realsense Depth Camera D435. [Online]. Available: https://www.intelrealsense.com/depth-camera-d435/ [59: OpenCV, Org.] OpenCV, Org. Open Source Computer Vision Library. [Online]. Available: https://opencv.org/ [60: PointClouds, Org.] PointClouds, Org. Point Cloud Library. [Online]. Available: http://pointclouds.org/ [61: Intel, Inc.] Intel, Inc. 
Intel RealSense SDK 2.0. [Online]. Available: https://www.intelrealsense.com/sdk-2/ | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/50775 | - |
dc.description.abstract | 本篇論文提出了一個工業機器人隨機取物系統,可用於拾起任意擺放的物體,物體姿態估測和抓取偵測的方法皆基於幾何,此研究主要的貢獻在於改善了物體姿態估測方法的可靠性,抓取偵測方法所需的處理時間可以減少和收斂,以及呈現了一個穩健的機器人隨機取物系統。 在藉由一個三維視覺感測器偵測是否有任何物體在工作空間中之後,採用了一個基於點對特徵的物體姿態估測方法,藉由一種有效率的投票機制可以獲得一些姿態,然後,在本篇論文中提出的驗證函式可以用來驗證估測姿態的正確性。 接著藉由在本篇論文中提出的排名函式判定整體最佳的抓取姿態可以偵測最適當的抓取姿態,為了使任意形狀的物體可以被成功拾起,目標物體和夾具的幾何性質需要被分析,並提出了一個基於目標物體的局部表面幾何之穩定性量測,除此之外,必須確保夾具與任何物體之間不會有碰撞,採用的碰撞偵測方法是基於表面法向量,然後,此系統可以規劃如何使機械手臂能夠到達抓取姿態並抓取目標物體,在機械手臂移動的期間可以最佳化目標物體的姿態以減少量測誤差的負面影響,並且如果有必要的話,可以連帶調整抓取姿態。 最後,本篇論文呈現一些實驗結果以證實提出方法的可行性和分析提出系統的性能,提出的系統可以藉由一個備有一台深度相機的工業機械手臂實作出來。 | zh_TW |
dc.description.abstract | In this thesis, an industrial robotic random picking system for picking randomly placed objects is proposed. The methods for object pose estimation and grasp detection are both based on geometry. The main contributions of this research are that the reliability of the object pose estimation method is improved, that the processing time required by the proposed grasp detection method is reduced and converges, and that a robust robotic random picking system is presented. After a 3D vision sensor detects whether any object is present in the workspace, a method based on point pair features is adopted for object pose estimation. Using an efficient voting scheme, a number of candidate poses can be obtained. A validation function proposed in this thesis can then be used to verify the correctness of the estimated poses. Next, the most appropriate grasp pose is detected by determining the overall optimal grasp pose with the ranking function proposed in this thesis. So that objects of arbitrary shape can be picked successfully, the geometric properties of the target object and the gripper are analyzed, and a stability measurement based on the local surface geometry of the target object is proposed. In addition, it must be ensured that there is no collision between the gripper and any object; the adopted collision detection method is based on surface normals. The system then plans how the robotic arm can reach the grasp pose and grasp the target object. During the motion of the robotic arm, the pose of the target object can be refined to reduce the negative effects of measurement errors, and the grasp pose can be adjusted accordingly if necessary. Finally, experimental results are presented to verify the feasibility of the proposed methods and to analyze the performance of the proposed system, which is implemented with an industrial robotic arm equipped with a depth camera. | en |
dc.description.provenance | Made available in DSpace on 2021-06-15T12:57:40Z (GMT). No. of bitstreams: 1 U0001-0502202117100400.pdf: 9452080 bytes, checksum: f378bf1fca962af4f1df78f756bd79a8 (MD5) Previous issue date: 2021 | en |
dc.description.tableofcontents | 摘要 i ABSTRACT iii CONTENTS v LIST OF FIGURES vii LIST OF TABLES xii Chapter 1 Introduction 1 1.1 Motivation 1 1.2 Problem Formulation 3 1.3 Contributions 6 1.4 Organization of the Thesis 7 Chapter 2 Background and Literature Survey 8 2.1 Bin Picking 8 2.2 Object Pose Estimation for Robotic Grasping 11 2.3 Grasp Detection for Robotic Grasping 14 Chapter 3 Related Algorithms 18 3.1 Pinhole Camera Model 18 3.2 Hand-Eye Calibration 20 3.3 Surface Normal Estimation 24 3.3.1 Surface Normal Estimation Using Covariance Techniques 24 3.3.2 Surface Normal Estimation Using an Integral Image 27 Chapter 4 Object Detection and Pose Estimation 33 4.1 Object Detection 35 4.1.1 Conversion of Depth images into Point Clouds 36 4.1.2 Point Cloud Segmentation 39 4.2 Global Model Description 45 4.2.1 Point Pair Feature 47 4.2.2 Creation of Global Model Description 48 4.3 Object Pose Estimation 51 4.3.1 Voting Scheme 54 4.3.2 Pose Clustering 57 4.4 Validation 61 4.4.1 Rate of Inliers Validation Function 61 4.4.2 Reachable Points 63 Chapter 5 Grasp Detection and Planning 65 5.1 Grasp Detection 66 5.1.1 Antipodal Points for Grasping with a Parallel Jaw Gripper 73 5.1.2 Calculation of Possible Grasp Poses 76 5.1.3 Stability Measurement Based on Local Surface Geometry 81 5.1.4 Evaluation of Possible Grasp Poses 85 5.2 Grasp Planning 89 Chapter 6 Experimental Results and Analysis 95 6.1 System Overview 95 6.1.1 Hardware Setup 95 6.1.2 Software Platform 98 6.1.3 System Architecture 99 6.2 Hand-Eye Calibration and Evaluation 103 6.2.1 Rotation Error 106 6.2.2 Translation Error 109 6.2.3 Reprojection Root Mean Squared Error 111 6.3 Evaluation of Object Pose Estimation 117 6.3.1 Sampling Rate 121 6.3.2 Number of Discrete Angles 129 6.4 Processing Time of Grasp Detection 137 6.5 Evaluation of Robotic Random Picking System 144 6.5.1 Picking Objects Placed Separately 147 6.5.2 Picking Objects Placed Closely 155 6.5.3 Picking Objects Piled Together 163 6.6 Summary 171 
6.6.1 Hand-Eye Calibration 171 6.6.2 Object Pose Estimation 172 6.6.3 Grasp Detection 173 6.6.4 Robotic Random Picking System 174 Chapter 7 Conclusions and Future Works 176 7.1 Conclusions 176 7.2 Future Works 178 References 180 | |
dc.language.iso | en | |
dc.title | 基於幾何之物體姿態估測與抓取偵測應用於備有三維視覺感測器之工業機器人隨機取物系統 | zh_TW |
dc.title | Geometry-Based Object Pose Estimation and Grasp Detection for Industrial Robotic Random Picking Systems Equipped with a 3D Vision Sensor | en |
dc.type | Thesis | |
dc.date.schoolyear | 109-1 | |
dc.description.degree | 碩士 (Master) | |
dc.contributor.oralexamcommittee | 李後燦 (Hou-Tsan Lee), 黃正民 (Cheng-Ming Huang), 許志明 (Chih-Ming Hsu) | |
dc.subject.keyword | 隨機取物,立體視覺,幾何,表面法向量,物體姿態估測,點對特徵,投票,抓取偵測,局部表面幾何 | zh_TW |
dc.subject.keyword | Random Picking, Stereo Vision, Geometry, Surface Normal, Object Pose Estimation, Point Pair Feature, Voting, Grasp Detection, Local Surface Geometry | en |
dc.relation.page | 186 | |
dc.identifier.doi | 10.6342/NTU202100608 | |
dc.rights.note | 有償授權 (paid authorization) | |
dc.date.accepted | 2021-02-08 | |
dc.contributor.author-college | 電機資訊學院 (College of Electrical Engineering and Computer Science) | zh_TW |
dc.contributor.author-dept | 電機工程學研究所 (Graduate Institute of Electrical Engineering) | zh_TW |
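The point-pair-feature approach the abstract adopts for pose estimation (cited above as [22: Drost et al. 2010]) describes a model by hashing a four-dimensional feature computed for pairs of oriented surface points. A minimal sketch of that feature and its discretization is given below; the function names and the step sizes `d_step` and `a_step` are illustrative assumptions, not values taken from the thesis:

```python
import math

def angle_between(u, v):
    """Angle in radians between two 3D vectors (clamped for numerical safety)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (norm_u * norm_v))))

def point_pair_feature(p1, n1, p2, n2):
    """F(m1, m2) = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)),
    where d = p2 - p1 and n1, n2 are the surface normals at p1, p2."""
    d = tuple(b - a for a, b in zip(p1, p2))
    dist = math.sqrt(sum(c * c for c in d))
    return (dist,
            angle_between(n1, d),
            angle_between(n2, d),
            angle_between(n1, n2))

def quantize(feature, d_step=0.005, a_step=math.radians(12)):
    """Discretize a feature into a hash key; model pairs falling into the
    same bin are grouped together in the global model description."""
    dist, a1, a2, a3 = feature
    return (int(dist / d_step), int(a1 / a_step),
            int(a2 / a_step), int(a3 / a_step))
```

At matching time, sampled scene pairs are quantized the same way and looked up in the model's hash table; each hit votes for a candidate object pose, and the clustered high-vote poses are the estimates that the thesis's validation function then checks.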
Appears in Collections: | 電機工程學系 (Department of Electrical Engineering)
Files in This Item:
File | Size | Format | |
---|---|---|---|
U0001-0502202117100400.pdf (currently not authorized for public access) | 9.23 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
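The grasp detection summarized in the abstract rests on antipodal contact points for a parallel-jaw gripper (Section 5.1.1 of the table of contents; cf. [50: Nguyen 1988]). A hedged sketch of the classical antipodal test under Coulomb friction follows; `friction_angle` is an assumed parameter, and the thesis's own criterion may differ in detail:

```python
import math

def angle_between(u, v):
    """Angle in radians between two 3D vectors (clamped for numerical safety)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (norm_u * norm_v))))

def is_antipodal(p1, n1, p2, n2, friction_angle=math.radians(15)):
    """Force-closure test for a parallel-jaw grasp: each outward contact
    normal must lie inside the friction cone around the line connecting
    the two contact points."""
    d = [b - a for a, b in zip(p1, p2)]   # closing line, p1 -> p2
    neg_d = [-c for c in d]
    return (angle_between(n1, neg_d) <= friction_angle and
            angle_between(n2, d) <= friction_angle)
```

Candidate contact pairs passing this test would still be ranked by a stability measure and checked for gripper collisions, as the abstract describes.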