Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/53709
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 羅仁權 | |
dc.contributor.author | Chia-Wen Kuo | en |
dc.contributor.author | 郭佳文 | zh_TW |
dc.date.accessioned | 2021-06-16T02:28:03Z | - |
dc.date.available | 2015-08-06 | |
dc.date.copyright | 2015-08-06 | |
dc.date.issued | 2015 | |
dc.date.submitted | 2015-08-03 | |
dc.identifier.citation | [1] Kinova JACO2 is available at http://www.kinovarobotics.com
[2] KUKA LBR iiwa is available at http://www.kuka-robotics.com/en/products/industrial_robots/sensitiv/
[3] R. Bischoff, J. Kurth, G. Schreiber, R. Koeppe, A. Albu-Schäffer, A. Beyer, et al., "The KUKA-DLR Lightweight Robot arm - a new reference platform for robotics research and manufacturing," in Robotics (ISR), 2010 41st International Symposium on and 2010 6th German Conference on Robotics (ROBOTIK), 2010, pp. 1-8.
[4] C. Loughlin, A. Albu-Schäffer, S. Haddadin, C. Ott, A. Stemmer, T. Wimböck, et al., "The DLR lightweight robot: design and control concepts for robots in human environments," Industrial Robot: An International Journal, vol. 34, pp. 376-385, 2007.
[5] Rethink Robotics Baxter is available at http://www.rethinkrobotics.com/baxter/
[6] C. Fitzgerald, "Developing Baxter," in Technologies for Practical Robot Applications (TePRA), 2013 IEEE International Conference on, 2013, pp. 1-6.
[7] ABB YuMi is available at http://new.abb.com/products/robotics/yumi
[8] Industrial Robots: 5 Most Popular Applications: http://blog.robotiq.com/bid/52886/Industrial-robots-5-most-popular-applications
[9] A. T. Miller and P. K. Allen, "Graspit! A versatile simulator for robotic grasping," Robotics & Automation Magazine, IEEE, vol. 11, pp. 110-122, 2004.
[10] A. T. Miller, S. Knoop, H. Christensen, and P. K. Allen, "Automatic grasp planning using shape primitives," in Robotics and Automation, 2003. Proceedings. ICRA '03. IEEE International Conference on, 2003, pp. 1824-1829.
[11] K. B. Shimoga, "Robot grasp synthesis algorithms: A survey," The International Journal of Robotics Research, vol. 15, pp. 230-266, 1996.
[12] Z. Zhang, "Microsoft Kinect sensor and its effect," MultiMedia, IEEE, vol. 19, pp. 4-10, 2012.
[13] Microsoft Kinect specification is available at https://msdn.microsoft.com/en-us//library/hh438998.aspx
[14] Reflexxes Online Trajectory Generation is available at http://reflexxes.ws
[15] Yun-Hsuan Tsai, "7-DOF redundant robot manipulator with multimodal intuitive teach and play system," Master's Thesis, Electrical Engineering, National Taiwan University, 2014.
[16] R. S. Hartenberg and J. Denavit, Kinematic Synthesis of Linkages: McGraw-Hill, 1964.
[17] Robotiq 3-finger gripper is available at http://robotiq.com/products/industrial-robot-hand/
[18] PREEMPT_RT realtime Linux patch is available at https://rt.wiki.kernel.org/index.php/Main_Page
[19] Ubuntu Linux is available at http://www.ubuntu.com/index_roadshow
[20] Linux kernel is available at https://www.kernel.org
[21] PISO-DA8U is available at http://www.icpdas.com/root/product/solutions/pc_based_io_board/pci/pio-da4.html
[22] PISO-Encoder600 is available at http://www.icpdas.com/root/product/solutions/pc_based_io_board/motion_control_boards/piso_encoder600u.html
[23] EtherCAT is available at http://www.ethercat.org/default.htm
[24] D. Jansen and H. Buttner, "Real-time Ethernet: the EtherCAT solution," Computing and Control Engineering, vol. 15, pp. 16-21, 2004.
[25] VxWorks is available at http://www.windriver.com/products/vxworks/
[26] Windows RTX is available at http://www.intervalzero.com
[27] RTAI is available at https://www.rtai.org
[28] P. Mantegazza, E. Dozio, and S. Papacharalambous, "RTAI: Real time application interface," Linux Journal, vol. 2000, p. 10, 2000.
[29] Xenomai is available at http://xenomai.org
[30] PREEMPT_RT kernel configuration is available at https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO
[31] cyclictest is available at https://rt.wiki.kernel.org/index.php/Cyclictest
[32] LinuxCNC is available at http://www.linuxcnc.org
[33] G. Grunwald, G. Schreiber, A. Albu-Schäffer, and G. Hirzinger, "Programming by touch: the different way of human-robot interaction," Industrial Electronics, IEEE Transactions on, vol. 50, pp. 659-666, 2003.
[34] F. Libera, T. Minato, I. Fasel, H. Ishiguro, E. Menegatti, and E. Pagello, "Teaching by touching: an intuitive method for development of humanoid robot motions," in Humanoid Robots, 2007 7th IEEE-RAS International Conference on, 2007, pp. 352-359.
[35] T. Kröger, On-line Trajectory Generation in Robotic Systems: Basic Concepts for Instantaneous Reactions to Unforeseen (Sensor) Events, vol. 58: Springer, 2010.
[36] T. Kröger, "Online trajectory generation: straight-line trajectories," Robotics, IEEE Transactions on, vol. 27, pp. 1010-1016, 2011.
[37] T. Kröger, "On-line trajectory generation: Nonconstant motion constraints," in Robotics and Automation (ICRA), 2012 IEEE International Conference on, 2012, pp. 2048-2054.
[38] T. Kröger, "Opening the door to new sensor-based robot applications—The Reflexxes Motion Libraries," in Robotics and Automation (ICRA), 2011 IEEE International Conference on, 2011, pp. 1-4.
[39] Point Cloud Library (PCL) is available at http://pointclouds.org
[40] R. B. Rusu and S. Cousins, "3D is here: Point Cloud Library (PCL)," in Robotics and Automation (ICRA), 2011 IEEE International Conference on, 2011, pp. 1-4.
[41] A. Aldoma, Z.-C. Marton, F. Tombari, W. Wohlkinger, C. Potthast, B. Zeisl, et al., "Point cloud library," IEEE Robotics & Automation Magazine, vol. 1070, 2012.
[42] W. Wohlkinger, A. Aldoma, R. B. Rusu, and M. Vincze, "3DNet: Large-scale object class recognition from CAD models," in Robotics and Automation (ICRA), 2012 IEEE International Conference on, 2012, pp. 5384-5391.
[43] A. Aldoma, M. Vincze, N. Blodow, D. Gossow, S. Gedikli, R. B. Rusu, et al., "CAD-model recognition and 6DOF pose estimation using 3D cues," in Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, 2011, pp. 585-592.
[44] OpenGL is available at https://www.opengl.org
[45] D. Shreiner and B. T. K. O. A. W. Group, OpenGL Programming Guide: The Official Guide to Learning OpenGL, Versions 3.0 and 3.1: Pearson Education, 2009.
[46] PCL pass through filter is available at http://pointclouds.org/documentation/tutorials/passthrough.php
[47] PCL down-sampling is available at http://pointclouds.org/documentation/tutorials/voxel_grid.php
[48] PCL statistical outlier removal is available at http://pointclouds.org/documentation/tutorials/statistical_outlier.php
[49] PCL RANSAC plane segmentation is available at http://pointclouds.org/documentation/tutorials/planar_segmentation.php
[50] PCL Euclidean clustering is available at http://www.pointclouds.org/documentation/tutorials/cluster_extraction.php
[51] R. B. Rusu, G. Bradski, R. Thibaux, and J. Hsu, "Fast 3D recognition and pose using the viewpoint feature histogram," in Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, 2010, pp. 2155-2162.
[52] R. B. Rusu, Z. C. Marton, N. Blodow, and M. Beetz, "Persistent point feature histograms for 3D point clouds," in Proc. 10th Int. Conf. Intel. Autonomous Syst. (IAS-10), Baden-Baden, Germany, 2008, pp. 119-128.
[53] A. Aldoma, M. Vincze, N. Blodow, D. Gossow, S. Gedikli, R. B. Rusu, et al., "CAD-model recognition and 6DOF pose estimation using 3D cues," in Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, 2011, pp. 585-592.
[54] A. Aldoma, F. Tombari, R. B. Rusu, and M. Vincze, OUR-CVFH – Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram for Object Recognition and 6DOF Pose Estimation: Springer, 2012.
[55] F. Tombari, S. Salti, and L. Di Stefano, "Unique signatures of histograms for local surface description," in Computer Vision – ECCV 2010, ed: Springer, 2010, pp. 356-369.
[56] W. Wohlkinger and M. Vincze, "Ensemble of shape functions for 3D object classification," in Robotics and Biomimetics (ROBIO), 2011 IEEE International Conference on, 2011, pp. 2987-2992.
[57] M. Muja and D. G. Lowe, "Scalable nearest neighbor algorithms for high dimensional data," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 36, pp. 2227-2240, 2014.
[58] P. J. Besl and N. D. McKay, "Method for registration of 3-D shapes," in Robotics-DL tentative, 1992, pp. 586-606.
[59] Z. Zhang, "Iterative point matching for registration of free-form curves and surfaces," International Journal of Computer Vision, vol. 13, pp. 119-152, 1994.
[60] C. Papazov and D. Burschka, "An efficient RANSAC for 3D object recognition in noisy and occluded scenes," in Computer Vision – ACCV 2010, ed: Springer, 2011, pp. 135-148.
[61] A. Aldoma, F. Tombari, L. Di Stefano, and M. Vincze, "A global hypotheses verification method for 3D object recognition," in Computer Vision – ECCV 2012, ed: Springer, 2012, pp. 511-524. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/53709 | - |
dc.description.abstract | A major bottleneck in today's factory automation is machine vision: robots cannot, like workers on a production line, quickly and accurately recognize complex, arbitrarily placed parts arriving on a conveyor. If an object cannot be recognized in the first place, there is no way to grasp it for subsequent operations such as assembly, welding, or glue dispensing.
To cope with this, most manufacturers today fix the type and position of each part, dropping the step of recognizing objects in the scene and relying instead on the high precision of the robot arm to move and operate between known positions. As a result, any deviation in part placement leads to disastrous consequences, and considerable cost is spent on positioning parts precisely. This research therefore studies how to integrate 3D object recognition into such a system, so that the robot arm can automatically recognize the object to be manipulated in the scene, combined with intuitive teaching so that the arm can be taught in advance how to grasp the recognized object correctly and stably. The two most important modules are object recognition and the control of the manipulator itself. In this research, we successfully implemented an integrated system for automatic object recognition and grasping, tested recognition and grasping extensively, and identified the bottlenecks of the overall system so that later researchers can continue developing on this basis. | zh_TW |
dc.description.abstract | One of the bottlenecks for manufacturing automation is machine vision. Robots are not able to recognize randomly oriented components coming from the assembly line as quickly and accurately as human operators do. Once this very first step fails, all subsequent operations such as picking up the component, assembly, welding, and painting become impossible.
Currently, manufacturers work around this problem by fixing the type and position of each component. The robot arm then performs its task and manipulates the component based on this precondition. This approach omits the fragile object recognition step entirely and relies solely on the precision and repeatability of the robot arm. Once there is a pose error in setting up the component, a disastrous consequence may occur, and the whole manufacturing process might shut down because of this minor fault. The objective of this research is to integrate 3D model-based object recognition into the system so that the robot arm can recognize the component in the scene. Furthermore, teaching by touching is integrated to let human operators teach the robot how to pick up the recognized components stably. The two most important modules for the success of this integrated system are the 3D object recognition system and the manipulator itself. In this research, we successfully implement an integrated system for recognizing and fetching randomly oriented objects. We also evaluate the system extensively and identify its bottlenecks, hoping that this can open a road toward robot-integrated manufacturing automation and become a basis for future research. | en |
dc.description.provenance | Made available in DSpace on 2021-06-16T02:28:03Z (GMT). No. of bitstreams: 1 ntu-104-R02921005-1.pdf: 8196254 bytes, checksum: 0e8f3c2fb2b279964287c2cf4a780b9e (MD5) Previous issue date: 2015 | en |
dc.description.tableofcontents | Thesis Committee Certification #
Acknowledgements i
Chinese Abstract i
ABSTRACT i
TABLE OF CONTENTS ii
LIST OF FIGURES i
LIST OF TABLES i
Chapter 1 Introduction 1
1.1 History 1
1.1.1 Traditional industrial robot arms 1
1.1.2 Lightweight collaborative robot 2
1.1.3 Dual arm robot 3
1.2 Industrial Applications 4
1.2.1 Material handling 5
1.2.2 Welding 6
1.2.3 Assembly 6
1.3 Challenges 7
1.3.1 Machine vision 7
1.3.2 Object fetching 8
1.4 Research Objectives 8
1.5 Thesis Structure 9
Chapter 2 Scenario 10
2.1 Experimental Setup 10
2.1.1 Scene 10
2.1.2 Tested parts 10
2.2 Procedures 11
2.3 Preconditions 11
2.3.1 Structured environment 12
2.3.2 Finite set of known objects 12
2.3.3 Reliable object segmentation 13
2.3.4 No mutual occlusion 13
2.3.5 Rigid bodies 14
Chapter 3 System Architecture 15
3.1 Generalized Robot Fetching Architecture 15
3.2 Proposed Robot Fetching Architecture 16
3.3 Key Modules 17
3.3.1 2.5D sensors 17
3.3.2 Object recognition 21
3.3.3 Operation database 22
3.3.4 Trajectory interpolator 22
Chapter 4 Manipulator 23
4.1 Mechanism 23
4.1.1 D-H parameters 24
4.1.2 Transmission and actuator 26
4.1.3 Gripper 28
4.2 Control Architecture 30
4.3 Software Architecture 32
4.3.1 Motivation 33
4.3.2 Hardware layer 33
4.3.3 Basic utility 34
4.3.4 Application layer 34
4.3.5 Timer 35
4.4 Realtime Linux 36
4.4.1 Introduction 36
4.4.2 Dual Kernel 37
4.4.3 Kernel Patch 38
4.4.4 Kernel Configuration 39
4.4.5 Test Results 40
4.5 Manipulator Functionalities 42
4.5.1 Teaching by Touching 42
4.5.2 Online Trajectory Generation 46
Chapter 5 Object Recognition 50
5.1 Point Cloud Library (PCL) 50
5.2 Global Recognition Pipeline 51
5.3 Database Generation 52
5.4 Pre-Processing 54
5.4.1 ROI segmentation 55
5.4.2 Down-sampling 55
5.4.3 Denoising 56
5.4.4 Plane segmentation 57
5.4.5 Clustering 57
5.5 Object type recognition 58
5.5.1 Global descriptors 58
5.5.2 Descriptor estimation 59
5.5.3 Matching 64
5.6 Object pose estimation 65
5.7 Post-Processing 66
5.7.1 Iterative closest points (ICP) 66
5.7.2 Hypothesis verification 67
Chapter 6 Operation Database 68
6.1 Database Generation 68
6.1.1 Grasps 68
6.1.2 Trajectory via-points 71
6.2 Database Structure 72
6.3 Operation Selection 73
Chapter 7 Experiments 74
7.1 Object Recognition 74
7.1.1 Precision 74
7.1.2 Success rate 75
7.1.3 Time consumption 75
7.2 Object Fetching 75
7.2.1 Grasp selection 76
7.2.2 Grasp precision 76
Chapter 8 Results and Discussions 77
8.1 Object Recognition 77
8.1.1 Results 77
8.1.2 Precision 77
8.1.3 Success rate 78
8.1.4 Time consumption 79
8.2 Object Fetching 80
8.2.1 Results 80
8.2.2 Grasp selection 81
8.2.3 Grasp precision 81
Chapter 9 Conclusion and Future Works 84
REFERENCE 85
VITA 91 | |
dc.language.iso | en | |
dc.title | 機器人整合3D物體辨識與夾取系統應用於工廠自動化 | zh_TW |
dc.title | Robot Integrated 3D Object Recognition and Fetching System for Factory Automation | en |
dc.type | Thesis | |
dc.date.schoolyear | 103-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 張帆人,陳金聖 | |
dc.subject.keyword | factory automation, manufacturing automation, robot arm, 3D object recognition, object fetching | zh_TW |
dc.subject.keyword | factory automation, manufacturing automation, 3D object recognition, object fetching | en |
dc.relation.page | 91 | |
dc.rights.note | Paid authorization | |
dc.date.accepted | 2015-08-03 | |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
dc.contributor.author-dept | Graduate Institute of Electrical Engineering | zh_TW |
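The thesis text itself is not included in this record, but the outline names the pre-processing steps of the recognition pipeline: down-sampling (5.4.2, cf. the PCL voxel-grid tutorial [47]) and denoising (5.4.3, cf. statistical outlier removal [48]). As an illustrative sketch only, not the thesis implementation, a minimal NumPy approximation of those two filters might look like:

```python
import numpy as np

def voxel_downsample(points, leaf_size):
    """Rough stand-in for PCL's VoxelGrid filter: all points falling into
    the same cubic voxel are replaced by their centroid."""
    keys = np.floor(points / leaf_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)             # normalize shape across numpy versions
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)          # accumulate per-voxel coordinate sums
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

def remove_statistical_outliers(points, k=8, std_ratio=1.0):
    """Rough stand-in for PCL's StatisticalOutlierRemoval: drop points whose
    mean distance to their k nearest neighbours exceeds the global mean of
    that quantity by more than std_ratio standard deviations."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    knn = np.sort(dists, axis=1)[:, 1:k + 1]  # skip the distance-to-self column
    mean_d = knn.mean(axis=1)
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d <= thresh]
```

Here `leaf_size`, `k`, and `std_ratio` play the roles of the leaf size and mean-k/stddev-multiplier parameters in the cited PCL tutorials; the real library additionally handles organized clouds, NaN points, and large clouds efficiently, all of which this sketch omits.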
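Section 5.7.1 of the outline refines the estimated pose with iterative closest points (ICP) [58][59]. A minimal point-to-point ICP sketch, again an assumption-laden illustration (brute-force nearest neighbours, SVD-based alignment) rather than the thesis code, could be:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form least-squares rotation R and translation t mapping the
    paired points src onto dst (Kabsch / SVD solution)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate brute-force nearest-neighbour matching
    with the closed-form alignment above; returns the accumulated (R, t)."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]       # closest dst point for each cur point
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Production implementations (e.g. `pcl::IterativeClosestPoint`) replace the O(N²) matching with a k-d tree, reject distant correspondences, and stop on a convergence criterion instead of a fixed iteration count.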
Appears in Collections: | Department of Electrical Engineering
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-104-1.pdf (currently not authorized for public access) | 8 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.