Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/2294
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 羅仁權(Ren C. Luo) | |
dc.contributor.author | Chun-Hao Liao | en |
dc.contributor.author | 廖俊豪 | zh_TW |
dc.date.accessioned | 2021-05-13T06:39:00Z | - |
dc.date.available | 2018-08-24 | |
dc.date.available | 2021-05-13T06:39:00Z | - |
dc.date.copyright | 2017-08-24 | |
dc.date.issued | 2017 | |
dc.date.submitted | 2017-08-17 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/2294 | - |
dc.description.abstract | A current bottleneck in factory automation is the mode of interaction between humans and machines during task execution. To recognize and grasp parts quickly on a production line, a robot must not only match human accuracy but also perceive changes in its environment. In current robotics research, object tracking is already a fundamental capability of robot manipulators in assembly-line work. Traditionally, a robot executes a task step by step; once the first step fails, the subsequent sub-steps become difficult to carry out.
At present, industry mostly addresses this problem with the aid of vision systems. Vision is one form of robotic perception: with its help, a robot can prepare in advance and complete its task more reliably. However, whenever an unexpected event occurs, a traditional fixed sequence of steps, lacking any dynamic update of its motions, not only terminates the task abruptly but can also harm the environment. Even with visual assistance, only some contingencies can be resolved. The solution likewise relies on robot vision. In the traditional scenario, objects on the conveyor are stationary and the robot grasps them on command. When the target starts moving along the production line, the task becomes considerably more complex, and a visual feedback system is an excellent solution. This thesis therefore presents a vision-based recognition, grasping, and tracking system in which every grasping pose is determined by the feedback system. The architecture consists of two parts, each with its own core algorithm for object tracking. Because of environmental constraints, the overall experimental conditions are described in detail. In addition, an improved algorithm records compensation values for the grasping pose during tracking, which benefits the next tracking cycle. | zh_TW |
dc.description.abstract | One of the bottlenecks in manufacturing automation is the interaction between robots and humans during tasks. Robots cannot yet recognize parts on the assembly line as quickly and accurately as human operators do. Moreover, conveyor tracking is one of the fundamental functions of a robot manipulator in current robotics. Once this very first step fails on the production line, the subsequent operations can hardly be completed.
Currently, manufacturers address this problem by placing vision in the environment, so that the robot arm can perform the task and manipulate the workpiece based on known preconditions. Anything that occurs afterwards is treated as an unpredictable circumstance within the task. Because the traditional process is static, such a circumstance may not only cancel the work but also have harmful effects on the environment, and the practicable solution is likewise based on a robot vision system. In the traditional scenario, the object keeps a stable pose on the assembly line and the manipulator follows commands to grasp it. When the target is moving on the production line, the task becomes considerably more sophisticated, and a distinct grasping method under visual servo control is an essential part of the solution. In this thesis, we propose a tracking strategy for moving objects in a robot-arm object-fetching system, combined with a distinct recognition algorithm. In addition, the grasping pose of the robot arm is corrected by a visual feedback system. The system is separated into two parts, eye-to-hand and eye-in-hand, each discussed in detail and each with its own core algorithm for completing industrial tracking and fetching tasks. Because of limitations imposed by the environment, the working conditions are also illustrated. The grasping pose for each type of part is adjusted by tracking and modification algorithms. | en |
dc.description.provenance | Made available in DSpace on 2021-05-13T06:39:00Z (GMT). No. of bitstreams: 1 ntu-106-R03921101-1.pdf: 4500448 bytes, checksum: c00e05344b146eae3003e08ad1645b8f (MD5) Previous issue date: 2017 | en |
dc.description.tableofcontents | Acknowledgements i
Chinese Abstract ii
ABSTRACT iii
CONTENTS v
LIST OF FIGURES viii
LIST OF TABLES xi
Chapter 1 Introduction 1
1.1 History 1
1.1.1 Traditional industrial robot arms 1
1.1.2 Lightweight payload robot arms 2
1.2 Industrial Applications 4
1.2.1 Object fetching 5
1.2.2 Assembly 7
1.3 Challenges 7
1.3.1 Object recognition 7
1.3.2 Robot vision system 8
1.4 Thesis Structure 10
Chapter 2 Scenario 11
2.1 Experimental Setup 11
2.1.1 Scene 11
2.1.2 Faced problems 12
2.2 Procedures 12
2.3 Preconditions 14
2.3.1 Structured environment 15
2.3.2 Objects description 15
Chapter 3 System Architecture 16
3.1 Generalized Robot Fetching Architecture 16
3.2 Specialized Robot Fetching Architecture 17
3.3 Modified Robot Fetching Architecture 19
Chapter 4 Manipulator 21
4.1 Mechanism 21
4.1.1 D-H parameters 22
4.1.2 Transmission and actuator 25
4.1.3 Gripper 27
4.2 Control Architecture 30
4.3 Software Architecture 31
4.3.1 Motivation 32
4.3.2 Hardware Structure 32
4.3.3 Basic Utility 33
4.3.4 Application Layer 33
4.3.5 Timer 34
4.4 Manipulator Functionalities 35
4.4.1 Intuitive teaching by touching 35
4.4.2 Online trajectory generation 37
Chapter 5 Object Recognition 41
5.1 Point Cloud Library 41
5.2 Database Generation 42
5.3 Kinect Calibration 43
5.4 Object type and pose recognition 45
Chapter 6 Object Tracking Strategy 48
6.1 Problem Statement 48
6.2 Tracking Algorithm 49
6.2.1 Image segmentation 50
6.2.2 Position localization 51
6.2.3 Position calibration 52
6.3 Modified Grasping 52
6.3.1 Moving calibration 53
6.3.2 Pose modification 54
6.3.3 SVM database 56
Chapter 7 Experimental Results and Discussion 66
7.1 Object Recognition and Fetching 66
7.2 Object Tracking 68
7.3 Modified Grasping 72
7.3.1 Object pose modification 72
7.3.2 Classifier results 73
Chapter 8 Conclusions, Contributions and Future Works 77
REFERENCE 78
VITA 84 | |
dc.language.iso | en | |
dc.title | 基於視覺伺服控制之機器人輸送帶追蹤技術及3D多異種物件抓取技術 | zh_TW |
dc.title | Visual Servo Control Based Robotic Conveyor Tracking and Dynamic 3D Heterogeneous Object Fetching | en |
dc.type | Thesis | |
dc.date.schoolyear | 105-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 張帆人(Fan-Ren Chang),顏炳郎(Ping-Lang Yen) | |
dc.subject.keyword | 機器人視覺系統,物件抓取,追蹤系統,工廠自動化, | zh_TW |
dc.subject.keyword | robotic vision system, object fetching, tracking system, factory automation | en |
dc.relation.page | 84 | |
dc.identifier.doi | 10.6342/NTU201703382 | |
dc.rights.note | Authorized (open access worldwide) | |
dc.date.accepted | 2017-08-17 | |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
dc.contributor.author-dept | Graduate Institute of Electrical Engineering | zh_TW |
Appears in Collections: | Department of Electrical Engineering
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-106-1.pdf | 4.39 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.