NTU Theses and Dissertations Repository › College of Electrical Engineering and Computer Science › Department of Electrical Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/61208
Full metadata record (DC field: value [language])
dc.contributor.advisor: 傅立成
dc.contributor.author: Pei-Wen Wu [en]
dc.contributor.author: 吳佩文 [zh_TW]
dc.date.accessioned: 2021-06-16T10:53:12Z
dc.date.available: 2016-08-16
dc.date.copyright: 2013-08-16
dc.date.issued: 2012
dc.date.submitted: 2013-08-09
dc.identifier.citation:
[1] J. J. Kuffner and S. M. LaValle, “RRT-Connect: An Efficient Approach to Single-Query Path Planning,” in Proc. IEEE Int. Conf. on Robotics and Automation, vol. 2, 2000.
[2] L. E. Kavraki, P. Svestka, J.-C. Latombe, and M. H. Overmars, “Probabilistic Roadmaps for Path Planning in High-Dimensional Configuration Spaces,” IEEE Trans. on Robotics and Automation, vol. 12, no. 4, pp. 566-580, Aug. 1996.
[3] N. Ratliff, M. Zucker, J. A. Bagnell, and S. Srinivasa, “CHOMP: Gradient Optimization Techniques for Efficient Motion Planning,” in IEEE Int. Conf. on Robotics and Automation, pp. 489-494, May 2009.
[4] M. Kalakrishnan, S. Chitta, E. Theodorou, P. Pastor, and S. Schaal, “STOMP: Stochastic Trajectory Optimization for Motion Planning,” in IEEE Int. Conf. on Robotics and Automation, pp. 4569-4574, May 2011.
[5] C. Park, J. Pan, and D. Manocha, “ITOMP: Incremental Trajectory Optimization for Real-time Replanning in Dynamic Environments,” in Proc. of the Int. Conf. on Automated Planning and Scheduling, 2012.
[6] S. Lindemann and S. LaValle, “Current Issues in Sampling-Based Motion Planning,” in Proc. of the Eighth Int. Symp., Springer, pp. 36-54, 2004.
[7] O. Brock and O. Khatib, “Elastic Strips: A Framework for Motion Generation in Human Environments,” Int. J. of Robotics Research, vol. 21, no. 12, pp. 1031-1052, 2002.
[8] S. Haddadin, H. Urbanek, S. Parusel, D. Burschka, J. Rossmann, A. Albu-Schaffer, and G. Hirzinger, “Real-time Reactive Motion Generation Based on Variable Attractor Dynamics and Shaped Velocities,” in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 3109-3116, 2010.
[9] J. Vannoy and J. Xiao, “Real-time Adaptive Motion Planning (RAMP) of Mobile Manipulators in Dynamic Environments with Unforeseen Changes,” IEEE Trans. on Robotics, vol. 24, no. 5, pp. 1199-1212, 2008.
[10] E. Yoshida and F. Kanehiro, “Reactive Robot Motion Using Path Replanning and Deformation,” in IEEE Int. Conf. on Robotics and Automation, pp. 5456-5462, May 2011.
[11] C. Park, J. Pan, and D. Manocha, “Real-time Optimization-based Planning in Dynamic Environments Using GPUs,” in IEEE Int. Conf. on Robotics and Automation, May 2013.
[12] F. Flacco, T. Kroger, A. De Luca, and O. Khatib, “A Depth Space Approach to Human-Robot Collision Avoidance,” in IEEE Int. Conf. on Robotics and Automation, pp. 338-345, 2012.
[13] S. M. Khansari-Zadeh and A. Billard, “A Dynamical System Approach to Realtime Obstacle Avoidance,” Autonomous Robots, pp. 433-454, 2012.
[14] S. M. Khansari-Zadeh and A. Billard, “Realtime Avoidance of Fast Moving Objects: A Dynamical System-based Approach,” in Electronic Proc. of the Workshop on Robot Motion Planning: Online, Reactive, and in Real-Time, IROS, 2012.
[15] J. Bohg and D. Kragic, “Learning Grasping Points with Shape Context,” Robotics and Autonomous Systems, vol. 58, no. 4, pp. 362-377, 2010.
[16] B. Huang, S. El-Khoury, M. Li, J. J. Bryson, and A. Billard, “Learning a Real Time Grasping Strategy,” in IEEE Int. Conf. on Robotics and Automation, 2013.
[17] Y. Jiang, S. Moseson, and A. Saxena, “Efficient Grasping from RGBD Images: Learning Using a New Rectangle Representation,” in IEEE Int. Conf. on Robotics and Automation, 2011.
[18] D. Fischinger, M. Vincze, and Y. Jiang, “Learning Grasps for Unknown Objects in Cluttered Scenes,” in IEEE Int. Conf. on Robotics and Automation, 2013.
[19] K. Hsiao, S. Chitta, M. Ciocarlie, and E. G. Jones, “Contact-Reactive Grasping of Objects with Partial Shape Information,” in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2010.
[20] E. Klingbeil, D. Rao, B. Carpenter, V. Ganapathi, A. Y. Ng, and O. Khatib, “Grasping with Application to an Autonomous Checkout Robot,” in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2012.
[21] J. Stuckler, R. Steffens, D. Holz, and S. Behnke, “Efficient 3D Object Perception and Grasp Planning for Mobile Manipulation in Domestic Environments,” Robotics and Autonomous Systems, 2012.
[22] F. Chaumette and S. Hutchinson, “Visual Servo Control, Part I: Basic Approaches,” IEEE Robotics and Automation Magazine, pp. 82-90, 2006.
[23] M. W. Spong, S. Hutchinson, and M. Vidyasagar, Robot Dynamics and Control, 2nd ed., chaps. 3-5 and 7, 2004.
[24] P. E. Hart, N. J. Nilsson, and B. Raphael, “A Formal Basis for the Heuristic Determination of Minimum Cost Paths,” IEEE Trans. on Systems Science and Cybernetics, vol. 4, no. 2, pp. 100-107, July 1968.
[25] G. R. Bradski, “Computer Vision Face Tracking for Use in a Perceptual User Interface,” Intel, 1998.
[26] OpenCV documentation, http://docs.opencv.org/, accessed May 2013.
[27] W. Kaplan, “Green's Theorem,” Section 5.5 in Advanced Calculus, 4th ed., Reading, MA: Addison-Wesley, pp. 286-291, 1991.
[28] SoftKinetic, http://www.softkinetic.com/, accessed April 2013.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/61208
dc.description.abstract: Nowadays, techniques in robotics are maturing, and robots are expected to handle more complex tasks in daily life. For high-level services, the robot's ability to grasp objects is essential. In this thesis, to increase the robot's ability to interact with humans and environments, we propose a novel manipulator grasping planner that emphasizes cooperation between two RGB-D cameras and adapts to dynamic environments.
We assume that both the object to grasp and the obstacles are dynamic. Our motion planner does not require a prior model of the whole environment; it needs only information about the object the robot is asked to grasp. The planner is based on the potential field and is composed of attractive and repulsive vectors generated from the distances between the manipulator and the target and the obstacles, respectively. The planner then determines the potential vector from the attractive and repulsive vectors in a variety of situations. The algorithm also includes an approach for escaping local minima. For robot control, we take multiple control points into account and apply the potential vector through joint-level control. The controller evaluates the mobility and kinematic constraints of the robot to modify the joint velocities. The experimental platform is a wheeled mobile robot with a 5-DOF manipulator, equipped with a close-range and a normal-range RGB-D camera as sensors. Several experiments show that our framework accommodates environmental changes and grasps the object under various situations. [en]
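The abstract's core idea, an attractive vector toward the target summed with a repulsive vector away from nearby obstacles, can be sketched as follows. This is a minimal illustration of a standard potential-field step, not the thesis's actual implementation; the gains `k_att`, `k_rep` and the influence distance `d0` are made-up values, and the single control point stands in for the multiple control points the thesis uses.

```python
import numpy as np

def potential_vector(p, target, obstacle, k_att=1.0, k_rep=0.5, d0=0.4):
    """Combined attractive + repulsive vector at control point p (all 3-vectors)."""
    v_att = k_att * (target - p)                 # pull toward the target
    d = np.linalg.norm(p - obstacle)
    if 1e-9 < d < d0:                            # repulsion acts only inside d0
        v_rep = k_rep * (1.0 / d - 1.0 / d0) * (p - obstacle) / d**2
    else:
        v_rep = np.zeros_like(p)
    return v_att + v_rep

# With a distant obstacle, motion is purely attractive; a nearby obstacle
# between the point and the target pushes the net vector away from it.
p = np.zeros(3)
target = np.array([1.0, 0.0, 0.0])
v_free = potential_vector(p, target, np.array([5.0, 5.0, 5.0]))
v_blocked = potential_vector(p, target, np.array([0.2, 0.0, 0.0]))
```

In the blocked case the repulsive term dominates near the obstacle, so the net vector points away from it even though the target lies behind it; this is exactly the situation where the thesis's local-minimum escape strategy (Section 3.3.1) is needed.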
dc.description.provenance: Made available in DSpace on 2021-06-16T10:53:12Z (GMT). No. of bitstreams: 1. ntu-101-R00921006-1.pdf: 3424535 bytes, checksum: d243ab97ede29d470b5a06a98626cf62 (MD5). Previous issue date: 2012 [en]
dc.description.tableofcontents: CONTENTS
Oral Defense Committee Certification #
Acknowledgements i
Chinese Abstract ii
ABSTRACT iii
CONTENTS iv
TABLE OF NOTATIONS vii
LIST OF FIGURES ix
LIST OF TABLES xi
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Background and Related Works 3
1.3 Objectives 5
1.4 Contributions 6
1.5 Thesis Organization 7
Chapter 2 Preliminaries 8
2.1 System Overview 8
2.1.1 Manipulator Motion Planner with RGB-D cameras 10
2.1.2 Robot Motion Controller 10
2.2 Kinematics 12
2.2.1 Forward Kinematics 12
2.2.2 Velocity Kinematics 13
2.3 Pseudo Inverse Matrix by Singular Value Decomposition (SVD) 15
2.4 Camera Perspective Model of RGB-D Camera 18
2.5 Artificial Potential Field 19
Chapter 3 Mobile Manipulator Grasping in Dynamic Environment 22
3.1 Attractive Action 24
3.1.1 Attractive Vector 25
3.1.2 Target Velocity Estimation 29
3.2 Repulsive Action 31
3.2.1 Repulsive Vector 32
3.2.2 Exclusion of the Target/Robot Body from the Collision-free Space 34
3.2.3 Obstacle Velocity Estimation 37
3.3 Potential Action 38
3.3.1 Escape from the Local Minimum 38
3.3.2 Potential Vector 42
3.3.3 Motion Decision 44
3.4 Motion Control of the Wheeled Mobile Manipulator 47
3.4.1 Agile Robot In Office (ARIO) 47
3.4.1.1 Kinematic Model of the Mobile Platform 48
3.4.1.2 Kinematic Model of the Manipulator 49
3.4.2 Motion Controller 51
Chapter 4 Experimental Setting and Results 54
4.1 Experimental Settings 54
4.1.1 Hardware 54
4.1.2 Setup 55
4.2 Experiment I (Grasping the Target on the Desk) 56
4.3 Experiment II (Obstacle Avoidance While Keeping Pose) 59
4.4 Experiment III (Local Minima Escape Strategy) 62
4.4.1 Gap existence 62
4.4.2 Wait 65
4.4.3 Others 68
4.5 Experiment IV (External Command) 70
4.6 Experiment V (Overall Test) 72
Chapter 5 Conclusions and Future Work 76
REFERENCES 78
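Section 2.3 of the outline covers the pseudo-inverse via singular value decomposition, which the controller needs to map a Cartesian potential vector to joint velocities. A minimal sketch of that mapping is below; the 3x5 Jacobian is a random placeholder standing in for the 5-DOF arm's actual Jacobian, and the tolerance `tol` is an assumed value.

```python
import numpy as np

def pinv_svd(J, tol=1e-6):
    """Moore-Penrose pseudo-inverse of J via SVD, zeroing tiny singular values."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_inv = np.where(s > tol, 1.0 / s, 0.0)   # invert only well-conditioned directions
    return Vt.T @ np.diag(s_inv) @ U.T

rng = np.random.default_rng(0)
J = rng.standard_normal((3, 5))               # placeholder Jacobian (3D velocity, 5 joints)
v = np.array([0.1, 0.0, -0.05])               # desired end-effector velocity
qdot = pinv_svd(J) @ v                        # minimum-norm joint velocities
```

Zeroing singular values below `tol` keeps the joint velocities bounded near kinematic singularities, where a naive inverse would blow up.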
dc.language.iso: en
dc.subject: 移動平台 [zh_TW]
dc.subject: 抓取 [zh_TW]
dc.subject: 動態環境 [zh_TW]
dc.subject: 彩色深度相機 [zh_TW]
dc.subject: mobile manipulator grasping [en]
dc.subject: RGB-D camera [en]
dc.subject: dynamic environments [en]
dc.title: 利用彩色深度相機建立移動型機械手臂於動態環境中抓取物體系統 [zh_TW]
dc.title: Manipulator Grasping on a Mobile Platform with Help from RGB-D Cameras in Dynamic Environments [en]
dc.type: Thesis
dc.date.schoolyear: 101-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 范欽雄, 羅仁權, 周瑞仁, 黃正民
dc.subject.keyword: 抓取, 移動平台, 彩色深度相機, 動態環境 [zh_TW]
dc.subject.keyword: mobile manipulator grasping, RGB-D camera, dynamic environments [en]
dc.relation.page: 81
dc.rights.note: 有償授權 (fee-based authorization)
dc.date.accepted: 2013-08-09
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 電機工程學研究所 (Graduate Institute of Electrical Engineering) [zh_TW]
Appears in Collections: 電機工程學系 (Department of Electrical Engineering)

Files in This Item:
ntu-101-1.pdf | 3.34 MB | Adobe PDF | Restricted access (未授權公開取用)

