NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97391
Full metadata record
dc.contributor.advisor: 顏炳郎 [zh_TW]
dc.contributor.advisor: Ping-Lang Yen [en]
dc.contributor.author: 程偉軒 [zh_TW]
dc.contributor.author: Wei-Hsuan Cheng [en]
dc.date.accessioned: 2025-05-22T16:11:24Z
dc.date.available: 2025-05-23
dc.date.copyright: 2025-05-22
dc.date.issued: 2025
dc.date.submitted: 2025-05-07
dc.identifier.citation: [1] S. R. Ahmadzadeh, P. Kormushev, R. S. Jamisola, and D. G. Caldwell. Learning reactive robot behavior for autonomous valve turning. In 2014 IEEE International Conference on Humanoid Robots (Humanoids), 2014.
[2] ANYbotics. Anybotics introduces end-to-end robotic inspection solution. https://www.anybotics.com/news/anybotics-introduces-end-to-end-robotic-inspection-solution/, 2021.
[3] O. Baumann, A. Lenz, J. Hartl, L. Bernhard, and A. C. Knoll. Intuitive teaching of medical device operation to clinical assistance robots. International Journal of Computer Assisted Radiology and Surgery, 18(5):865–870, 2023.
[4] P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992.
[5] A. Billard and D. Kragic. Trends and challenges in robot manipulation. Science, 364, 2019.
[6] Boston Dynamics. The next step in safe autonomous robotic inspection. https://bostondynamics.com/blog/the-next-step-in-safe-autonomous-robotic-inspection/, 2023.
[7] F. Chaumette and S. Hutchinson. Visual servo control, part 1: Basic approaches. In IEEE Robotics and Automation Magazine, volume 13, pages 82–90, 2006.
[8] F. Chaumette and S. Hutchinson. Visual servoing and visual tracking. In B. Siciliano and O. Khatib, editors, Springer Handbook of Robotics, 2008.
[9] C. Dune, E. Marchand, and C. Leroux. One-click focus with eye-in-hand/ eye-to-hand cooperation. In Proc. IEEE International Conference on Robotics and Automation, pages 2471–2476, 2007.
[10] R. Edlinger, C. Föls, et al. Ai supported multi-functional gripping system for dexterous manipulation tasks. In 2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), 2022.
[11] P. Fankhauser and M. Hutter. Anymal: A unique quadruped robot conquering harsh environments. Research Features, (126):54–57, 2018.
[12] C. Gehring, P. Fankhauser, L. Isler, R. Diethelm, S. Bachmann, M. Potz, et al. Anymal in the field: Solving industrial inspection of an offshore hvdc platform with a quadrupedal robot. In 12th Conference on Field and Service Robotics (FSR 2019), 2019.
[13] M. Hutter, C. Gehring, A. Lauber, F. Gunther, C. D. Bellicoso, V. Tsounis, P. Fankhauser, R. Diethelm, S. Bachmann, M. Blösch, et al. Anymal—toward legged robots for harsh environments. Advanced Robotics, 31(17):918–931, 2017.
[14] A. Kailath and B. Hassibi. Linear Estimation. Prentice Hall, 2000.
[15] R. E. Kalman, P. L. Falb, and M. Arbib. Contributions to the theory of optimal control. Boletín de la Sociedad Matemática Mexicana, 5(2):102–119, 1960.
[16] K. D. Katyal et al. Approaches to robotic teleoperation in a disaster scenario: From supervised autonomy to direct control. In Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1874–1881, September 2014.
[17] W. Khalil and E. Dombre. Modeling, Identification and Control of Robots. Hermes Penton Science, 2002.
[18] D. Kragic, H. I. Christensen, et al. Survey on visual servoing for manipulation, 2002. Technical Report, Computational Vision and Active Perception Laboratory, 2002.
[19] S. Lee. Intelligent sensing and control for advanced teleoperators. June 1993.
[20] Y. Lin, J. Tremblay, S. Tyree, P. A. Vela, and S. Birchfield. Single-stage keypoint-based category-level object pose estimation from an RGB image. In IEEE International Conference on Robotics and Automation (ICRA), 2022.
[21] L. Lipson, Z. Teed, A. Goyal, and J. Deng. Coupled iterative refinement for 6d multi-object pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
[22] J. Liu, W. Sun, et al. Deep learning-based object pose estimation: A comprehensive survey. arXiv preprint arXiv:2405.07801, 2024.
[23] X. Liu, P. Huang, and Z. Liu. A novel contact state estimation method for robot manipulation skill learning via environment dynamics and constraints modeling. IEEE Transactions on Automation Science and Engineering, 19(4):3903–3913, 2022.
[24] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 2004.
[25] E. Malis, F. Chaumette, and S. Boudet. 2 1/2-d visual servoing with respect to unknown objects through a new estimation scheme of camera displacement. In International Journal of Computer Vision, volume 37, pages 79–97, 2000.
[26] M. T. Mason. Compliance and force control for computer controlled manipulators. 1979. MIT Artificial Intelligence Laboratory Memo 515.
[27] S. Müller, B. Stephan, T. Müller, and H. M. Gross. Robust perception skills for autonomous elevator operation by mobile robots. In 2023 European Conference on Mobile Robots (ECMR), 2023.
[28] R. Newbury, M. Gu, L. Chumbley, A. Mousavian, C. Eppner, J. Leitner, J. Bohg, A. Morales, T. Asfour, D. Kragic, et al. Deep learning approaches to grasp synthesis: A review. IEEE Transactions on Robotics, 2023.
[29] NVIDIA. Isaac sim. https://developer.nvidia.com/isaac/sim.
[30] A. Okamura, N. Smaby, and M. Cutkosky. An overview of dexterous manipulation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages 255–262, 2000.
[31] V. Perdereau and M. Drouin. A new scheme for hybrid force-position control. volume 11, pages 453–464, 1993.
[32] M. Raibert and J. Craig. Hybrid position/force control of manipulators. June 1981.
[33] Robotic Systems Lab, ETH Zürich. Aira challenge: Teleoperated mobile manipulation for industrial inspection. https://www.youtube.com/watch?v=LLD3BFS-qms, 2024.
[34] M. Selvaggio, M. Cognetti, S. Nikolaidis, S. Ivaldi, and B. Siciliano. Autonomy in physical human-robot interaction: A brief survey. volume 6, pages 7989–7996, October 2021.
[35] H. Seraji. Adaptive admittance control: An approach to explicit force control in compliant motion. In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, pages 2705–2712, 1994.
[36] K.-T. Song, S.-Y. Jiang, and M.-H. Lin. Interactive teleoperation of a mobile manipulator using a shared-control approach. volume 46, pages 834–845, December 2016.
[37] H. Sun, H. Miao, P. Ni, X. Zhu, and Q. Cao. A cockpit panel detection visual system with panel localization and button state recognition. In 2019 IEEE International Conference on Advanced Robotics and Mechatronics (ICARM), 2022.
[38] H. Sun and P. Ni. Panelpose: A 6d pose estimation of highly-variable panel object for robotic robust cockpit panel inspection. In Proceedings of IROS, 2023.
[39] R. Tsai and R. Lenz. A new technique for fully autonomous and efficient 3d robotics hand/eye calibration. 1989.
[40] E. Valassakis et al. Coarse-to-fine for sim-to-real: Sub-millimetre precision across wide task spaces. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021.
[41] L. Villani and J. D. Schutter. Force control. In B. Siciliano and O. Khatib, editors, Springer Handbook of Robotics, pages 161–185, 2008.
[42] S. Vougioukas. Bias estimation and gravity compensation for force-torque sensors. In Proceedings of the 3rd WSEAS Symposium on Mathematical Methods and Computational Techniques in Electrical Engineering, pages 82–85, 2001.
[43] F. Wang, G. Chen, and K. Hauser. Robot button pressing in human environments. In Proc. IEEE International Conference on Robotics and Automation (ICRA), pages 7173–7180, 2018.
[44] S. Wang, M. Lambeta, P. W. Chou, and R. Calandra. Tacto: A fast, flexible, and open-source simulator for high-resolution vision-based tactile sensors. volume 7, pages 3930–3937, 2022.
[45] B. Wen, W. Yang, J. Kautz, and S. Birchfield. FoundationPose: Unified 6d pose estimation and tracking of novel objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
[46] D. E. Whitney. Force feedback control of manipulator fine motions. pages 91–97, June 1977.
[47] D. Zhu, Z. Min, T. Zhou, T. Li, and M. Q.-H. Meng. An autonomous eye-in-hand robotic system for elevator button operation based on deep recognition network. IEEE Transactions on Instrumentation and Measurement, 70:1–13, 2021.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97391
dc.description.abstract: 於工業廠房中,需維護的設備種類繁多,傳統倚賴人工巡檢之方式不僅耗時,亦易受限且缺乏效率。因此,自主式機器人系統被視為可行的解決方案。本論文建立一套監督式遠端操作系統,以執行工業巡檢場景中的靈巧操作任務,例如按鈕與閥門之操作。為解決此類問題,本論文提出一種結合監督式控制 (supervisory control) 與局部自主性 (local autonomy) 的控制架構,兼具全遠端操作與全自主系統之優勢。使用者透過無線通訊控制遠端機器人執行任務,並可監督系統之自主運作流程,必要時亦可即時介入。此架構有助於減少人力監督負擔,同時提升任務之成功率。於局部自主性流程中,系統採用六自由度物件姿態估測與追蹤,以實現漸進式 (coarse-to-fine) 的視覺控制策略。在操作物件前,透過力導引自動對位(force-guided self-alignment)機制,進一步調整最終夾持姿態。實驗結果顯示,本論文所提出結合視覺與力覺感知模態之控制架構,能有效修正靈巧操作過程中的夾持姿態誤差,並於特定軸向達成小於 1.5 [mm] 及 2.6 [deg] 的誤差範圍。 [zh_TW]
dc.description.abstract: In industrial plants, a wide variety of equipment must be maintained, and surveillance has traditionally relied on scheduled manual tours, which are not only time-consuming but also limited in coverage and inefficient. An autonomous robot system is a viable solution to this limitation. In this thesis, a supervisory teleoperation system is set up to perform dexterous manipulation tasks in industrial surveillance scenarios, such as button and valve operations. A supervisory control scheme with local autonomy is proposed to address this problem, combining the advantages of full teleoperation and full autonomy. The user controls the remote robot over a wireless communication link to perform tasks, supervises the local autonomy, and can intervene in the task at any time if needed. This reduces the human supervision effort while increasing the task success rate. In the local-autonomy pipeline, object 6D pose estimation and tracking are employed in a coarse-to-fine vision-based control framework. A force-guided self-alignment scheme is then used to adjust the final grasp pose before operating the object (see the illustrative sketch following this metadata record). Experimental results show that the proposed control scheme, combining the vision and force-torque (F-T) modalities, adjusts the grasp pose for the dexterous manipulation tasks with errors of less than 1.5 mm and 2.6 deg along the selected axes. [en]
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-05-22T16:11:24Z. No. of bitstreams: 0 [en]
dc.description.provenance: Made available in DSpace on 2025-05-22T16:11:24Z (GMT). No. of bitstreams: 0 [en]
dc.description.tableofcontents:
Acknowledgements iii
摘要 v
Abstract vii
Contents ix
List of figures xiii
List of tables xvii
Nomenclature xix

Chapter 1 Introduction 1
1.1 Background and overview 1
1.2 Motivation 2
1.2.1 Current practice of industrial surveillance scenarios for robotic systems 2
1.2.2 Robotic dexterous manipulations 4
1.3 Problem statement 4
1.4 Contribution 5
1.5 Thesis assumptions 5
1.5.1 Assumption 1: Prior knowledge of target objects 6
1.5.2 Assumption 2: Static environment 6
1.5.3 Assumption 3: Static wrench measurement 6
1.5.4 Assumption 4: Known direction of the gravitational force 7

Chapter 2 Related work 9
2.1 Robotic buttons and valves operation 9
2.2 Robotic dexterous manipulations 10
2.3 Vision-based control and coarse-to-fine framework 11
2.3.1 Object 6D pose estimation 11
2.3.2 Vision-based control and visual servoing 14
2.3.2.1 The eye-in-hand (EIH) and eye-to-hand (ETH) configurations 14
2.3.2.2 Open-loop control and visual servoing 15
2.3.3 Coarse-to-fine framework 16
2.3.4 Limitation of dexterous manipulation relying on pure vision 18
2.4 Force control 19
2.4.1 Concept of force (admittance) control 19
2.4.2 Damping and stiffness control 19
2.5 Supervisory control and local autonomy 21

Chapter 3 Preliminaries 25
3.1 Spatial representations of robots 25
3.2 Robot Denavit–Hartenberg parameters and kinematics 26
3.3 Twist and wrench representations 28
3.4 Estimators and controllers 29
3.4.1 Kalman filter (KF) 29
3.4.2 Position-based visual servoing (PBVS) 31
3.4.3 Damping and stiffness control 31

Chapter 4 Experimental setup 33
4.1 Task environment and robot equipment setup 33
4.1.1 Sensors and actuators 33
4.2 System architecture (hardware and software) 34

Chapter 5 Calibration of sensors 37
5.1 Camera calibration and hand-eye calibration 37
5.1.1 Camera calibration 38
5.1.2 Hand-eye calibration 39
5.2 Force-torque (F-T) sensor calibration 39
5.2.1 Estimation of static wrench bias 41
5.2.2 Gravity compensation 41
5.2.3 Transformation of wrench representation 44

Chapter 6 Supervisory control 47
6.1 Coarse-to-fine vision-based control and force-guided self-alignment 47
6.2 Teleoperation phase 49
6.3 Detection phase 50
6.4 Localisation & approaching phase 50
6.4.1 Pose estimation and coarse control 50
6.4.2 Pose tracking and fine control 51
6.4.3 Involving robot kinematics and Kalman filtering 52
6.5 Operating phase 55
6.5.1 Force-guided self-alignment 56
6.5.2 Planned motion for operation 59

Chapter 7 Experimental validation 65
7.1 Specifications of robot, gripper, sensors, estimations, and controls 65
7.1.1 Specifications of robot, gripper, and sensors 65
7.1.2 Specifications of the estimation and control algorithms 66
7.2 Qualitative analysis 67
7.3 Quantitative analysis 69

Chapter 8 Discussion and conclusion 75
8.1 Limitations of current methodologies 75
8.1.1 Limitation of dexterous manipulation relying on vision and F-T modalities 75
8.1.2 Static wrench measurement 75
8.1.3 Limitation of object operating through planned motions 76
8.2 Conclusion and future work 76

References 79
dc.language.iso: en
dc.subject: 機器人按鈕與閥門操作 [zh_TW]
dc.subject: 力控制 [zh_TW]
dc.subject: 漸進式架構 [zh_TW]
dc.subject: 視覺伺服 [zh_TW]
dc.subject: 視覺式控制 [zh_TW]
dc.subject: 六自由度姿態估測 [zh_TW]
dc.subject: 局部自主性 [zh_TW]
dc.subject: 監督式控制 [zh_TW]
dc.subject: 無線通訊 [zh_TW]
dc.subject: 遠端遙控機器人 [zh_TW]
dc.subject: 靈巧操作 [zh_TW]
dc.subject: 工業巡檢 [zh_TW]
dc.subject: Force Control [en]
dc.subject: Industrial Surveillance [en]
dc.subject: Robotic Buttons and Valves Operation [en]
dc.subject: Dexterous Manipulation [en]
dc.subject: Teleoperation Robot [en]
dc.subject: Wireless Communication [en]
dc.subject: Supervisory Control [en]
dc.subject: Local Autonomy [en]
dc.subject: 6D Pose Estimation [en]
dc.subject: Vision-Based Control [en]
dc.subject: Visual Servoing [en]
dc.subject: Coarse-to-Fine Framework [en]
dc.title: 應用基於視覺與力覺之監督式遠端遙控於工業巡檢中的機器人靈巧操作任務 [zh_TW]
dc.title: Application of Vision- and Force-Based Supervisory Teleoperation for Robotic Dexterous Manipulation Tasks in Industrial Surveillance [en]
dc.type: Thesis
dc.date.schoolyear: 113-2
dc.description.degree: 碩士
dc.contributor.oralexamcommittee: 林沛群;連豊力 [zh_TW]
dc.contributor.oralexamcommittee: Pei-Chun Lin;Feng-Li Lian [en]
dc.subject.keyword: 工業巡檢,機器人按鈕與閥門操作,靈巧操作,遠端遙控機器人,無線通訊,監督式控制,局部自主性,六自由度姿態估測,視覺式控制,視覺伺服,漸進式架構,力控制 [zh_TW]
dc.subject.keyword: Industrial Surveillance,Robotic Buttons and Valves Operation,Dexterous Manipulation,Teleoperation Robot,Wireless Communication,Supervisory Control,Local Autonomy,6D Pose Estimation,Vision-Based Control,Visual Servoing,Coarse-to-Fine Framework,Force Control [en]
dc.relation.page: 84
dc.identifier.doi: 10.6342/NTU202500897
dc.rights.note: 同意授權(全球公開)
dc.date.accepted: 2025-05-08
dc.contributor.author-college: 生物資源暨農學院
dc.contributor.author-dept: 生物機電工程學系
dc.date.embargo-lift: 2030-05-06
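The abstract above describes a force-guided self-alignment step in which force-torque (F-T) feedback, after static-bias removal and gravity compensation (cf. Sections 5.2 and 6.5.1 in the table of contents), corrects the final grasp pose once vision has brought the gripper close to the target. The following Python sketch illustrates one minimal damping-type (admittance) alignment loop of that kind. It is an illustrative sketch only, not the thesis implementation: the interface callables read_wrench, send_twist, and tool_rotation, as well as the gains and thresholds, are hypothetical placeholders.

import numpy as np

# Hypothetical robot/sensor interface (placeholders, not the thesis software):
#   read_wrench()   -> np.ndarray (6,), raw F-T reading [fx, fy, fz, tx, ty, tz] in the sensor frame
#   send_twist(v)   -> commands a Cartesian twist [vx, vy, vz, wx, wy, wz] to the arm
#   tool_rotation() -> 3x3 rotation matrix of the sensor frame expressed in the base frame

def compensate(raw_wrench, bias, mass, r_com, R_sensor, g=np.array([0.0, 0.0, -9.81])):
    """Remove the static sensor bias and the gripper's gravity load from a raw wrench."""
    f_g = R_sensor.T @ (mass * g)              # gravity force rotated into the sensor frame
    t_g = np.cross(r_com, f_g)                 # torque of that force about the sensor origin
    return raw_wrench - bias - np.concatenate([f_g, t_g])

def self_align(read_wrench, send_twist, tool_rotation, bias, mass, r_com,
               gains=np.diag([0.002] * 3 + [0.02] * 3),   # damping-type admittance gains
               f_tol=0.5, t_tol=0.05, max_steps=500):
    """Drive the residual contact wrench toward zero by complying with it (self-alignment)."""
    for _ in range(max_steps):
        w = compensate(read_wrench(), bias, mass, r_com, tool_rotation())
        if np.linalg.norm(w[:3]) < f_tol and np.linalg.norm(w[3:]) < t_tol:
            send_twist(np.zeros(6))            # residual wrench small enough: stop
            return True
        send_twist(gains @ w)                  # move along the measured contact wrench
    send_twist(np.zeros(6))
    return False

In such a scheme, the bias vector and the gripper mass and centre of mass would first be identified from F-T readings at several static poses (as in the calibration chapter listed above), and self_align would only be invoked once the gripper is in light contact with the button or valve, with the coarse-to-fine vision-based stage assumed to have already positioned the gripper near the target.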
Appears in Collections: 生物機電工程學系

Files in This Item:
ntu-113-2.pdf: 74.32 MB, Adobe PDF (publicly available online after 2030-05-06)

