Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86159
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 徐宏民(Winston H. Hsu) | |
dc.contributor.author | Bo-Siang Lu | en |
dc.contributor.author | 盧柏翔 | zh_TW |
dc.date.accessioned | 2023-03-19T23:39:40Z | - |
dc.date.copyright | 2022-09-26 | |
dc.date.issued | 2022 | |
dc.date.submitted | 2022-09-05 | |
dc.identifier.citation | [1] C. C. Beltran-Hernandez, D. Petit, I. G. Ramirez-Alpizar, and K. Harada. Variable compliance control for robotic peg-in-hole assembly: A deep-reinforcement-learning approach. Applied Sciences, 10(19):6923, 2020. [2] P. J. Besl and N. D. McKay. Method for registration of 3-D shapes. In Sensor Fusion IV: Control Paradigms and Data Structures, volume 1611, pages 586–606. SPIE, 1992. [3] D. Bogunowicz, A. Rybnikov, K. Vendidandi, and F. Chervinskii. Sim2real for peg-hole insertion with eye-in-hand camera. arXiv preprint arXiv:2005.14401, 2020. [4] S. R. Chhatpar and M. S. Branicky. Search strategies for peg-in-hole assemblies with position uncertainty. In Proceedings 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems. Expanding the Societal Role of Robotics in the Next Millennium (Cat. No. 01CH37180), volume 3, pages 1465–1470. IEEE, 2001. [5] Y. Fei and X. Zhao. An assembly process modeling and analysis for robotic multiple peg-in-hole. Journal of Intelligent and Robotic Systems, 36(2):175–189, 2003. [6] R. L. Haugaard, J. Langaa, C. Sloth, and A. G. Buch. Fast robust peg-in-hole insertion with continuous visual servoing. arXiv preprint arXiv:2011.06399, 2020. [7] Y. He, W. Sun, H. Huang, J. Liu, H. Fan, and J. Sun. PVN3D: A deep point-wise 3D keypoints voting network for 6DoF pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11632–11641, 2020. [8] T. Inoue, G. De Magistris, A. Munawar, T. Yokoya, and R. Tachibana. Deep reinforcement learning for high precision assembly tasks. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 819–825. IEEE, 2017. [9] S. James, K. Wada, T. Laidlow, and A. J. Davison. Coarse-to-fine Q-attention: Efficient learning for visual robotic manipulation via discretisation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13739–13748, 2022. [10] S. Jin, X. Zhu, C. Wang, and M. Tomizuka. Contact pose identification for peg-in-hole assembly under uncertainties. In 2021 American Control Conference (ACC), pages 48–53. IEEE, 2021. [11] L. Johannsmeier, M. Gerchow, and S. Haddadin. A framework for robot manipulation: Skill formalism, meta learning and adaptive control. In 2019 International Conference on Robotics and Automation (ICRA), pages 5844–5850. IEEE, 2019. [12] E. Johns. Coarse-to-fine imitation learning: Robot manipulation from a single demonstration. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 4613–4619. IEEE, 2021. [13] Y. L. Kim, H. C. Song, and J. B. Song. Hole detection algorithm for chamferless square peg-in-hole based on shape recognition using F/T sensor. International Journal of Precision Engineering and Manufacturing, 15(3):425–432, 2014. [14] M. A. Lee, C. Florensa, J. Tremblay, N. Ratliff, A. Garg, F. Ramos, and D. Fox. Guided uncertainty-aware policy optimization: Combining learning and model-based strategies for sample-efficient policy learning. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 7505–7512. IEEE, 2020. [15] M. A. Lee, Y. Zhu, K. Srinivasan, P. Shah, S. Savarese, L. Fei-Fei, A. Garg, and J. Bohg. Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks. In 2019 International Conference on Robotics and Automation (ICRA), pages 8943–8950. IEEE, 2019. [16] Z. Liu, L. Song, Z. Hou, K. Chen, S. Liu, and J. Xu. Screw insertion method in peg-in-hole assembly for axial friction reduction. IEEE Access, 7:148313–148325, 2019. [17] J. Luo, E. Solowjow, C. Wen, J. A. Ojea, and A. M. Agogino. Deep reinforcement learning for robotic assembly of mixed deformable and rigid objects. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2062–2069. IEEE, 2018. [18] M. Nigro, M. Sileo, F. Pierri, K. Genovese, D. D. Bloisi, and F. Caccavale. Peg-in-hole using 3D workpiece reconstruction and CNN-based hole detection. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4235–4240. IEEE, 2020. [19] E. Y. Puang, K. P. Tee, and W. Jing. KOVIS: Keypoint-based visual servoing with zero-shot sim-to-real transfer for robotics manipulation. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 7527–7533. IEEE, 2020. [20] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30, 2017. [21] E. Rohmer, S. P. Singh, and M. Freese. V-REP: A versatile and scalable robot simulation framework. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1321–1326. IEEE, 2013. [22] G. Schoettler, A. Nair, J. A. Ojea, S. Levine, and E. Solowjow. Meta-reinforcement learning for robotic industrial insertion tasks. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 9728–9735. IEEE, 2020. [23] T. Tang, H. C. Lin, Y. Zhao, W. Chen, and M. Tomizuka. Autonomous alignment of peg and hole by force/torque measurement for robotic assembly. In 2016 IEEE International Conference on Automation Science and Engineering (CASE), pages 162–167. IEEE, 2016. [24] T. Tang, H. C. Lin, Y. Zhao, Y. Fan, W. Chen, and M. Tomizuka. Teach industrial robots peg-hole-insertion by human demonstration. In 2016 IEEE International Conference on Advanced Intelligent Mechatronics (AIM), pages 488–494. IEEE, 2016. [25] J. C. Triyonoputro, W. Wan, and K. Harada. Quickly inserting pegs into uncertain holes using multi-view images and deep network trained on synthetic data. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5792–5799. IEEE, 2019. [26] E. Valassakis, N. Di Palo, and E. Johns. Coarse-to-fine for sim-to-real: Sub-millimetre precision across wide task spaces. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5989–5996. IEEE, 2021. [27] E. Valassakis, G. Papagiannis, N. Di Palo, and E. Johns. Demonstrate once, imitate immediately (DOME): Learning visual servoing for one-shot imitation learning. arXiv preprint arXiv:2204.02863, 2022. [28] K. Van Wyk, M. Culleton, J. Falco, and K. Kelly. Comparative peg-in-hole testing of a force-based manipulation controlled robotic hand. IEEE Transactions on Robotics, 34(2):542–549, 2018. [29] L. Xie, H. Yu, Y. Zhao, H. Zhang, Z. Zhou, M. Wang, Y. Wang, and R. Xiong. Learning to fill the seam by vision: Sub-millimeter peg-in-hole on unseen shapes in real world. arXiv preprint arXiv:2204.07776, 2022. [30] J. Xu, Z. Hou, Z. Liu, and H. Qiao. Compare contact model-based control and contact model-free learning: A survey of robotic peg-in-hole assembly strategies. arXiv preprint arXiv:1904.05240, 2019. [31] P. Zou, Q. Zhu, J. Wu, and R. Xiong. Learning-based optimization algorithms combining force control strategies for peg-in-hole assembly. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 7403–7410. IEEE, 2020. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86159 | - |
dc.description.abstract | 插件組裝是一項具有挑戰性的機器操作任務,因為它對估計誤差的容忍度很低。先前的方法依賴於力-扭矩控制或端到端視覺伺服,難以實現六自由度插入任務和處理較大的初始對準誤差。此外,在沒有進行高成本的重新訓練下,缺乏泛化能力使他們無法處理沒看過的目標。為此,我們提出漸進式視覺伺服(CFVS)插件組裝方法,該方法利用三維點雲訊息實現了能插入任意傾斜角的六自由度插件組裝。此外,CFVS 能夠透過快速姿態估計來處理較大的初始對準誤差,然後再經過細化調整。再者,通過引入置信度熱圖,CFVS 對各種形狀的目標都具有穩健性。大量實驗表明 CFVS 優於最先進的方法,在 3-DoF、4-DoF 和 6-DoF 插件中的平均成功率分別為 100%、91% 和 82%。 | zh_TW |
dc.description.abstract | Peg-in-hole is a challenging robotic manipulation task due to its low tolerance for estimation error. Prior methods rely on force-torque control or end-to-end visual servoing, and have difficulty achieving 6-DoF insertion or handling large initial alignment errors. Moreover, their lack of generalization ability prevents them from dealing with unseen target objects without costly re-training. To this end, we propose a Coarse-to-Fine Visual Servoing (CFVS) peg-in-hole assembly method, which achieves 6-DoF peg-in-hole assembly with an arbitrary tilt angle by exploiting 3D point-cloud information. CFVS is also capable of handling large initial alignment errors through fast pose estimation before refinement. Furthermore, by introducing a confidence heatmap, CFVS is robust to targets of various shapes. Extensive experiments show that CFVS outperforms state-of-the-art methods, obtaining 100%, 91%, and 82% average success rates in 3-DoF, 4-DoF, and 6-DoF peg-in-hole, respectively. | en |
dc.description.provenance | Made available in DSpace on 2023-03-19T23:39:40Z (GMT). No. of bitstreams: 1 U0001-2808202215342800.pdf: 4707675 bytes, checksum: 127c85bfc06135c11805bc7d386ed3b0 (MD5) Previous issue date: 2022 | en |
dc.description.tableofcontents | Verification Letter from the Oral Examination Committee i; Acknowledgements iii; 摘要 v; Abstract vii; Contents ix; List of Figures xi; List of Tables xiii; Chapter 1 Introduction 1; Chapter 2 Related Work 5; 2.1 Peg-In-Hole Assembly Task 5; 2.2 Visual Servo Control with Coarse-to-Fine Manner 7; Chapter 3 Method 9; 3.1 Open-Loop Control with Object-Agnostic Keypoint Network 9; 3.1.1 3D Keypoint Offsets 10; 3.1.2 Heatmap 10; 3.1.3 Loss 11; 3.1.4 Open-Loop Control 12; 3.2 Visual Servoing with Offsets Prediction Network 13; 3.2.1 Loss 13; 3.2.2 Closed-Loop Control 14; 3.3 Data Generation 15; 3.3.1 Coarse Dataset 15; 3.3.2 Fine Dataset 15; 3.3.3 Data Augmentation 16; Chapter 4 Experiments 17; 4.1 Experimental Settings 17; 4.2 Tasks 18; 4.3 Evaluation Metrics 19; 4.4 Baselines 19; 4.5 Results 20; 4.6 Ablations 22; Chapter 5 Conclusion 23; References 25 | |
dc.language.iso | en | |
dc.title | 漸進式視覺伺服於六自由度跨物體插件組裝任務 | zh_TW |
dc.title | CFVS: Coarse-to-Fine Visual Servoing for 6-DoF Object-Agnostic Peg-In-Hole Assembly | en |
dc.type | Thesis | |
dc.date.schoolyear | 110-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 陳文進(Wen-Chin Chen),陳奕廷(Yi-Ting Chen),葉梅珍(Mei-Chen Yeh) | |
dc.subject.keyword | 深度學習,插件組裝,視覺伺服,六自由度,漸進式,跨物體 | zh_TW |
dc.subject.keyword | Deep Learning,Peg-in-hole assembly,Visual Servoing,6-DoF,Coarse-to-fine,Object-agnostic | en |
dc.relation.page | 29 | |
dc.identifier.doi | 10.6342/NTU202202894 | |
dc.rights.note | Authorization granted (open access worldwide) | |
dc.date.accepted | 2022-09-06 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊網路與多媒體研究所 | zh_TW |
dc.date.embargo-lift | 2022-09-26 | - |
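The abstract above describes a two-stage pipeline: an open-loop coarse pose estimate from the point cloud, followed by closed-loop visual servoing that predicts small residual offsets until the peg is within insertion tolerance. A minimal sketch of that control flow is below; the two "networks" are stand-in functions (ground truth plus noise), and all names, noise levels, and tolerances are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def coarse_pose_estimate(hole_center, noise_mm=10.0):
    """Open-loop stage: one-shot hole pose estimate (stand-in for the
    keypoint network) -- ground truth plus a large coarse error."""
    rng = np.random.default_rng(0)
    return hole_center + rng.normal(scale=noise_mm, size=3)

def predict_offset(current_pose, hole_center, noise_mm=0.2):
    """Closed-loop stage: small residual offset toward the hole
    (stand-in for the offsets-prediction network)."""
    rng = np.random.default_rng(1)
    return (hole_center - current_pose) + rng.normal(scale=noise_mm, size=3)

def insert_peg(hole_center, tol_mm=1.0, max_iters=20):
    pose = coarse_pose_estimate(hole_center)      # coarse alignment
    for _ in range(max_iters):                    # fine visual servoing
        offset = predict_offset(pose, hole_center)
        if np.linalg.norm(offset) < tol_mm:       # within insertion tolerance
            return pose, True
        pose = pose + offset                      # move toward the hole
    return pose, False

pose, ok = insert_peg(np.array([100.0, 50.0, 0.0]))
```

The key design point mirrored here is that the coarse stage only needs to land within the fine stage's basin of convergence, so a cheap single-shot estimate tolerates a large initial alignment error.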
Appears in Collections: | 資訊網路與多媒體研究所
Files in This Item:
File | Size | Format | |
---|---|---|---|
U0001-2808202215342800.pdf | 4.6 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.