Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99276

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 黃漢邦 | zh_TW |
| dc.contributor.advisor | Han-Pang Huang | en |
| dc.contributor.author | 郭晟銘 | zh_TW |
| dc.contributor.author | Cheng-Ming Kuo | en |
| dc.date.accessioned | 2025-08-21T17:05:24Z | - |
| dc.date.available | 2025-08-22 | - |
| dc.date.copyright | 2025-08-21 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-07-31 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99276 | - |
| dc.description.abstract | 隨著智慧機器人技術的持續進步,其應用逐步從工業任務延伸至人機互動領域,機器人與使用者之間的溝通能力亦成為重要研究議題。雖然語音與文字已廣泛應用於人機介面設計,但對於以手語作為主要溝通媒介的聾人而言,這類方式未必合適。因此,本研究旨在開發一套具備台灣手語溝通能力的機器人系統,透過手語辨識、動作模仿與手語語句動作展示,實現與聾人之間的互動。
在辨識系統方面,本研究採用視覺骨架分析技術,利用 Google MediaPipe 架構萃取影像中人物之關節資訊,以降低原始影像特徵的處理複雜度,並搭配機器學習辨識模型,進行動態手語詞彙與句子的辨識訓練。訓練資料來自臺灣大學機器人實驗室所建立的台灣手語影片資料集以及公開資料集,用於模型的訓練與測試。此機器學習方法在台灣手語辨識研究中表現出不錯的辨識率。 在手語動作生成與模仿方面,本研究提出一套整合式手語模仿與控制系統,將辨識結果轉換為手語人型機器人手臂與手掌的控制指令。該系統透過角度映射與運動解算,實現多關節的協調控制。此外,亦設計自碰撞避免機制與軌跡後處理方法,以提升模仿動作的穩定性與流暢性,最終結合雙臂的規劃與控制系統,實現機器人的手語動作展示。 整體實驗結果顯示本系統能準確呈現複雜的台灣手語動作,並具備基本的手語辨識能力,為未來手語機器人在輔助聾人溝通與手語教學等應用奠定基礎。 | zh_TW |
| dc.description.abstract | With the continuous advancement of intelligent robotics technology, applications have gradually expanded from industrial tasks to the domain of human–robot interaction. Communication between robots and users has thus become a critical area of research. While voice and text-based interfaces are widely adopted in human–computer interaction, such methods may not be suitable for individuals who rely primarily on sign language. Therefore, this study aims to develop a humanoid robot system capable of communicating using Taiwanese Sign Language (TSL), enabling interaction with the deaf community through sign language recognition, motion imitation, and full-sentence sign expression.
For the recognition component, this study adopts a visual skeleton-based analysis approach using Google MediaPipe to extract joint information from video frames, thereby reducing the complexity of processing raw image features. A machine learning model, trained on both a self-built TSL video dataset from the NTU Robotics Laboratory and publicly available datasets, is employed to recognize dynamic TSL words and sentences. The proposed approach demonstrates promising accuracy and efficiency in TSL recognition tasks. In terms of sign language generation and imitation, this research presents an integrated control system that translates recognition results into joint control commands for the robot’s arms and hands. By mapping joint angles and solving inverse kinematics, the system achieves coordinated multi-joint control. Additionally, a self-collision avoidance mechanism and post-processing methods are implemented to enhance the stability and smoothness of imitated motions. The system ultimately integrates dual-arm planning and control to enable the humanoid robot to perform complete TSL sentence gestures. Experimental results confirm that the proposed system can accurately reproduce complex TSL movements and possesses fundamental recognition capabilities. This work establishes a solid foundation for future applications of sign language robots in communication assistance, language education, and accessible human–robot interaction. | en |
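The abstract above describes reducing raw video to skeleton (joint) features before classification. As a minimal sketch of the kind of per-frame landmark normalization such a pipeline might use, assuming 2D joint coordinates arrive from a pose estimator such as MediaPipe (the anchor joint and the shoulder indices below are illustrative assumptions, not the thesis's actual feature design):

```python
import numpy as np

def normalize_landmarks(landmarks, anchor_idx=0, scale_pair=(11, 12)):
    """Make per-frame joint features invariant to signer position and
    camera distance: translate so the anchor joint sits at the origin,
    then scale by the distance between two reference joints."""
    pts = np.asarray(landmarks, dtype=float)   # shape (n_joints, 2) or (n_joints, 3)
    pts = pts - pts[anchor_idx]                # remove global translation
    ref = np.linalg.norm(pts[scale_pair[0]] - pts[scale_pair[1]])
    if ref > 1e-8:                             # skip degenerate frames
        pts = pts / ref                        # remove global scale
    return pts.flatten()                       # one flat feature vector per frame
```

A sequence of such vectors, one per frame, could then be fed to a sequence model for word- or sentence-level recognition.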
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-08-21T17:05:24Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-08-21T17:05:24Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Oral Examination Committee Certification (Chinese) iii
Oral Examination Committee Certification (English) v
Acknowledgements vii
Abstract (Chinese) ix
Abstract (English) xi
List of Tables xvii
List of Figures xix
Chapter 1 Introduction 1: 1.1 Motivations 1; 1.2 Contributions 5; 1.3 Organization 7
Chapter 2 Literature Review 9: 2.1 Sign Language Robot 9; 2.2 Sign Language Recognition 13; 2.3 Imitation of Human Motion by Robotic Arm-Hand 16; 2.4 Sign Language Robot Palm Design 21; 2.5 Summary 23
Chapter 3 Kinematics and Dynamics of Sign Language Robot 25: 3.1 Kinematics of Robotic Systems 26 (3.1.1 Forward Kinematics 26; 3.1.2 Inverse Kinematics 28); 3.2 Dynamics of the Robotic System 30; 3.3 Robot Hand Kinematics Analysis 32 (3.3.1 Jacobian Matrix Analysis 32; 3.3.2 Inverse Kinematics 34); 3.4 Floating-Base Kinematics 35; 3.5 Summary 40
Chapter 4 Taiwanese Sign Language Recognition System 41: 4.1 Taiwanese Sign Language Linguistics 41; 4.2 The Five Elements of Taiwanese Sign Language Gestures 43; 4.3 Human Skeleton Detection 47; 4.4 Machine Learning Method 51 (4.4.1 Manual Feature Extraction 51; 4.4.2 Non-Manual Feature Extraction 53; 4.4.3 Models 55; 4.4.4 Datasets 58); 4.5 Word Processing System 60 (4.5.1 Word Processing 61; 4.5.2 Segmentation and Part-of-Speech Classification 62; 4.5.3 Word Sequence and Database Matching Judgment 63); 4.6 Summary 65
Chapter 5 Motion Imitation and Control System 67: 5.1 3D Pose Estimation Method 67; 5.2 Human Motion to Sign Language Robot Motion 71 (5.2.1 Upper Limb Mapping Method 71; 5.2.2 Mapping of Other Parts 79); 5.3 Trajectory Post-Processing 80 (5.3.1 Joint Limit Avoidance 81; 5.3.2 Self-Collision Avoidance 81; 5.3.3 Trajectory Smoothing 84; 5.3.4 Motion Data 84); 5.4 Planning and Control of Sign Language Robot 85 (5.4.1 Planning of the Sign Language Robot 85; 5.4.2 Control of the Sign Language Robot 88; 5.4.3 Real-Time Control System 90); 5.5 Summary 94
Chapter 6 Simulations and Experiments 97: 6.1 Specification of the NTU Humanoid Robot 97 (6.1.1 Hardware 97; 6.1.2 Software 102; 6.1.3 Simulation Environment 102); 6.2 Results of Taiwanese Sign Language Recognition 104 (6.2.1 Recognition Effectiveness of Words 104; 6.2.2 Sentence Recognition 107; 6.2.3 Discussion 111); 6.3 Results of Sign Language Performance 111 (6.3.1 Performance of 50 Sign Language Handshapes 111; 6.3.2 Results of Self-Collision Avoidance 116; 6.3.3 Performance of Sign Language Sentences 120; 6.3.4 Verifying the Meaning of the Sign Language Robot's Movements 126; 6.3.5 Discussion 139); 6.4 Summary 140
Chapter 7 Conclusions and Future Work 143: 7.1 Conclusions 143; 7.2 Future Work 144
References 147
Appendices 153: A. 305 Signs in the TSL Dataset 153; B. 50 Sentences in the TSL Dataset 167
Biography 173 | - |
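The table of contents lists joint limit avoidance and trajectory smoothing as post-processing steps (Sections 5.3.1 and 5.3.3). As an illustration of that general idea, not the thesis's actual implementation, a common minimal form clamps each joint trajectory to its limits and applies a centered moving-average filter:

```python
import numpy as np

def postprocess_trajectory(traj, lower, upper, window=5):
    """Joint-limit clamping followed by moving-average smoothing.
    traj: array of shape (n_frames, n_joints); lower/upper: joint limits."""
    traj = np.clip(np.asarray(traj, dtype=float), lower, upper)
    if window <= 1:
        return traj
    kernel = np.ones(window) / window
    pad = (window // 2, window - 1 - window // 2)   # centered edge padding
    out = np.empty_like(traj)
    for j in range(traj.shape[1]):                  # smooth each joint independently
        padded = np.pad(traj[:, j], pad, mode="edge")
        out[:, j] = np.convolve(padded, kernel, mode="valid")
    return np.clip(out, lower, upper)               # limits still hold after smoothing
```

Self-collision avoidance would be a separate check on the resulting poses; it is omitted here because it depends on the robot's geometry.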
| dc.language.iso | en | - |
| dc.subject | 臺灣手語 | zh_TW |
| dc.subject | 視覺辨識 | zh_TW |
| dc.subject | 機器學習 | zh_TW |
| dc.subject | 人體骨架辨識模型 | zh_TW |
| dc.subject | 手掌手臂系統 | zh_TW |
| dc.subject | 動作模仿 | zh_TW |
| dc.subject | 手語機器人 | zh_TW |
| dc.subject | 人型機器人 | zh_TW |
| dc.subject | hand-arm system | en |
| dc.subject | motion imitation system | en |
| dc.subject | machine learning | en |
| dc.subject | visual recognition | en |
| dc.subject | MediaPipe | en |
| dc.subject | Taiwanese Sign Language | en |
| dc.subject | humanoid robot | en |
| dc.subject | sign language robot | en |
| dc.title | 手語機器人之軌跡規劃與即時控制 | zh_TW |
| dc.title | Trajectory Planning and Real-Time Control of the Sign Language Robot | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-2 | - |
| dc.description.degree | Master's | - |
| dc.contributor.oralexamcommittee | 郭重顯;林峻永;蔡清元 | zh_TW |
| dc.contributor.oralexamcommittee | Chung-Hsien Kuo;Chun-Yeon Lin;Tsing-Iuan Tsay | en |
| dc.subject.keyword | 手語機器人,人型機器人,臺灣手語,人體骨架辨識模型,視覺辨識,機器學習,動作模仿,手掌手臂系統 | zh_TW |
| dc.subject.keyword | sign language robot,humanoid robot,Taiwanese Sign Language,MediaPipe,visual recognition,machine learning,motion imitation system,hand-arm system | en |
| dc.relation.page | 173 | - |
| dc.identifier.doi | 10.6342/NTU202502705 | - |
| dc.rights.note | Not authorized | - |
| dc.date.accepted | 2025-08-02 | - |
| dc.contributor.author-college | College of Engineering | - |
| dc.contributor.author-dept | Department of Mechanical Engineering | - |
| dc.date.embargo-lift | N/A | - |
| Appears in Collections: | Department of Mechanical Engineering |
Files in this item:
| File | Size | Format |
|---|---|---|
| ntu-113-2.pdf (Restricted Access) | 9.77 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
