Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/78687
Title: Optimal Navigation System for a Mobile Robot to Execute Dynamical Multiple Social Tasks (使移動機器人執行動態多社交任務之最佳化導航系統)
Authors: Shao-Hung Chan (詹少宏)
Advisor: Li-Chen Fu (傅立成)
Keywords: Task-oriented Navigation System, Dynamic Task and Motion Planning, Robot Perception, Human Robot Interaction
Publication Year: 2019
Degree: Master's
Abstract: In recent years, research on social and companion robots has grown steadily, reflecting its importance in daily healthcare and human companionship. These robots also demonstrate potential applications, especially in a society whose elderly population grows year by year. For robots to assist family members and elders in a household environment, the prerequisite capabilities are robust localization, navigation, and sensing. In addition, the robots should be capable of perceiving the environment and the humans in it from visual and audio sensor data. In other words, robots should be able to estimate human status and understand verbal commands so as to complete social and service tasks in the area of intelligent human-robot interaction. More practically, a dynamic, anytime decision-making system is necessary for social and companion robots to generate adequate task and motion planning (TAMP) over a long period of time. On the other hand, for robots to become widely deployed in the future, efficient computation under limited computational resources should be taken into consideration when designing the overall system.

In this thesis, inspired by the dynamic TAMP framework, we propose a novel task-oriented navigation system that, with the help of perception, enables a robot to accomplish complex dynamic social interaction tasks. To organize these social tasks, we propose an instruction structure whose rewards decay over time according to priority. Moreover, we model the indoor environment as a graph on which instructions are allocated, and propose a task planning algorithm that, by optimizing the accumulated reward, considers not only the priorities among multiple tasks but also time efficiency. As for the perception that helps assign instruction priorities, we propose a visual sub-system for human localization, identification, and frame-wise hierarchical activity recognition, and, for verbal perception, a sub-system that combines speech understanding with sentiment recognition. Under limited computational speed and resources, the system aims to perform perception and decision making simultaneously by combining deep learning modules with heuristic algorithms. With the help of our system, the social robot not only meets each user's requirements but also interacts efficiently with people in a multi-human environment, achieving sophisticated human-robot interaction (HRI).
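The planning idea summarized in the abstract — instructions carrying rewards that decay over time, allocated on a graph of indoor locations, with the planner ordering tasks to maximize the accumulated reward — can be illustrated with a minimal sketch. Everything below (the linear decay model, the toy travel-time graph, the exhaustive ordering search, and all names) is an illustrative assumption, not the thesis's actual algorithm.

```python
import itertools

# Hypothetical linear decay: a task's reward shrinks the later it finishes.
def decayed_reward(base_reward, decay_rate, finish_time):
    return max(0.0, base_reward - decay_rate * finish_time)

# Toy indoor graph: symmetric travel times between locations.
TRAVEL = {("dock", "kitchen"): 4, ("dock", "sofa"): 2, ("kitchen", "sofa"): 3}

def travel_time(a, b):
    if a == b:
        return 0
    return TRAVEL.get((a, b), TRAVEL.get((b, a)))

# Instructions: (name, location, base_reward, decay_rate, service_time).
# Higher decay_rate plays the role of higher priority (it punishes delay more).
INSTRUCTIONS = [
    ("deliver water", "kitchen", 10.0, 0.5, 2),
    ("greet guest",   "sofa",     8.0, 1.0, 1),
]

def plan_reward(order, start="dock"):
    """Accumulated decayed reward for executing instructions in `order`."""
    t, pos, total = 0.0, start, 0.0
    for name, loc, base, rate, service in order:
        t += travel_time(pos, loc) + service   # travel, then serve
        total += decayed_reward(base, rate, t)  # reward at finish time
        pos = loc
    return total

# Exhaustive search over orderings (fine for a handful of tasks; the thesis's
# planner would use a more scalable optimization).
best = max(itertools.permutations(INSTRUCTIONS), key=plan_reward)
```

In this toy instance, serving the fast-decaying "greet guest" first yields a higher accumulated reward than visiting the kitchen first, which is exactly the trade-off between priority and total execution time that the proposed algorithm optimizes.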
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/78687
DOI: 10.6342/NTU201902426
Fulltext Rights: Authorized with fee (有償授權)
Appears in Collections: Department of Electrical Engineering
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-108-R06921017-1.pdf (Restricted Access) | 5.67 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.