Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/78687
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 傅立成(Li-Chen Fu) | |
dc.contributor.author | Shao-Hung Chan | en |
dc.contributor.author | 詹少宏 | zh_TW |
dc.date.accessioned | 2021-07-11T15:12:16Z | - |
dc.date.available | 2022-08-14 | |
dc.date.copyright | 2019-08-14 | |
dc.date.issued | 2019 | |
dc.date.submitted | 2019-08-02 | |
dc.identifier.citation | [1] C. Feichtenhofer, A. Pinz, and R. P. Wildes, “Spatiotemporal Multiplier Networks for Video Action Recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 4768–4777.
[2] Y. Zhu, Z. Lan, S. Newsam, and A. Hauptmann, “Hidden Two-Stream Convolutional Networks for Action Recognition,” in Computer Vision – ACCV 2018, Lecture Notes in Computer Science, vol. 11363, 2018, pp. 363–378.
[3] K. Soomro, A. R. Zamir, and M. Shah, “UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild,” CoRR, vol. abs/1212.0, 2012.
[4] D. Feillet, P. Dejax, and M. Gendreau, “Traveling Salesman Problems with Profits: An Overview,” Transp. Sci., vol. 39, no. 2, pp. 188–205, 2001.
[5] E. Angelelli, C. Bazgan, M. G. Speranza, and Z. Tuza, “Complexity and approximation for Traveling Salesman Problems with profits,” Theor. Comput. Sci., vol. 531, pp. 54–65, 2014.
[6] S. Vasudevan, S. Gächter, V. Nguyen, and R. Siegwart, “Cognitive maps for mobile robots - an object based approach,” Rob. Auton. Syst., vol. 55, no. 5, pp. 359–371, 2007.
[7] P. Corke, Robotics, Vision and Control: Fundamental Algorithms in MATLAB, 1st ed., vol. 73. Springer Publishing Company, Incorporated, 2013.
[8] R. R. Murphy, Introduction to AI Robotics, 1st ed. Cambridge, MA, USA: MIT Press, 2000.
[9] W. Burgard, C. Stachniss, G. Grisetti, B. Steder, R. Kümmerle, C. Dornhege, M. Ruhnke, A. Kleiner, and J. D. Tardós, “A Comparison of SLAM Algorithms Based on a Graph of Relations,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2009, pp. 2089–2095.
[10] J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-Scale Direct Monocular SLAM,” in Computer Vision – ECCV 2014, Lecture Notes in Computer Science, vol. 8690, 2014, pp. 834–849.
[11] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, “ORB-SLAM: A Versatile and Accurate Monocular SLAM System,” IEEE Trans. Robot., vol. 31, no. 5, pp. 1147–1163, 2015.
[12] R. Mur-Artal and J. D. Tardós, “ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras,” IEEE Trans. Robot., vol. 33, no. 5, pp. 1255–1262, 2017.
[13] J. Tang, Y. Chen, A. Jaakkola, J. Liu, J. Hyyppä, and H. Hyyppä, “NAVIS-An UGV Indoor Positioning System Using Laser Scan Matching for Large-Area Real-Time Applications,” Sensors, vol. 14, no. 7, pp. 11805–11824, 2014.
[14] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, 1999.
[15] G. Grisetti, C. Stachniss, and W. Burgard, “Improving Grid-based SLAM with Rao-Blackwellized Particle Filters by Adaptive Proposals and Selective Resampling,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2005, pp. 2432–2437.
[16] G. Grisetti, C. Stachniss, and W. Burgard, “Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters,” IEEE Trans. Robot., vol. 23, no. 1, pp. 34–46, 2007.
[17] H. Kretzschmar and C. Stachniss, “Information-theoretic compression of pose graphs for laser-based SLAM,” Int. J. Rob. Res., vol. 31, no. 11, pp. 1219–1230, 2012.
[18] G. Grisetti, R. Kümmerle, C. Stachniss, and W. Burgard, “A Tutorial on Graph-Based SLAM,” IEEE Intell. Transp. Syst. Mag., 2010.
[19] R. Girshick, “Fast R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440–1448.
[20] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779–788.
[21] J. Redmon and A. Farhadi, “YOLO9000: Better, Faster, Stronger,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 6517–6525.
[22] J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement,” CoRR, vol. abs/1804.0, 2018.
[23] J. J. Gibson, The Senses Considered as Perceptual Systems. 2012.
[24] V. Dutta and T. Zielinska, “Action Prediction Based on Physically Grounded Object Affordances in Human-Object Interactions,” in Proceedings of the International Workshop on Robot Motion and Control (RoMoCo), 2017, pp. 47–52.
[25] H. S. Koppula and A. Saxena, “Anticipating Human Activities Using Object Affordances for Reactive Robotic Response,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 1, pp. 14–29, 2016.
[26] H. Choset, K. M. Lynch, S. Hutchinson, G. A. Kantor, W. Burgard, L. E. Kavraki, and S. Thrun, Principles of Robot Motion: Theory, Algorithms, and Implementations. MIT Press, 2005.
[27] P. E. Hart, N. J. Nilsson, and B. Raphael, “A Formal Basis for the Heuristic Determination of Minimum Cost Paths,” IEEE Trans. Syst. Sci. Cybern., vol. SSC-4, no. 2, pp. 100–107, 1968.
[28] S. Koenig and M. Likhachev, “D* Lite,” in Proceedings of the National Conference on Artificial Intelligence (AAAI), 2002, pp. 476–483.
[29] S. M. LaValle, “Rapidly-Exploring Random Trees: A New Tool for Path Planning,” TR 98-11, 1998.
[30] M. Ghallab, D. Nau, and P. Traverso, Automated Planning: Theory and Practice, 3rd ed. Morgan Kaufmann, 2004.
[31] T. Lozano-Pérez, J. L. Jones, E. Mazer, and P. A. O’Donnell, “Task-Level Planning of Pick-and-Place Robot Motions,” Computer, vol. 22, no. 3, pp. 21–29, 1989.
[32] N. T. Dantam, S. Chaudhuri, and L. E. Kavraki, “The Task-Motion Kit: An Open Source, General-Purpose Task and Motion-Planning Framework,” IEEE Robot. Autom. Mag., vol. 25, no. 3, pp. 61–70, 2018.
[33] W. Jacak, “Robot Task And Movement Planning,” in AI, Simulation and Planning in High Autonomy Systems, 1990, pp. 168–173.
[34] L. P. Kaelbling and T. Lozano-Pérez, “Hierarchical Task and Motion Planning in the Now,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2011, pp. 1470–1477.
[35] N. T. Dantam, Z. K. Kingston, S. Chaudhuri, and L. E. Kavraki, “Incremental Task and Motion Planning: A Constraint-Based Approach,” in Robotics: Science and Systems, 2016.
[36] R. Chitnis, D. Hadfield-Menell, A. Gupta, S. Srivastava, E. Groshev, C. Lin, and P. Abbeel, “Guided Search for Task and Motion Plans Using Learned Heuristics,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 447–454.
[37] K. Baizid, A. Yousnadj, A. Meddahi, R. Chellali, and J. Iqbal, “Time scheduling and optimization of industrial robotized tasks based on genetic algorithms,” Robot. Comput. Integr. Manuf., vol. 34, pp. 140–150, 2015.
[38] B. Kim, L. P. Kaelbling, and T. Lozano-Pérez, “Learning to Guide Task and Motion Planning using Score-Space Representation,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 2810–2817.
[39] P. Muñoz, M. D. R-Moreno, and D. F. Barrero, “Unified framework for path-planning and task-planning for autonomous robots,” Rob. Auton. Syst., vol. 82, pp. 1–14, 2016.
[40] C. Wong, E. Yang, X. Yan, and D. Gu, “Dynamic Anytime Task and Path Planning for Mobile Robots,” in UKRAS19 Conference on Embedded Intelligence, 2019.
[41] Y. Jiang, F. Yang, S. Zhang, and P. Stone, “Integrating Task-Motion Planning with Reinforcement Learning for Robust Decision Making in Mobile Robots,” in Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2019.
[42] R. Alami, A. Clodic, V. Montreuil, E. A. Sisbot, and R. Chatila, “Toward Human-Aware Robot Task Planning,” in Proceedings of the AAAI Spring Symposium “To Boldly Go Where No Human-Robot Team Has Gone Before,” 2006.
[43] V. V. Unhelkar, P. A. Lasota, Q. Tyroller, R. Buhai, L. Marceau, B. Deml, and J. A. Shah, “Human-Aware Robotic Assistant for Collaborative Assembly: Integrating Human Motion Prediction With Planning in Time,” IEEE Robot. Autom. Lett., vol. 3, no. 3, pp. 2394–2401, 2018.
[44] W. Y. G. Louie, T. Vaquero, G. Nejat, and J. C. Beck, “An autonomous assistive robot for planning, scheduling and facilitating multi-user activities,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2014, pp. 5292–5298.
[45] K. E. C. Booth, T. T. Tran, G. Nejat, and J. C. Beck, “Mixed-Integer and Constraint Programming Techniques for Mobile Robot Task Planning,” IEEE Robot. Autom. Lett., vol. 1, no. 1, pp. 500–507, 2016.
[46] Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh, “OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields,” arXiv preprint arXiv:1812.08008, 2018.
[47] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, “Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 1302–1310.
[48] T. Simon, H. Joo, I. Matthews, and Y. Sheikh, “Hand Keypoint Detection in Single Images Using Multiview Bootstrapping,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 1145–1153.
[49] S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh, “Convolutional Pose Machines,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4724–4732.
[50] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: Convolutional Architecture for Fast Feature Embedding,” arXiv preprint arXiv:1408.5093, 2014.
[51] G. Hidalgo, Z. Cao, T. Simon, S.-E. Wei, H. Joo, and Y. Sheikh, “CMU-Perceptual-Computing-Lab/openpose,” 2019. [Online]. Available: https://github.com/CMU-Perceptual-Computing-Lab/openpose
[52] “JSK ROS Packages for Smartphones,” 2019. [Online]. Available: https://github.com/jsk-ros-pkg/jsk_smart_apps
[53] C. J. Hutto and E. E. Gilbert, “VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text,” in Proceedings of the International AAAI Conference on Web and Social Media (ICWSM), 2014, pp. 216–225.
[54] A. Fusiello, Elements of Geometric Computer Vision. 2006.
[55] T. Riemersma, “Colour metric,” 2010. [Online]. Available: http://www.compuphase.com/cmetric.htm
[56] K. M. Varadarajan and M. Vincze, “AfNet: The Affordance Network,” in Computer Vision – ACCV 2012, Lecture Notes in Computer Science, vol. 7724, 2012, pp. 512–523.
[57] “vaderSentiment: VADER Sentiment Analysis,” 2019. [Online]. Available: https://github.com/cjhutto/vaderSentiment
[58] M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs, E. Berger, R. Wheeler, and A. Ng, “ROS: an open-source Robot Operating System,” in ICRA Workshop on Open Source Software, 2009.
[59] D. Fox, W. Burgard, F. Dellaert, and S. Thrun, “Monte Carlo Localization: Efficient Position Estimation for Mobile Robots,” in Proceedings of the National Conference on Artificial Intelligence (AAAI), 1999.
[60] D. Fox, “KLD-Sampling: Adaptive Particle Filters,” in Proceedings of the International Conference on Neural Information Processing Systems (NIPS), 2001, pp. 713–720.
[61] D. Fox, “Adapting the Sample Size in Particle Filters Through KLD-Sampling,” Int. J. Robot. Res., 2003.
[62] B. P. Gerkey, “amcl - ROS Wiki,” 2018. [Online]. Available: http://wiki.ros.org/amcl
[63] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, 3rd ed. The MIT Press, 2009.
[64] D. V. Lu, D. Hershberger, and W. D. Smart, “Layered costmaps for context-sensitive navigation,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2014, pp. 709–715.
[65] D. Fox, W. Burgard, and S. Thrun, “The Dynamic Window Approach to Collision Avoidance,” IEEE Robot. Autom. Mag., vol. 4, pp. 23–33, 1997.
[66] C. Rösmann, W. Feiten, T. Wösch, F. Hoffmann, and T. Bertram, “Trajectory Modification Considering Dynamic Constraints of Autonomous Robots,” in Proceedings of the 7th German Conference on Robotics, 2012, pp. 74–79.
[67] C. Rösmann, W. Feiten, T. Wösch, F. Hoffmann, and T. Bertram, “Efficient Trajectory Optimization using a Sparse Model,” in Proceedings of the European Conference on Mobile Robots (ECMR), 2013, pp. 138–143.
[68] C. Rösmann, F. Hoffmann, and T. Bertram, “Planning of Multiple Robot Trajectories in Distinctive Topologies,” in Proceedings of the European Conference on Mobile Robots (ECMR), 2015, pp. 1–6.
[69] C. Rösmann, F. Hoffmann, and T. Bertram, “Integrated online trajectory planning and optimization in distinctive topologies,” Rob. Auton. Syst., vol. 88, pp. 142–153, 2017.
[70] C. Rösmann, F. Hoffmann, and T. Bertram, “Kinodynamic Trajectory Optimization and Control for Car-Like Robots,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 5681–5686.
[71] K. Chatterjee and T. A. Henzinger, 25 Years of Model Checking. Berlin, Heidelberg: Springer-Verlag, 2008.
[72] S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. Upper Saddle River, NJ: Prentice Hall, 2010.
[73] D. L. Poole and A. K. Mackworth, Artificial Intelligence: Foundations of Computational Agents, 2nd ed. Cambridge University Press, 2017.
[74] P.-T. Wu, C.-A. Yu, S.-H. Chan, M.-L. Chiang, and L.-C. Fu, “Multi-Layer Environmental Affordance Map for Robust Indoor Localization, Event Detection and Social Friendly Navigation,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019.
[75] S.-H. Chan, X. Xu, P.-T. Wu, M.-L. Chiang, and L.-C. Fu, “Real-time Obstacle Avoidance using Supervised Recurrent Neural Network with Automatic Data Collection and Labeling,” in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2019. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/78687 | - |
dc.description.abstract | 近年來,由於人口老化與少子化等因素,老人長照以及居家陪伴等需求日顯重要,與之相對的社交與陪伴型機器人的相關研究隨之增加。這些機器人更展現了在未來高齡化社會中的潛在應用能力。為了能使機器人輔助家庭成員與年長者的生活起居,基本的功能包括強健的定位能力、導航能力、與感測能力。此外,機器人亦應該具備能基於影像及語音等感測資訊產生對環境的即時認知或推論。換言之,機器人要能評估使用者的狀態與語言指示並進而完成人機互動領域中的社交與服務的任務。因此,一個動態、長時間的決策系統能夠使社交陪伴機器人自動產生合適的動態任務與動作規劃 (Task And Motion Planning, TAMP)。另一方面,為了使社交機器人能夠趨向實際應用的階段甚至更加地普及於未來的居家環境當中,該決策系統必須將有限的運算資源以及有效率的運算列入考量。
在本篇研究當中,基於動態任務與動作規劃,我們透過機器人感知提出了一個以任務導向為主之導航決策系統來令機器人完成複雜的動態多社交任務。為了組織這些社交任務,我們提出了一個具有隨時間遞減獎勵機制的指令架構。此外,我們將室內環境模擬成圖以定位指令,並提出一個相對應之動態任務規劃演算法。該演算法藉由最佳化累積獎勵使得機器人能同時考量指令優先度以及總執行時間。至於感知部分,視覺上除了人物定位及辨識之外,我們提出一個階層式子系統來辨識人類行為,並在聽覺上設計一個結合語音與情緒辨識的子系統。在有限運算資源之下,本系統致力於結合深度學習框架與啟發式演算法以同步處理感知與決策資訊。得力於本系統,社交型機器人有能力滿足每位使用者的需求,並在多人環境中充分展現出有效率的人機互動。 | zh_TW |
dc.description.abstract | In recent years, research related to social and companion robots has gradually increased, showing its importance in the field of daily healthcare and human companionship. These robots also demonstrate potential applications, especially in societies where the elderly population grows year by year. For robots to assist family members and elders in a household environment, the prerequisite capabilities are robust localization, navigation, and sensing. In addition, the robots should be capable of perceiving the environment and human beings based on visual and audio sensor data. In other words, robots should estimate human status and understand verbal commands so as to complete social and service tasks in the area of intelligent human-robot interaction. More practically, a dynamic, anytime decision-making system is necessary for social and companion robots to generate adequate task and motion planning (TAMP) over a long period of time. On the other hand, for robots to be widely deployed in the future, efficient calculation under limited computational resources should be taken into consideration while designing the overall system.
In this thesis, inspired by the dynamic TAMP framework, we propose a novel task-oriented navigation system that enables robots to achieve social interaction tasks with the help of perception. To organize these social tasks, we propose an instruction structure whose rewards decay according to priority and time. Moreover, we model the indoor scenario as a graph on which instructions are allocated, and propose a task planning algorithm that considers not only the priorities among multiple tasks but also time efficiency by optimizing the accumulated reward. As for the perception that helps assign instruction priorities, we propose a sub-system for human localization, identification, and framewise hierarchical activity recognition in the visual aspect. For verbal perception, we design a sub-system that understands human words as well as sentiments. Note that under limited computational speed and resources, the system aims to perform perception and decision making simultaneously using both deep learning modules and heuristic algorithms. With the help of our system, the social robot not only meets human requirements but also interacts with people efficiently in a multi-human environment, achieving sophisticated human-robot interaction (HRI). | en
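The abstract describes a planner that orders instructions on a graph so as to maximize an accumulated, time-decaying reward. The sketch below illustrates that idea only; the instruction names, the linear decay model, and the brute-force search over visiting orders are illustrative assumptions, not the algorithm developed in the thesis (which the table of contents indicates is a dedicated heuristic with a correctness analysis).

```python
from itertools import permutations

def decayed_reward(base, rate, t):
    """Reward for serving an instruction at time t, floored at zero
    (illustrative linear decay: higher-priority tasks decay faster)."""
    return max(0.0, base - rate * t)

def plan(instructions, travel_time, start):
    """Brute-force search over visiting orders on a small graph.

    instructions: dict node -> (base_reward, decay_rate)
    travel_time:  dict (a, b) -> time to move from node a to node b
    Returns (best_order, best_accumulated_reward).
    """
    best_order, best_reward = None, float("-inf")
    for order in permutations(instructions):
        t, pos, total = 0.0, start, 0.0
        for node in order:
            t += travel_time[(pos, node)]          # travel to the instruction
            base, rate = instructions[node]
            total += decayed_reward(base, rate, t)  # reward shrinks with delay
            pos = node
        if total > best_reward:
            best_order, best_reward = order, total
    return best_order, best_reward

# Example: a nearby low-priority task can be worth serving first when the
# far task's reward decays slowly enough.
instructions = {"A": (10.0, 1.0), "B": (5.0, 0.1)}
travel = {("S", "A"): 2.0, ("S", "B"): 1.0, ("A", "B"): 1.0, ("B", "A"): 1.0}
order, reward = plan(instructions, travel, "S")
```

The factorial-time enumeration is only viable for a handful of instructions; it serves to make the objective (accumulated decayed reward) concrete, which is the quantity the thesis's task planner optimizes more efficiently.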
dc.description.provenance | Made available in DSpace on 2021-07-11T15:12:16Z (GMT). No. of bitstreams: 1 ntu-108-R06921017-1.pdf: 5804044 bytes, checksum: ecfb28fe63b3a8a161b43383a842ef60 (MD5) Previous issue date: 2019 | en |
dc.description.tableofcontents | 口試委員會審定書 #
誌謝 I
摘要 II
ABSTRACT III
TABLE OF CONTENTS VI
LIST OF FIGURES IX
LIST OF TABLES XII
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Challenges 3
1.3 Contributions 3
1.4 Thesis Overview 4
Chapter 2 Background and Related Works 6
2.1 Robot Perceptions 6
2.1.1 Introduction to Robot Perceptions 6
2.1.2 Laser Perceptions: Simultaneous Localization and Mapping 7
2.1.3 Visual Perceptions: Object Detection and Object Affordance 8
2.1.4 Summary of Robot Perceptions 9
2.2 Task and Motion Planning (TAMP) 9
2.2.1 Introduction to TAMP 10
2.2.2 TAMP for Mobile Robots 12
2.2.3 TAMP for Human Robot Interaction (HRI) 13
2.2.4 Summary of TAMP 14
Chapter 3 Visual and Verbal Perceptions 15
3.1 Preliminary 16
3.1.1 You Only Look Once (YOLO) 16
3.1.2 OpenPose: Human Anatomic Skeleton Detection 18
3.1.3 Speech to Text and Emotion Recognition 21
3.2 Methodology for Visual Perception 22
3.2.1 Image Stitching 23
3.2.2 Human Localization 24
3.2.3 Human Identification 27
3.2.4 Framewise Hierarchical Human Activity Recognition 30
3.3 Methodology for Verbal Perception 36
3.4 Human ID Database Organization 37
Chapter 4 Dynamic Multi-Task Social Navigation 38
4.1 Preliminary 39
4.1.1 Laser-based SLAM: GMapping, AMCL, and Navigation Stack 39
4.1.2 Fundamentals of Complexity 41
4.1.3 Introduction to Value Iteration 43
4.2 Methodology 45
4.2.1 Transformation from Perceptions into Instructions 45
4.2.2 Problem Formulation for Task Planning 47
4.2.3 Algorithm for Task Planner 49
4.2.4 Correctness of the Proposed Task Planning Algorithm 55
4.2.5 Integration of Proposed TAMP 61
Chapter 5 Experimental Results 63
5.1 Environment Setup 63
5.2 Experiments: Visual Perception 64
5.2.1 Human Localization Evaluation 65
5.2.2 Pepper Image Testing Dataset 66
5.2.3 Human Identification Evaluation 68
5.2.4 Framewise Hierarchical Human Action Detection Evaluation 69
5.3 Experiments: Dynamic Multi-Task Social Navigation 73
5.3.1 Optimality Comparison of Task Planner Algorithm 73
5.3.2 Efficiency Comparison of Task Planner Algorithm 78
5.3.3 Real World Implementation and Analysis 80
5.3.4 Similarity Comparison of Task Planner Algorithm 82
Chapter 6 Conclusion and Future Works 84
REFERENCE 87 | |
dc.language.iso | en | |
dc.title | 使移動機器人執行動態多社交任務之最佳化導航系統 | zh_TW |
dc.title | Optimal Navigation System for a Mobile Robot to Execute Dynamical Multiple Social Tasks | en |
dc.type | Thesis | |
dc.date.schoolyear | 107-2 | |
dc.description.degree | 碩士 | |
dc.contributor.oralexamcommittee | 施吉昇(Chi-Sheng Shih),許永真(Yung-Jen Hsu),簡忠漢(Jong-Hann Jean),曾士桓(Shih-Huan Tseng) | |
dc.subject.keyword | 任務導向導航系統,動態任務與動作規劃,機器人感知,人機互動, | zh_TW |
dc.subject.keyword | Task-oriented Navigation System,Dynamic Task and Motion Planning,Robot Perception,Human Robot Interaction, | en |
dc.relation.page | 91 | |
dc.identifier.doi | 10.6342/NTU201902426 | |
dc.rights.note | 有償授權 | |
dc.date.accepted | 2019-08-05 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
Appears in collections: | 電機工程學系 (Department of Electrical Engineering)
Files in this item:
File | Size | Format | |
---|---|---|---|
ntu-108-R06921017-1.pdf (currently not authorized for public access) | 5.67 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.