Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/27630

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 黃漢邦 | |
| dc.contributor.author | Cheng-Hsiu Li | en |
| dc.contributor.author | 李政修 | zh_TW |
| dc.date.accessioned | 2021-06-12T18:12:54Z | - |
| dc.date.available | 2013-09-19 | |
| dc.date.copyright | 2011-09-19 | |
| dc.date.issued | 2011 | |
| dc.date.submitted | 2011-08-08 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/27630 | - |
| dc.description.abstract | 機器人不只是實驗室的研究議題或工廠中的生產工具,也是人類生活中的一分子。排隊是人類生活中常見的社交行為之一,為了達到人類和機器人共存的社會,排隊是機器人一個重要的模式。機器人裝備攝影機和雷射測距儀,利用同步定位、建地圖和移動物體追蹤(SLAMMOT)和人類偵測(human detection),機器人能對環境做初步的了解。
我們提出三種排隊的模式,不可見限制下的排隊模式,部分可見限制下的排隊模式和在沒有限制環境下的排隊模式。利用曲線擬合(curve fitting),將外插值當作下個隊伍位子的預測,同時考慮個人空間、障礙物避免、隊伍形狀。在部分可見限制環境下,感測器難以偵測出所有限制,可藉由人類的行為了解其限制,並表現出人類可接受的行為。在沒有限制環境下,機器人參考人類的位置,形成一個線性規劃的問題,得到一個合適的地點做為等待依據,最後採用社會導航(social navigation)方式到達目的地。 | zh_TW |
| dc.description.abstract | Robots are studied not only for their uses in laboratories and factories, but with an eye to incorporating them into human daily life. One of the most common human social behaviors is joining a queue, so for robots to integrate and co-exist with humans, queuing is an important skill to acquire. Given a camera and a laser rangefinder, the robot can map its environment, using Simultaneous Localization and Mapping together with Moving Object Tracking (SLAMMOT) and a human detection algorithm to build a preliminary understanding of that environment.
Three categories of queuing model are proposed: with visible and invisible constraints, with partially visible constraints, and with no constraints in the environment. To decide the position to take relative to the rest of the queue, the algorithm uses curve fitting to extrapolate the next queue position, taking into account invisible constraints such as personal space as well as visible ones such as obstacles and the direction of the queue. Where constraints are only partially visible, like stretch barriers, human behavior provides a cue to identify constraints that are hard for sensors to detect: the robot observes what the surrounding humans are doing so that it can infer the constraints and behave in a way they find acceptable. In an unconstrained environment, the robot uses the positions of human queue members as a reference, solving a linear programming problem to decide where to stand and the socially acceptable way to move toward it. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-12T18:12:54Z (GMT). No. of bitstreams: 1 ntu-100-R98522809-1.pdf: 7936380 bytes, checksum: cffe83dac05a1eed6bdebe9b4c6c86a0 (MD5) Previous issue date: 2011 | en |
| dc.description.tableofcontents | 摘要 iii
Abstract v
Content vii
List of Tables ix
List of Figures xi
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Related Works 2
1.3 Objectives and Contributions 3
1.4 Thesis Organization 4
Chapter 2 Background Knowledge 7
2.1 SLAMMOT 7
2.1.1 Simultaneous Localization and Mapping 8
2.1.2 Moving Object Tracking 12
2.2 Social Navigation with GSE and SSE 15
2.3 Human Detection 19
2.4 Linear Programming 22
Chapter 3 A Social Robot Stands in Line 27
3.1 Introduction 27
3.2 A Social Robot Queuing in Line with Visible and Invisible Constraints 29
3.2.1 Static Obstacles 29
3.2.2 Personal Space 31
3.2.3 Constant-line Steering 33
3.2.4 Special Case: Pedestrian Route Constraint 34
3.2.5 Using Curve Fitting as a Prediction Tool 35
3.3 A Social Robot Queuing in Line with Partially Visible Constraints 36
3.3.1 Environment Understanding from Pedestrian Behavior 37
3.4 Summary 40
Chapter 4 A Social Robot Waits in an Unconstrained Environment 41
4.1.1 Introduction 41
4.1.2 ROI (Region of Interest) 42
4.1.3 Repulsive Force 43
4.1.4 Attractive Force 45
4.1.5 Planning Cost 46
4.1.6 Optimization with Linear Programming 47
4.2 Data Collection 50
4.3 Summary 54
Chapter 5 Simulations and Experiments 55
5.1 Software Platform 55
5.2 Hardware Platform 56
5.3 Simulations and Experimental Results 56
5.3.1 Experiment I 57
5.3.2 Experiment II 59
5.3.3 Experiment III 64
Chapter 6 Conclusions and Future Works 69
6.1 Conclusions 69
6.2 Future Works 70
References 73 | |
| dc.language.iso | en | |
| dc.subject | 線性規劃 | zh_TW |
| dc.subject | 排隊 | zh_TW |
| dc.subject | 行動式機器人 | zh_TW |
| dc.subject | Linear Programming | en |
| dc.subject | Queuing | en |
| dc.subject | Mobile Robot | en |
| dc.title | 行動式機器人排隊模式 | zh_TW |
| dc.title | Queuing Model for a Mobile Robot | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 99-2 | |
| dc.description.degree | Master | |
| dc.contributor.oralexamcommittee | 羅仁權,李蔡彥 | |
| dc.subject.keyword | 排隊,行動式機器人,線性規劃 | zh_TW |
| dc.subject.keyword | Queuing, Mobile Robot, Linear Programming | en |
| dc.relation.page | 77 | |
| dc.rights.note | Paid authorization | |
| dc.date.accepted | 2011-08-08 | |
| dc.contributor.author-college | 工學院 | zh_TW |
| dc.contributor.author-dept | 機械工程學研究所 | zh_TW |
| Appears in Collections: | Department of Mechanical Engineering |
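The English abstract describes using curve fitting to extrapolate the next position in a queue while respecting personal space. A minimal sketch of that idea, not the thesis's actual implementation: the function name, the 0.8 m personal-space spacing, the arc-length parameterization, and the polynomial degree are all illustrative assumptions.

```python
import numpy as np

def predict_queue_position(positions, personal_space=0.8, degree=2):
    """Extrapolate the next standing spot along a fitted queue curve.

    positions: sequence of (x, y) coordinates of queue members,
               ordered from head to tail of the queue.
    """
    positions = np.asarray(positions, dtype=float)
    # Parameterize the queue by cumulative arc length s.
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(steps)])
    # Fit x(s) and y(s) separately, capping the degree by the data.
    deg = min(degree, len(positions) - 1)
    fx = np.polyfit(s, positions[:, 0], deg)
    fy = np.polyfit(s, positions[:, 1], deg)
    # Extrapolate one personal-space step past the tail of the queue.
    s_next = s[-1] + personal_space
    return np.array([np.polyval(fx, s_next), np.polyval(fy, s_next)])

# A straight queue along the x-axis, one person every 0.8 m:
queue = [(0.0, 0.0), (0.8, 0.0), (1.6, 0.0)]
print(predict_queue_position(queue))  # ≈ [2.4, 0.0]
```

Fitting x(s) and y(s) against arc length, rather than y against x, lets the same sketch handle curved or doubled-back queues as well as straight ones.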
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-100-1.pdf (restricted access) | 7.75 MB | Adobe PDF |
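The abstract also mentions formulating the choice of a waiting spot in an unconstrained environment as a linear programming problem. A minimal sketch of that formulation, under stated assumptions: the cost here is a toy L1 distance to an assumed goal point, and a single personal-space constraint is linearized as a half-plane. The thesis's actual cost terms (ROI, repulsive and attractive forces) are not reproduced.

```python
from scipy.optimize import linprog

# Variables z = [x, y, t], where t bounds the L1 distance
# |x - gx| + |y - gy| to an assumed goal point (gx, gy).
gx, gy = 2.0, 0.0

c = [0.0, 0.0, 1.0]  # minimize t
A_ub = [
    [ 1.0,  1.0, -1.0],  #  (x-gx) + (y-gy) <= t
    [ 1.0, -1.0, -1.0],  #  (x-gx) - (y-gy) <= t
    [-1.0,  1.0, -1.0],  # -(x-gx) + (y-gy) <= t
    [-1.0, -1.0, -1.0],  # -(x-gx) - (y-gy) <= t
    [ 1.0,  0.0,  0.0],  # personal-space half-plane: x <= 0.5
]
b_ub = [gx + gy, gx - gy, -gx + gy, -gx - gy, 0.5]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None)])
waiting_spot = res.x[:2]
print(waiting_spot)  # closest feasible spot to the goal: ≈ [0.5, 0.0]
```

Because both the L1 objective and the personal-space regions can be expressed with linear inequalities, the whole placement problem stays a single LP, which is what makes the approach fast enough to re-solve as people move.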
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.