Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/78600

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 黃漢邦 | zh_TW |
| dc.contributor.advisor | Han-Pang Huang | en |
| dc.contributor.author | 董士豪 | zh_TW |
| dc.contributor.author | Shi-Hao Dong | en |
| dc.date.accessioned | 2021-07-11T15:06:31Z | - |
| dc.date.available | 2024-08-13 | - |
| dc.date.copyright | 2019-08-23 | - |
| dc.date.issued | 2019 | - |
| dc.date.submitted | 2002-01-01 | - |
| dc.identifier.citation | [1] M. O. Avila and J. G. Arancibia, "Sensor Fusion System for Autonomous Localization of Mobile Robots," Proc. of 2017 Intelligent Systems Conference (IntelliSys), London, England, pp. 988-995, 2017.
[2] N. Bouadjenek, H. Nemmour, and Y. Chibani, "Fuzzy Integrals for Combining Multiple Svm and Histogram Features for Writer's Gender Prediction," IET Biometrics, Vol. 6, No. 6, pp. 429-437, 2017. [3] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, "Realtime Multi-Person 2d Pose Estimation Using Part Affinity Fields," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, pp. 7291-7299, 2017. [4] Y. W. Chao, Y. Liu, X. Liu, H. Zeng, and J. Deng, "Learning to Detect Human-Object Interactions," 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, pp. 381-389, 2018. [5] J. Chen, Y. Li, and F. Ye, "Uncertain Information Fusion for Gearbox Fault Diagnosis Based on Bp Neural Network and Ds Evidence Theory," Proc. of 2016 12th World Congress on Intelligent Control and Automation (WCICA), Guilin, China, pp. 1372-1376, 2016. [6] K. Chen, J. Pang, J. Wang, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Shi, and W. Ouyang, "Hybrid Task Cascade for Instance Segmentation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, pp. 4974-4983, 2019. [7] T. Chen, I. Goodfellow, and J. Shlens, "Net2net: Accelerating Learning Via Knowledge Transfer," arXiv preprint arXiv:1511.05641, 2015. [8] S.-Y. Chung and H.-P. Huang, "A Mobile Robot That Understands Pedestrian Spatial Behaviors," Proc. of 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, pp. 5861-5866, 2010. [9] J. Dai, Y. Li, K. He, and J. Sun, "R-Fcn: Object Detection Via Region-Based Fully Convolutional Networks," Proc. of Advances in neural information processing systems, Barcelona, Spain, pp. 379-387, 2016. [10] J. Deng, W. Dong, R. Socher, L.-J. Li, L. Kai, and F.-F. Li, "Imagenet: A Large-Scale Hierarchical Image Database," 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, pp. 248-255, 2009. [11] Z. Fang, J. Yuan, and N. 
Magnenat-Thalmann, "Understanding Human-Object Interaction in Rgb-D Videos for Human Robot Interaction," Proceedings of Computer Graphics International 2018, New York, NY, USA, pp. 163-167, 2018. [12] P. Felzenszwalb, D. McAllester, and D. Ramanan, "A Discriminatively Trained, Multiscale, Deformable Part Model," 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, pp. 1-8, 2008. [13] C. Gao, Y. Zou, and J.-B. Huang, "Ican: Instance-Centric Attention Network for Human-Object Interaction Detection," in arXiv preprint arXiv:1808.10437, 2018. [14] P. Gaussier, S. Moga, M. Quoy, and J.-P. Banquet, "From Perception-Action Loops to Imitation Processes: A Bottom-up Approach of Learning by Imitation," Applied Artificial Intelligence, Vol. 12, No. 7-8, pp. 701-727, 1998. [15] R. Girshick, "Fast R-Cnn," Proceedings of the IEEE international conference on computer vision, Washington, DC, USA, pp. 1440-1448, 2015. [16] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation," Proceedings of the IEEE conference on computer vision and pattern recognition, Columbus, OH, USA, pp. 580-587, 2014. [17] G. Gkioxari, R. Girshick, P. Dollar, and K. He, "Detecting and Recognizing Human-Object Interactions," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, pp. 8359-8367, 2018. [18] K. Grauman and T. Darrell, "The Pyramid Match Kernel: Discriminative Classification with Sets of Image Features," Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, Beijing, China, pp. 1458-1465 Vol. 2, 2005. [19] C. Gu, C. Sun, D. A. Ross, C. Vondrick, C. Pantofaru, Y. Li, S. Vijayanarasimhan, G. Toderici, S. Ricco, and R. Sukthankar, "Ava: A Video Dataset of Spatio-Temporally Localized Atomic Visual Actions," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, pp. 6047-6056, 2018. [20] A. 
Gupta, A. Kembhavi, and L. S. Davis, "Observing Human-Object Interactions: Using Spatial and Functional Compatibility for Recognition," IEEE Trans Pattern Anal Mach Intell, Vol. 31, No. 10, pp. 1775-89, Oct 2009. [21] S. Gupta and J. Malik, "Visual Semantic Role Labeling," arXiv preprint arXiv:1505.04474, 2015. [22] E. T. Hall, The Hidden Dimension: Man’s Use of Space in Public and Private, London, UK: 1966. [23] K. He and J. Sun, "Convolutional Neural Networks at Constrained Time Cost," Proceedings of the IEEE conference on computer vision and pattern recognition, Boston, MA, USA, pp. 5353-5360, 2015. [24] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," Proceedings of the IEEE conference on computer vision and pattern recognition, Las Vegas, NV, USA, pp. 770-778, 2016. [25] K. He, X. Zhang, S. Ren, and J. Sun, "Delving Deep into Rectifiers: Surpassing Human-Level Performance on Imagenet Classification," Proceedings of the IEEE international conference on computer vision, Santiago, Chile, pp. 1026-1034, 2015. [26] K. He, X. Zhang, S. Ren, and J. Sun, "Identity Mappings in Deep Residual Networks," Proc. of European conference on computer vision, Amsterdam, The Netherlands, pp. 630-645, 2016. [27] K. He, X. Zhang, S. Ren, and J. Sun, "Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition," IEEE transactions on pattern analysis and machine intelligence, Vol. 37, No. 9, pp. 1904-1916, 2015. [28] D. Helbing, F. Schweitzer, J. Keltsch, and P. Molnar, "Active Walker Model for the Formation of Human and Animal Trail Systems," Physical review E, Vol. 56, No. 3, p. 2527, 1997. [29] S. Heng and D. Yunfeng, "Research on Cooperative Control of Human-Computer Interaction Tools with High Recognition Rate Based on Neural Network," 2014 International Conference on Virtual Reality and Visualization, Shenyang, China, pp. 350-354, 2014. [30] B. Hillier and J. 
Hanson, The Social Logic of Space, 3rd Ed., Cambridge, UK: Cambridge University, 1984. [31] S. Hochreiter and J. Schmidhuber, "Long Short-Term Memory," Neural computation, Vol. 9, No. 8, pp. 1735-1780, 1997. [32] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely Connected Convolutional Networks," Proc. of Proceedings of the IEEE conference on computer vision and pattern recognition, Honolulu, HI, USA, pp. 4700-4708, 2017. [33] T. Huang, D. Koller, J. Malik, G. Ogasawara, B. Rao, S. J. Russell, and J. Weber, "Automatic Symbolic Traffic Scene Analysis Using Belief Networks," Proceeding of the Twelfth National Conference on Artificial Intelligence, Seattle, Washington, pp. 966–972, 1994. [34] Y. S. Huang and C. Y. Suen, "A Method of Combining Multiple Experts for the Recognition of Unconstrained Handwritten Numerals," IEEE Transactions on Pattern Analysis & Machine Intelligence, Vol. 17, No. 1, pp. 90-94, 1995. [35] S. Ioffe and C. Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift," arXiv preprint arXiv:1502.03167, 2015. [36] H. A. Jalab and H. K. Omer, "Human Computer Interface Using Hand Gesture Recognition Based on Neural Network," 2015 5th National Symposium on Information Technology: Towards New Smart World (NSITNSW), Riyadh, Saudi Arabia, pp. 1-6, 2015. [37] M. Janah and Y. Fujimoto, "Performance Analysis of an Indoor Localization and Mapping System Using 2d Laser Range Finder Sensor," IECON 2018 - 44th Annual Conference of the IEEE Industrial Electronics Society, Washington, DC, pp. 5463-5468, 2018. [38] T. Kanda, D. F. Glas, M. Shiomi, and N. Hagita, "Abstracting People's Trajectories for Social Robots to Proactively Approach Customers," IEEE Transactions on Robotics, Vol. 25, No. 6, pp. 1382-1396, 2009. [39] H. Knight and R. 
Simmons, "Expressive Motion with X, Y and Theta: Laban Effort Features for Mobile Robots," The 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, Scotland, pp. 267-273, 2014. [40] A. Kokkinou and D. A. Cranage, "Modeling Human Behavior in Customer-Based Processes: The Use of Scenario-Based Surveys," Proceedings of the 2011 Winter Simulation Conference (WSC), Phoenix, AZ, pp. 683-689, 2011. [41] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet Classification with Deep Convolutional Neural Networks," Proc. of Advances in neural information processing systems, Stateline, NV, USA, pp. 1097-1105, 2012. [42] P. A. Kumari and G. J. Suma, "An Experimental Study of Feature Reduction Using Pca in Multi-Biometric Systems Based on Feature Level Fusion," Proc. of 2016 International Conference on Advances in Electrical, Electronic and Systems Engineering (ICAEES), Putrajaya, Malaysia, pp. 109-114, 2016. [43] R. Laban, The Mastery of Movement, 3rd Ed., Hampshire, UK: Dance Books, 1950. [44] M. Lagarde, P. Andry, P. Gaussier, S. Boucenna, and L. Hafemeister, "Proprioception and Imitation: On the Road to Agent Individuation," Studies in Computational Intelligence, Vol. 264, No. 1, pp. 43-63, 2010. [45] P. Laroque, N. Gaussier, N. Cuperlier, M. Quoy, and P. Gaussier, "Cognitive Map Plasticity and Imitation Strategies to Improve Individual and Social Behaviors of Autonomous Agents," Paladyn, Journal of Behavioral Robotics, Vol. 1, No. 1, pp. 25-36, 2010. [46] S. Lazebnik, C. Schmid, and J. Ponce, "Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories," Proc. of 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), New York, NY, USA, Vol. 2, pp. 2169-2178, 2006. [47] J. Liang, E. Meyerson, B. Hodjat, D. Fink, K. Mutch, and R. Miikkulainen, "Evolutionary Neural Automl for Deep Learning," arXiv preprint arXiv:1902.06827, 2019. [48] D.-T. Lin and K.-Y. 
Huang, "Collaborative Pedestrian Tracking and Data Fusion with Multiple Cameras," IEEE Transactions on Information Forensics and Security, Vol. 6, No. 4, pp. 1432-1444, 2011. [49] M. Lin, Q. Chen, and S. Yan, "Network in Network," arXiv preprint arXiv:1312.4400, 2013. [50] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, "Feature Pyramid Networks for Object Detection," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, pp. 2117-2125, 2017. [51] T. Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft Coco: Common Objects in Context," Proc. of European conference on computer vision, Springer, Cham, Switzerland, pp. 740-755, 2014. [52] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "Ssd: Single Shot Multibox Detector," Proc. of European conference on computer vision, Amsterdam, The Netherlands, pp. 21-37, 2016. [53] Z. Liu, W. Wang, D. Chen, and G. von Wichert, "A Coherent Semantic Mapping System Based on Parametric Environment Abstraction and 3d Object Localization," 2013 European Conference on Mobile Robots, Barcelona, Spain, pp. 234-239, 2013. [54] Q. Lv, Y. Qiao, N. Ansari, J. Liu, and J. Yang, "Big Data Driven Hidden Markov Model Based Individual Mobility Prediction at Points of Interest," IEEE Transactions on Vehicular Technology, Vol. 66, No. 6, pp. 5204-5216, 2017. [55] W. S. McCulloch and W. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity," The bulletin of mathematical biophysics, Vol. 5, No. 4, pp. 115-133, 1943. [56] V. Nair and G. E. Hinton, "Rectified Linear Units Improve Restricted Boltzmann Machines," Proc. of Proceedings of the 27th international conference on machine learning (ICML-10), Haifa, Israel, pp. 807-814, 2010. [57] Y. Nesterov, "A Method of Solving a Convex Programming Problem with Convergence Rate O (1/K2)," Soviet Mathematics Doklady, Vol. 27, No. 2, pp. 372-376, 1983. 
[58] L. Ni and M. A. A. Aziz, "A Robust Deep Belief Network-Based Approach for Recognizing Dynamic Hand Gestures," 2016 13th International Bhurban Conference on Applied Sciences and Technology (IBCAST), Islamabad, pp. 199-205, 2016. [59] S. Park, T. Schops, and M. Pollefeys, "Illumination Change Robustness in Direct Visual Slam," 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, pp. 4523-4530, 2017. [60] C. Pohl and J. L. Van Genderen, "Review Article Multisensor Image Fusion in Remote Sensing: Concepts, Methods and Applications," International journal of remote sensing, Vol. 19, No. 5, pp. 823-854, 1998. [61] B. T. Polyak, "Some Methods of Speeding up the Convergence of Iteration Methods," USSR Computational Mathematics and Mathematical Physics, Vol. 4, No. 5, pp. 1-17, 1964. [62] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," Proceedings of the IEEE conference on computer vision and pattern recognition, Las Vegas NV, USA, pp. 779-788, 2016. [63] J. Redmon and A. Farhadi, "Yolo9000: Better, Faster, Stronger," Proceedings of the IEEE conference on computer vision and pattern recognition, Honolulu, HI, USA, pp. 7263-7271, 2017. [64] J. Redmon and A. Farhadi, "Yolov3: An Incremental Improvement," arXiv preprint arXiv:1804.02767, 2018. [65] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-Cnn: Towards Real-Time Object Detection with Region Proposal Networks," Proc. of Advances in neural information processing systems, Montreal, Canada, pp. 91-99, 2015. [66] G. Rizzolatti and L. Craighero, "The Mirror-Neuron System," Annual Review of Neuroscience, Vol. 27, No. 1, pp. 169-192, 2004. [67] R. B. Rusu, Z. C. Marton, N. Blodow, A. Holzbach, and M. Beetz, "Model-Based and Learned Semantic Object Labeling in 3d Point Cloud Maps of Kitchen Environments," 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, pp. 3601-3608, 2009. [68] D. 
Ruta and B. Gabrys, "An Overview of Classifier Fusion Methods," Computing and Information systems, Vol. 7, No. 1, pp. 1-10, 2000. [69] K. Safi, S. Mohammed, F. Attal, M. Khalil, and Y. Amirat, "Recognition of Different Daily Living Activities Using Hidden Markov Model Regression," 2016 3rd Middle East Conference on Biomedical Engineering (MECBME), Beirut, pp. 16-19, 2016. [70] K. Saha, P. Shah, S. Merchant, and U. Desai, "A Novel Multi-Focus Image Fusion Algorithm Using Edge Information and K-Mean Segmentation," Proc. of 2009 7th International Conference on Information, Communications and Signal Processing (ICICS), Macau, pp. 1-5, 2009. [71] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, "Overfeat: Integrated Recognition, Localization and Detection Using Convolutional Networks," arXiv preprint arXiv:1312.6229, 2013. [72] X. Shao, H. Zhao, K. Nakamura, K. Katabira, R. Shibasaki, and Y. Nakagawa, "Detection and Tracking of Multiple Pedestrians by Using Laser Range Scanners," Proc. of 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, pp. 2174-2179, 2007. [73] J. Shen, W. Yang, and Q. Liao, "Part Template: 3d Representation for Multiview Human Pose Estimation," Pattern Recognition, Vol. 46, No. 7, pp. 1920-1932, 2013. [74] L. Shen, S. Yeung, J. Hoffman, G. Mori, and L. Fei-Fei, "Scaling Human-Object Interaction Recognition through Zero-Shot Learning," 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, pp. 1568-1576, 2018. [75] Z. Shen, Z. Liu, J. Li, Y.-G. Jiang, Y. Chen, and X. Xue, "Dsod: Learning Deeply Supervised Object Detectors from Scratch," Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, pp. 1919-1927, 2017. [76] J. Shotton, A. W. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. 
Blake, "Real-Time Human Pose Recognition in Parts from Single Depth Images," Proceedings of the IEEE conference on computer vision and pattern recognition, Colorado Springs, CO, USA, 2011. [77] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," arXiv preprint arXiv:1409.1556, 2014. [78] S. Sreela and S. M. Idicula, "Action Recognition in Still Images Using Residual Neural Network Features," Procedia computer science, Vol. 143, pp. 563-569, 2018. [79] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: A Simple Way to Prevent Neural Networks from Overfitting," The Journal of Machine Learning Research, Vol. 15, No. 1, pp. 1929-1958, 2014. [80] R. K. Srivastava, K. Greff, and J. Schmidhuber, "Highway Networks," arXiv preprint arXiv:1505.00387, 2015. [81] M. Sugeno, "Theory of Fuzzy Integrals and Its Applications," Tokyo Institute of Technology, 1974. [82] R. Sun, X. Wang, and X. Yan, "Robust Visual Tracking Based on Extreme Learning Machine with Multiple Kernels Features Fusion," 2017 3rd IEEE International Conference on Computer and Communications (ICCC), Chengdu, China, pp. 2029-2033, 2017. [83] N. Suzuki, K. Hirasawa, K. Tanaka, Y. Kobayashi, Y. Sato, and Y. Fujino, "Learning Motion Patterns and Anomaly Detection by Human Trajectory Analysis," Proc. of 2007 IEEE International Conference on Systems, Man and Cybernetics, Montreal, Quebec, Canada, pp. 498-503, 2007. [84] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going Deeper with Convolutions," Proc. of Proceedings of the IEEE conference on computer vision and pattern recognition, Boston, MA, USA, pp. 1-9, 2015. [85] M. Takruri and A. Abubakar, "Bayesian Decision Fusion for Enhancing Melanoma Recognition Accuracy," Proc. of 2017 International Conference on Electrical and Computing Technologies and Applications (ICECTA), Ras Al Khaimah, pp. 1-4, 2017. [86] H. Tang, B. H. 
Tan, and R. Yan, "Robot-to-Human Handover with Obstacle Avoidance Via Continuous Time Recurrent Neural Network," 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, pp. 1204-1211, 2016. [87] B. Tong and Y. Liu, "An Speech and Face Fusion Recognition Method Based on Fuzzy Integral," Proc. of 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), Qingdao, China, pp. 1337-1342, 2016. [88] R. Toufiq and M. R. Islam, "Face Recognition System Using Pca-Ann Technique with Feature Fusion Method," Proc. of 2014 International Conference on Electrical Engineering and Information & Communication Technology, Dhaka, Bangladesh, pp. 1-5, 2014. [89] A. Truong, H. Boujut, and T. Zaharia, "A Gesture Expressive Model Based on Laban Qualities," 2014 IEEE Fourth International Conference on Consumer Electronics Berlin (ICCE-Berlin), Berlin, Germany, pp. 168-172, 2014. [90] J. R. Uijlings, K. E. Van De Sande, T. Gevers, and A. W. Smeulders, "Selective Search for Object Recognition," International journal of computer vision, Vol. 104, No. 2, pp. 154-171, 2013. [91] C. Uyulan, T. Erguzel, and E. Arslan, "Mobile Robot Localization Via Sensor Fusion Algorithms," 2017 Intelligent Systems Conference (IntelliSys), London, England, pp. 955-960, 2017. [92] S. H. Wang and H. P. Huang, "Construction of Human Behavior Cognitive Map for Robots," Master Thesis, Graduate Institute of Mechanical Engineering, National Taiwan University, 2018. [93] Z. Wang, P. Jensfelt, and J. Folkesson, "Building a Human Behavior Map from Local Observations," 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, pp. 64-70, 2016. [94] C. Wong, N. Houlsby, Y. Lu, and A. Gesmundo, "Transfer Learning with Neural Automl," Proc. of Advances in Neural Information Processing Systems, Quebec, Canada, pp. 8356-8365, 2018. [95] C. Yang and J. 
Song, "Research on Hepatitis Auxiliary Diagnosis Model Based on Fuzzy Integral and Ga-Bp Neural Network," Proc. of 2017 8th IEEE International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, pp. 664-667, 2017. [96] Y. Yang and D. Ramanan, "Articulated Pose Estimation with Flexible Mixtures-of-Parts," Proceedings of the IEEE conference on computer vision and pattern recognition, Colorado Springs, CO, USA, pp. 1385-1392, 2011. [97] M. Ye, W. Xianwang, R. Yang, R. Liu, and M. Pollefeys, "Accurate 3d Pose Estimation from a Single Depth Image," 2011 International Conference on Computer Vision, Barcelona, Spain, pp. 731-738, 2011. [98] C. Yingchun, S. Yu, and L. Ou, "Modification Algorithm of Ds Evidence Theory Based on the Evolution Function of Focal Elements' Energy," Proc. of 2017 3rd IEEE International Conference on Computer and Communications (ICCC), Chengdu, China, pp. 2200-2205, 2017. [99] C. Zhao, H. Hu, and D. Gu, "Building a Grid-Point Cloud-Semantic Map Based on Graph for the Navigation of Intelligent Wheelchair," 2015 21st International Conference on Automation and Computing (ICAC), Glasgow, Scotland, pp. 1-7, 2015. [100] H. Zhou and M. Jiang, "Content-Based Image Retrieval Based on Multi-Feature Fusion Optimized by Brain Storm Optimization," 2017 International Conference on Computing Intelligence and Information System (CIIS), Nanjing, China, pp. 72-78, 2017. [101] S. Zhu, C. Chen, J. Xu, X. Guan, L. Xie, and K. H. Johansson, "Mitigating Quantization Effects on Distributed Sensor Fusion: A Least Squares Approach," IEEE Transactions on Signal Processing, Vol. 66, No. 13, pp. 3459-3474, 2018. [102] B. Zoph and Q. V. Le, "Neural Architecture Search with Reinforcement Learning," arXiv preprint arXiv:1611.01578, 2016. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/78600 | - |
| dc.description.abstract | 隨著科學及技術的進步,機器人的應用會愈來愈廣泛,且需求會隨之增加。對機器人而言,為了使其更融入人類社會中,必須要理解環境並對某些抽象的規範有所認知,才能夠做出更適當的反應。透過建立環境模型來瞭解人類如何使用空間是認知機器人相當重要的一部分。此外,人會和物體有互動,不同物體含有不同的資訊,倘若機器人能夠理解這些較高階層的行為,人們與其互動會更為自然。本論文致力於人物互動行為構成的環境模型,為了辨識行為,結合彩色影像資訊與身體骨架點資訊進行分類,並提出一個被稱為動態密集連接卷積網路的神經網路架構。另外,應用拉邦動作分析與模糊積分於行為辨識,並使用物體偵測模型和深度資訊判斷有無人物互動行為,最後建立一種描述行為和環境的方法。藉由此環境模型,機器人能理解各個位置適合做哪些行為,並對做出禁止行為的人給予適當的反應。 | zh_TW |
| dc.description.abstract | With the progress of science and technology, the application of robots has become more extensive, and the demand for robots has increased. For robots to become more integrated into society, they must understand their environment and certain abstract norms so that they can interact with people more appropriately. Understanding how humans use space by building an environmental model is one of the principal aspects of cognitive robotics. People interact with objects, and different objects contain different types of information. If robots can understand these high-level behaviors, people can interact with robots more naturally. As a result, this thesis is devoted to constructing an environmental model composed of human–object interaction (HOI) information. To identify actions, we combine two action recognition models and propose a novel neural network called Dynamic DenseNet. In addition, Laban movement analysis and the fuzzy integral are employed for action recognition, and object detection models combined with depth information are used to determine whether an HOI occurs. Finally, a method for describing behavior and spatial information is established. With this environmental model, the robot can understand which behaviors are appropriate for each location and can respond appropriately to people performing prohibited behaviors. | en |
| dc.description.provenance | Made available in DSpace on 2021-07-11T15:06:31Z (GMT). No. of bitstreams: 1 ntu-108-R06522825-1.pdf: 5632889 bytes, checksum: ea5ba326a53d10f9d28081901f672f31 (MD5) Previous issue date: 2019 | en |
| dc.description.tableofcontents | 誌謝 i
摘要 iii
Abstract v
List of Tables ix
List of Figures xi
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Contributions 3
1.3 Organization of Thesis 5
Chapter 2 Human Action Recognition 7
2.1 Introduction 7
2.2 Dynamic DenseNet 10
2.2.1 DenseNet 11
2.2.2 Dynamic DenseNet 19
2.3 Skeleton-based Action Recognition 34
2.3.1 Laban Movement Analysis 35
2.3.2 Skeleton-based Classifier 38
2.4 Image-based Action Recognition 41
2.4.1 VCOCO Dataset 41
2.4.2 Image-based Classifier 43
2.5 Summary 46
Chapter 3 HOI Cognitive Map 47
3.1 Introduction 47
3.2 Object Detection 50
3.2.1 Object Detection Models 51
3.2.2 YOLO v3 55
3.3 HOI Cognitive Map 58
3.3.1 Fusion Algorithm 59
3.3.2 Human–Object Interactions 68
3.3.3 HOI Cognitive Map 74
3.4 Summary 81
Chapter 4 Experiments 83
4.1 Software Platform and Hardware Platform 83
4.2 Experiments for Human Action Recognition 85
4.2.1 Data Collection and Training Setup 85
4.2.2 Experimental Results 89
4.3 Experiments for HOI Cognitive Map 95
4.3.1 Human–Object Interactions 95
4.3.2 HOI Cognitive Map 97
4.3.3 Online Testing 100
4.4 Summary 107
Chapter 5 Conclusions and Future Works 109
5.1 Conclusions 109
5.2 Future Works 111
References 115 | - |
| dc.language.iso | en | - |
| dc.subject | 環境辨識模型 | zh_TW |
| dc.subject | 認知地圖 | zh_TW |
| dc.subject | 人物互動 | zh_TW |
| dc.subject | 服務型機器人 | zh_TW |
| dc.subject | Service Robot | en |
| dc.subject | Human Object Interaction | en |
| dc.subject | Environment Recognition System | en |
| dc.subject | Cognitive Map | en |
| dc.title | 機器人的人物互動行為認知地圖建構 | zh_TW |
| dc.title | Construction of Behavior Cognitive Map with Human Object Interactions for Robots | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 107-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 傅楸善;周瑞仁;劉益宏 | zh_TW |
| dc.contributor.oralexamcommittee | Chiou-Shann Fuh;Jui-Jen Chou;Yi-Hung Liu | en |
| dc.subject.keyword | 環境辨識模型,認知地圖,人物互動,服務型機器人, | zh_TW |
| dc.subject.keyword | Environment Recognition System,Cognitive Map,Human Object Interaction,Service Robot, | en |
| dc.relation.page | 124 | - |
| dc.identifier.doi | 10.6342/NTU201903540 | - |
| dc.rights.note | 未授權 | - |
| dc.date.accepted | 2019-08-14 | - |
| dc.contributor.author-college | 工學院 | - |
| dc.contributor.author-dept | 機械工程學系 | - |
| dc.date.embargo-lift | 2024-08-23 | - |
| Appears in Collections: | Department of Mechanical Engineering | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-107-2.pdf (Restricted Access) | 5.5 MB | Adobe PDF | |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.