NTU Theses and Dissertations Repository › 工學院 (College of Engineering) › 機械工程學系 (Department of Mechanical Engineering)
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/66483
Full metadata record
(DC field: value [language])
dc.contributor.advisor: 黃漢邦 (Han-Pang Huang)
dc.contributor.author: Chia-Hung Chen [en]
dc.contributor.author: 陳嘉宏 [zh_TW]
dc.date.accessioned: 2021-06-17T00:38:23Z
dc.date.available: 2012-02-16
dc.date.copyright: 2012-02-16
dc.date.issued: 2012
dc.date.submitted: 2012-01-30
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/66483
dc.description.abstract [zh_TW, translated]: In recent years, robotic technology, from industrial machines to commercial entertainment products, has gradually influenced our lives. Corporate and university research laboratories continue to develop robots for domestic, medical, and industrial purposes. To make robots more intelligent and cognitive, a machine vision system allows a robot to acquire more useful information, including an understanding of the surrounding scene, object recognition, and spatial relationships.
This dissertation develops a face recognition method and an object pose estimation algorithm to support autonomous robotic grasping. The proposed face recognition method uses an AAM to extract facial features and then recognizes faces with shape descriptors of the face. Assuming the 3D geometry of the object is known, the proposed pose estimation algorithm can correctly estimate the object's pose from the points tracked on the object by SIFT and the 3D point cloud of the object measured by stereo vision. We also establish a visual guidance framework that integrates object detection, object localization, pose estimation, trajectory planning, and a real robot arm, using vision to guide the arm to the target object.
Finally, the dissertation demonstrates two grasping scenarios with a dexterous robot arm, ADAM, which must grasp objects placed within reach in front of it. The demonstration shows that our robot arm can robustly and autonomously grasp an arbitrarily rotated object detected by SIFT in 3D space.
[zh_TW]
dc.description.abstract: In recent years, robotic technologies, from industrial machines to commercial entertainment products, have become increasingly influential in our lives. Corporate and university research labs continue to develop robots for domestic, medical, and industrial purposes. In efforts to make robots more intelligent and cognitive, robots have been developed to obtain useful information, including scene understanding and spatial relationships, from a machine vision system.
The objective of this dissertation is to develop a face recognition method and pose estimation algorithms for autonomous grasping. The proposed face recognition method uses an AAM to extract facial feature points and shape descriptors to recognize a face. We also demonstrate that the proposed pose estimation algorithm accurately computes an object's pose from the 2D points tracked on the object by SIFT and the 3D point cloud of the object measured by stereo vision, assuming that a 3D geometric model of the object is known a priori. Moreover, we establish a visual guidance framework that integrates object detection, object localization, pose estimation, path planning, and the real robot arm to guide the arm to the target.
Finally, we demonstrate two grasping scenarios with a dexterous arm, ADAM, in which an object in front of ADAM is grasped. This demonstration shows that our robot arm can robustly and autonomously grasp an arbitrarily rotated rigid object detected by SIFT in 3D space.
[en]
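The pose-estimation idea in the abstract, aligning 3D points measured by stereo vision with a known 3D model, can be sketched with the standard Kabsch/SVD rigid-alignment step. This is a generic illustration under synthetic data, not the dissertation's actual algorithm; `rigid_pose` and the toy point sets are illustrative assumptions.

```python
import numpy as np

def rigid_pose(model_pts, scene_pts):
    """Least-squares rigid transform (R, t) with scene ~ R @ model + t,
    computed by the Kabsch/SVD method. Both inputs are (N, 3) arrays of
    corresponding points."""
    cm, cs = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - cm).T @ (scene_pts - cs)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cs - R @ cm
    return R, t

# Synthetic check: rotate and translate a toy "model" point set, then
# recover the pose from the transformed copy (standing in for the
# stereo-measured point cloud).
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1.0]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1.0]])
t_true = np.array([0.2, -0.1, 0.5])
scene = model @ R_true.T + t_true
R_est, t_est = rigid_pose(model, scene)
```

In a real pipeline the correspondences would come from SIFT matches and stereo triangulation rather than a synthetic transform, and outliers would need a robust wrapper such as RANSAC.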
dc.description.provenance: Made available in DSpace on 2021-06-17T00:38:23Z (GMT). No. of bitstreams: 1; ntu-101-D94522002-1.pdf: 4839073 bytes, checksum: 5cf1fa0dd78e969789b1084d464b6ff3 (MD5). Previous issue date: 2012 [en]
dc.description.tableofcontents:
摘要 (Abstract in Chinese) i
Abstract iii
List of Tables vii
List of Figures ix
Nomenclature xiii
CHAPTER 1. Introduction 1
1.1. Machine Vision 1
1.2. Motivation 3
1.3. Objective and Contributions 5
1.4. The Framework of the Dissertation 6
CHAPTER 2. Face Detection and Recognition 11
2.1. Introduction 12
2.2. Facial Feature Points Extraction 14
2.3. Global Shape Features 21
2.4. Face Recognition Methods 22
2.5. Experiments 25
2.6. Summary 27
CHAPTER 3. Face Pose Estimation 29
3.1. Introduction 29
3.2. Motion Gradient Orientation of a Motion History Image 31
3.3. Face Pose Estimation with a COG of AAM Shape Features 33
3.4. Summary 35
CHAPTER 4. Object Detection and Recognition 37
4.1. Introduction 38
4.2. SIFT Feature Descriptors 40
4.2.1. Detection of Scale-Space Extrema 40
4.2.2. Accurate Keypoint Localization 43
4.2.3. Eliminating Edge Responses 44
4.2.4. Orientation Assignment 45
4.3. Object Detection 47
4.3.1. Object Detection based on HSV Color Space 47
4.3.2. Combination of SIFT and RANSAC for 2D Object Localization 47
4.4. Comparison between the SIFT and the SURF 50
CHAPTER 5. Object Localization and Pose Estimation 53
5.1. Introduction 53
5.2. Stereo Vision System 54
5.2.1. Camera Calibration 55
5.2.2. Image Rectification 57
5.2.3. Calculation of the Corresponding Point 58
5.2.4. Difference Aggregation 59
5.2.5. Optimization of Disparity Value 60
5.3. 3D Object Localization 60
5.4. 3D Object Pose Estimation 63
5.5. Summary 71
CHAPTER 6. Grasp Planning with Bi-directional RRT Planner 73
6.1. Introduction 73
6.2. Bi-directional RRT Planner 76
6.3. Application to Grasp Planning 78
6.4. Kinematics Analysis 80
6.4.1. Forward Kinematics 81
6.4.2. Inverse Kinematics 82
6.4.3. Singularity Avoidance 83
6.4.4. Joint Limit Avoidance 85
CHAPTER 7. Grasping a Known Object with a Robotic Arm System 87
7.1. Introduction 88
7.2. Software Framework 90
7.3. Hardware Framework 93
7.4. Grasp an Object Randomly Placed on the Box 94
7.5. Grasp an Object of a Random Pose 100
CHAPTER 8. Conclusions and Future Work 107
8.1. Conclusions 107
8.2. Future Work 108
8.2.1. Various Camera-Robot Configurations 108
8.2.2. Visual Servo Systems 109
References 112
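Chapter 6 of the table of contents covers grasp planning with a bi-directional RRT planner. The idea can be illustrated with a minimal 2D RRT-connect-style sketch; this is a generic toy, not the dissertation's planner, and `bi_rrt`, the sampling bounds, and the step size are all illustrative assumptions.

```python
import math
import random

def bi_rrt(start, goal, is_free, step=0.5, iters=2000, seed=0):
    """Minimal bidirectional RRT (in the spirit of RRT-connect) in 2D.
    One tree grows from `start` and one from `goal`; each iteration
    extends one tree toward a random sample, then greedily connects the
    other tree toward the new node. Returns a start-to-goal polyline,
    or None. `is_free(p)` is the caller's collision check."""
    rng = random.Random(seed)
    ta, tb = [(start, None)], [(goal, None)]   # trees: (point, parent index)
    swapped = False                            # does `ta` currently hold the goal tree?

    def extend(tree, target):
        # Step from the nearest tree node toward `target`; return the new node's index.
        i = min(range(len(tree)), key=lambda k: math.dist(tree[k][0], target))
        p = tree[i][0]
        d = math.dist(p, target)
        q = target if d <= step else (p[0] + step * (target[0] - p[0]) / d,
                                      p[1] + step * (target[1] - p[1]) / d)
        if not is_free(q):
            return None
        tree.append((q, i))
        return len(tree) - 1

    def connect(tree, target):
        # Keep extending toward `target` until it is reached or blocked.
        while True:
            i = extend(tree, target)
            if i is None:
                return None
            if math.dist(tree[i][0], target) < 1e-9:
                return i

    def path_to_root(tree, i):
        out = []
        while i is not None:
            out.append(tree[i][0])
            i = tree[i][1]
        return out

    for _ in range(iters):
        ia = extend(ta, (rng.uniform(-5.0, 5.0), rng.uniform(-5.0, 5.0)))
        if ia is not None:
            ib = connect(tb, ta[ia][0])
            if ib is not None:                        # trees met at ta[ia]
                half_a = path_to_root(ta, ia)[::-1]   # root -> meeting point
                half_b = path_to_root(tb, ib)[1:]     # past meeting point -> other root
                path = half_a + half_b
                return path[::-1] if swapped else path
        ta, tb = tb, ta                # alternate which tree grows first
        swapped = not swapped
    return None

# Plan in an empty 2D workspace where every point is collision-free.
path = bi_rrt((0.0, 0.0), (3.0, 0.0), lambda p: True)
```

For a real arm, the configuration space would be joint space rather than the plane, and `is_free` would query a collision checker against the environment model.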
dc.language.iso: en
dc.title: 三維物件辨識與姿態估測 [zh_TW]
dc.title: 3D Object Recognition and Pose Estimation [en]
dc.type: Thesis
dc.date.schoolyear: 100-1
dc.description.degree: 博士 (Doctoral)
dc.contributor.oralexamcommittee: 蔡清元 (Tsing-Iuan Tsay), 黃?哲 (Shiuh-Jer Huang), 林沛群 (Pei-Chun Lin), 傅楸善 (Chiou-Shann Fuh)
dc.subject.keyword: 人臉辨識, 姿態估測, 自主抓握, 軌跡規劃, 物件定位 [zh_TW]
dc.subject.keyword: Face Recognition, Pose Estimation, Autonomous Grasping, Path Planning, Object Localization [en]
dc.relation.page: 122
dc.rights.note: Authorized with payment (有償授權)
dc.date.accepted: 2012-01-31
dc.contributor.author-college: 工學院 (College of Engineering) [zh_TW]
dc.contributor.author-dept: 機械工程學研究所 (Graduate Institute of Mechanical Engineering) [zh_TW]
Appears in collections: 機械工程學系 (Department of Mechanical Engineering)

Files in this item:
File: ntu-101-1.pdf (not currently authorized for public access)
Size: 4.73 MB
Format: Adobe PDF


Except where otherwise noted, all items in this repository are protected by copyright, with all rights reserved.
