Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/56580

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 江茂雄(Mao-Hsiung Chiang) | |
| dc.contributor.author | SHENG-HSIANG KUNG | en |
| dc.contributor.author | 龔聖翔 | zh_TW |
| dc.date.accessioned | 2021-06-16T05:35:57Z | - |
| dc.date.available | 2017-08-17 | |
| dc.date.copyright | 2014-08-17 | |
| dc.date.issued | 2014 | |
| dc.date.submitted | 2014-08-12 | |
| dc.identifier.citation | [1] C. Harris and M. Stephens, 'A combined corner and edge detector,' presented at the Alvey Vision Conference, 1988.
[2] T. Lindeberg, 'Feature detection with automatic scale selection,' International Journal of Computer Vision, vol. 30, pp. 79-116, 1998.
[3] K. Mikolajczyk and C. Schmid, 'Indexing based on scale invariant interest points,' in Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on, 2001, pp. 525-531.
[4] D. G. Lowe, 'Object recognition from local scale-invariant features,' in Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, 1999, pp. 1150-1157.
[5] T. Kadir and M. Brady, 'Saliency, scale and image description,' International Journal of Computer Vision, vol. 45, pp. 83-105, 2001.
[6] F. Jurie and C. Schmid, 'Scale-invariant shape features for recognition of object categories,' in Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, 2004, pp. II-90-II-96 Vol. 2.
[7] K. Capek, 'Rossum’s Universal Robots,' 2001.
[8] I. Asimov, I, Robot: Random House LLC, 2004.
[9] X. Yin and M. Xie, 'Finger identification and hand posture recognition for human–robot interaction,' Image and Vision Computing, vol. 25, pp. 1291-1300, 2007.
[10] R. Kjeldsen and J. Kender, 'Toward the use of gesture in traditional user interfaces,' in Automatic Face and Gesture Recognition, 1996. Proceedings of the Second International Conference on, 1996, pp. 151-156.
[11] R. E. Kahn, M. J. Swain, P. N. Prokopowicz, and R. J. Firby, 'Gesture recognition using the Perseus architecture,' in Computer Vision and Pattern Recognition, 1996. Proceedings CVPR '96, 1996 IEEE Computer Society Conference on, 1996, pp. 734-741.
[12] J. Triesch and C. von der Malsburg, 'A gesture interface for human-robot-interaction,' in Automatic Face and Gesture Recognition, 1998. Proceedings. Third IEEE International Conference on, 1998, pp. 546-551.
[13] Microsoft. Available: http://www.microsoft.com
[14] OpenNI. Available: http://www.openni.org
[15] OpenKinect. Available: http://www.openkinect.org
[16] K. Berger, K. Ruhl, Y. Schroeder, C. Bruemmer, A. Scholz, and M. A. Magnor, 'Markerless Motion Capture using multiple Color-Depth Sensors.'
[17] T. Dutta, 'Evaluation of the Kinect sensor for 3-D kinematic measurement in the workplace,' Applied Ergonomics, vol. 43, pp. 645-649, 2012.
[18] T. Stoyanov, A. Louloudi, H. Andreasson, and A. J. Lilienthal, 'Comparative evaluation of range sensor accuracy in indoor environments.'
[19] K. Khoshelham, 'Accuracy analysis of Kinect depth data.'
[20] L.-W. Tsai and S. Joshi, 'Kinematics and optimization of a spatial 3-UPU parallel manipulator,' Journal of Mechanical Design, vol. 122, p. 439, 2000.
[21] J.-P. Merlet, Parallel Robots: Springer, 2006.
[22] M. Bouri and R. Clavel, 'The Linear Delta: Developments and Applications,' in Robotics (ISR), 2010 41st International Symposium on and 2010 6th German Conference on Robotics (ROBOTIK), 2010, pp. 1-8.
[23] Z. Zalevsky, A. Shpunt, A. Maizels, and J. Garcia, 'Method and system for object reconstruction,' ed: Google Patents, 2013.
[24] WIRED. Available: http://www.wired.com/
[25] P. Viola and M. Jones, 'Rapid object detection using a boosted cascade of simple features,' in Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on, 2001, pp. I-511-I-518 vol. 1.
[26] P. Y. Simard, L. Bottou, P. Haffner, and Y. LeCun, 'Boxlets: a fast convolution algorithm for signal processing and neural networks,' Advances in Neural Information Processing Systems, pp. 571-577, 1999.
[27] T. Lindeberg, 'Edge detection and ridge detection with automatic scale selection,' International Journal of Computer Vision, vol. 30, pp. 117-156, 1998.
[28] J. J. Koenderink, 'The structure of images,' Biological Cybernetics, vol. 50, pp. 363-370, 1984.
[29] T. Lindeberg, 'Scale-space for discrete signals,' Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 12, pp. 234-254, 1990.
[30] D. G. Lowe, 'Distinctive image features from scale-invariant keypoints,' International Journal of Computer Vision, vol. 60, pp. 91-110, 2004.
[31] M. Brown and D. G. Lowe, 'Invariant features from interest point groups,' presented at the BMVC, 2002.
[32] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, 'Speeded-up robust features (SURF),' Computer Vision and Image Understanding, vol. 110, pp. 346-359, 2008.
[33] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision: Cambridge University Press, 2003.
[34] R. I. Hartley, 'In defense of the eight-point algorithm,' Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 19, pp. 580-593, 1997.
[35] M. A. Fischler and R. C. Bolles, 'Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,' Communications of the ACM, vol. 24, pp. 381-395, 1981.
[36] J. jae Lee and G. Kim, 'Robust estimation of camera homography using fuzzy RANSAC,' in Computational Science and Its Applications – ICCSA 2007, ed: Springer, 2007, pp. 992-1002.
[37] XBOX. Available: http://www.xbox.com/en-CA/Kinect
[38] K. Imagawa, H. Matsuo, R.-i. Taniguchi, D. Arita, S. Lu, and S. Igi, 'Recognition of local features for camera-based sign language recognition system,' in Pattern Recognition, 2000. Proceedings. 15th International Conference on, 2000, pp. 849-853.
[39] G. Ariyanto, P. K. P. Lit, H.-W. Kwokt, and G. Yant, 'Hand gesture recognition using Neural Networks for robotic arm control,' in National Conference on Computer Science & Information Technology, Indonesia, 2007.
[40] MATLAB. Available: http://www.mathworks.com/help/images/reducing-the-number-of-colors-in-an-image.html
[41] BlackIce. Available: http://www.blackice.com/colorspaceHSI.htm
[42] B. D. Zarit, B. J. Super, and F. K. Quek, 'Comparison of five color models in skin pixel classification,' in Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, 1999. Proceedings. International Workshop on, 1999, pp. 58-63.
[43] A. Rosenfeld and J. L. Pfaltz, 'Sequential operations in digital picture processing,' Journal of the ACM (JACM), vol. 13, pp. 471-494, 1966.
[44] R. Fabbri, L. D. F. Costa, J. C. Torelli, and O. M. Bruno, '2D Euclidean distance transform algorithms: A comparative survey,' ACM Computing Surveys (CSUR), vol. 40, p. 2, 2008.
[45] R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB: Pearson Education India, 2004.
[46] R. C. Gonzalez and R. E. Woods, 'Digital image processing, 2nd ed.,' SL: Prentice Hall, 2002.
[47] J.-Y. Bouguet, 'Camera calibration toolbox for Matlab,' 2004.
[48] Z. Zhang, 'A flexible new technique for camera calibration,' Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 22, pp. 1330-1334, 2000.
[49] J. Stowers, M. Hayes, and A. Bainbridge-Smith, 'Altitude control of a quadrotor helicopter using depth map from Microsoft Kinect sensor,' presented at the Mechatronics (ICM), 2011 IEEE International Conference on, 2011. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/56580 | - |
| dc.description.abstract | 本文旨在以SURF與HRI影像演算法整合三軸角錐形氣壓並聯式機構機械臂,使機械臂可藉由影像辨識物件形狀及位置,自動完成取放作業之工作。本文採用加速強健特徵點演算法 (Speed up robust feature, SURF)來定義目標物體的特徵點,並且藉由比對當前畫面中的特徵點來判斷目標物是否存在於影像中。為了強化特徵比對結果並算出抓物控制所需的參考點,本論文採用隨機取樣篩選演算法 (RANdom Sample Consensus, RANSAC)來估測平面轉換矩陣 (Homography matrix)以準確的標出目標物的中心點。並利用最新發展出的遊戲設備,ASUS Xtion Pro Live深度攝影機能夠直接擷取目標點3-D空間座標。此外,本文並發展出一套座標估測的校正法,提高對於目標物座標估測之準確度。
人機互動是提供非受訓練的使用者能夠更容易、更有效率與機器互動的一種方法。本研究提出一套利用手勢辨識來操控機械臂的方法,使用網路攝影機取得影像後經過多個處理程序,達到辨識之功能。其中的影像處理包含膚色偵測、雜訊消除和形態學以計算手指數量,並將資訊傳給控制器,利用以上所提出的理論規劃機械臂端點之軌跡。最後,經過實驗驗證本文提出之方法的可行性,證實此系統可成功導引三軸式機械臂之端點順利的移動並拿取所設定的目標物。 | zh_TW |
| dc.description.abstract | The objective of this study is to develop the SURF and HRI image algorithms integrated with a three-axial pneumatic parallel manipulator. The manipulator can pick and place objects automatically using the feature information extracted from the image by the SURF algorithm, which is invariant to scale and rotation. The speed up robust feature (SURF) algorithm is used to define the features of a target object and to match features between the current image and the object database in order to confirm the target. To strengthen the feature-matching result and to calculate the reference point needed for grasp control, the RANSAC (RANdom SAmple Consensus) algorithm is adopted to estimate the planar transformation matrix (homography matrix) so that the center of the target can be marked accurately. The ASUS Xtion Pro Live depth camera, which can directly capture the 3-D coordinates of the target point, is used in this study. A coordinate-estimation calibration method is also developed to improve the accuracy of the target-location estimation.
Human-Robot Interaction (HRI) offers a way for untrained users to interact with robots more easily and efficiently. This study also presents a hand-gesture-recognition method for commanding the manipulator: images captured with a webcam pass through several processing stages, including skin-color detection to extract only the skin region of the hand, noise elimination, and morphological operations to determine the number of raised fingers. Once the finger count is determined, it is transmitted to the manipulator controller, and the end-effector then moves to the desired location according to that count. Finally, experiments in which the three-axial manipulator is integrated with the proposed recognition algorithms demonstrate that the methods can successfully recognize the target object and pick and place it. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-16T05:35:57Z (GMT). No. of bitstreams: 1 ntu-103-R01525023-1.pdf: 5112994 bytes, checksum: 20b9260f278257e4052837a5d21a2914 (MD5) Previous issue date: 2014 | en |
| dc.description.tableofcontents | 誌謝 (Acknowledgements)
中文摘要 (Chinese Abstract)
ABSTRACT
LIST OF FIGURES
LIST OF TABLES
Chapter 1 Introduction
1.1 Background
1.2 Literature Review
1.3 Motivation
1.4 Thesis Outline
Chapter 2 System Overview
2.1 Mechanism Description
2.2 Test Rig Layout
2.2.1 Overall Manipulator System
2.2.2 Camera System
Chapter 3 Object Recognition
3.1 Interest Point Detection
3.1.1 Integral Image
3.1.2 Hessian Matrix Based Interest Points
3.1.3 Scale Space Representation
3.1.4 Interest Point Localization
3.2 Interest Point Description
3.2.1 Orientation Assignment
3.2.2 Sum of Haar Wavelet Responses Descriptor
3.3 Feature Points Matching
3.3.1 Image Transformation
3.3.2 Exact Solution
3.3.3 Over-determined Solution
3.3.4 Normalized DLT
3.3.5 Robust Estimation using RANSAC
Chapter 4 Gesture Recognition
4.1 Skin Color Classification
4.2 Noise Rejection
4.3 Distance Transform
4.4 Morphology
4.4.1 Structural Element
4.4.2 Erosion
4.4.3 Dilation
Chapter 5 3D Object Localization
5.1 Camera Model
5.2 Calibration of Depth Camera
5.3 Obtain 3D Location via Depth Camera
5.4 Hand-eye Coordinates Calibration
Chapter 6 Analysis of Kinematics
6.1 Geometry of the Manipulator
6.2 Inverse Kinematic Analysis
6.3 Forward Kinematic Analysis
Chapter 7 Experiment
7.1 SURF Feature Detection
7.2 Feature Matching
7.3 Finger Count Appointment
7.4 Path Planning
7.5 Pick and Place Experiment
Chapter 8 Conclusions
REFERENCES | |
| dc.language.iso | en | |
| dc.subject | 並聯式機構機械臂 | zh_TW |
| dc.subject | 手勢辨識 | zh_TW |
| dc.subject | 隨機取樣篩選演算法 | zh_TW |
| dc.subject | 加速強健特徵點演算法 | zh_TW |
| dc.subject | 影像辨識 | zh_TW |
| dc.subject | hand gesture recognition | en |
| dc.subject | image recognition | en |
| dc.subject | parallel manipulator | en |
| dc.subject | pneumatic servo | en |
| dc.subject | speed up robust feature algorithm | en |
| dc.subject | random sample and consensus algorithm | en |
| dc.title | 以SURF與HRI影像演算法整合三軸角錐形氣壓並聯式機構機械臂之研究 | zh_TW |
| dc.title | The Integration of the SURF and HRI Image Algorithm with a Three-Axial Pyramidal Pneumatic Parallel Manipulator | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 102-2 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 陳義男(Yih-Nan Chen),林榮慶(Zone-Ching Lin),施明璋(Ming-Chang Shih),吳聰能(Tsung-Ng Wu) | |
| dc.subject.keyword | 影像辨識,並聯式機構機械臂,加速強健特徵點演算法,隨機取樣篩選演算法,手勢辨識 | zh_TW |
| dc.subject.keyword | image recognition,parallel manipulator,pneumatic servo,speed up robust feature algorithm,random sample and consensus algorithm,hand gesture recognition | en |
| dc.relation.page | 115 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2014-08-13 | |
| dc.contributor.author-college | 工學院 | zh_TW |
| dc.contributor.author-dept | 工程科學及海洋工程學研究所 | zh_TW |
| Appears in Collections: | Department of Engineering Science and Ocean Engineering | |
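
The English abstract above outlines an object-recognition pipeline: SURF keypoints from a template of the target are matched against the current frame, RANSAC-based homography estimation rejects outlier matches, and the homography is used to mark the target's center as the reference point for grasping. The following OpenCV sketch only illustrates that general idea and is not the implementation used in the thesis; the file names, Hessian threshold, ratio-test value, and RANSAC reprojection threshold are illustrative assumptions.

```python
import cv2
import numpy as np

# Reference image of the target object and the current scene (file names are placeholders).
template = cv2.imread("target_template.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)

# SURF detector (requires an opencv-contrib build); the Hessian threshold is illustrative.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_t, des_t = surf.detectAndCompute(template, None)
kp_s, des_s = surf.detectAndCompute(scene, None)

# Match descriptors and keep only pairs that pass Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_t, des_s, k=2)
        if m.distance < 0.7 * n.distance]

if len(good) >= 4:  # at least four correspondences are needed for a homography
    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects remaining outliers while estimating the planar homography.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Project the template centre into the scene to obtain the pick-up reference point.
    h, w = template.shape
    center = np.float32([[[w / 2.0, h / 2.0]]])
    center_in_scene = cv2.perspectiveTransform(center, H)
    print("Estimated target centre (pixels):", center_in_scene.ravel())
else:
    print("Target not found: too few good matches.")
```

In the system described by the abstract, the pixel coordinates found this way would then be converted to a 3-D position via the depth camera and the hand-eye calibration before being sent to the manipulator controller.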
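Similarly, the gesture-recognition stages listed in the abstract (skin-color detection, noise elimination, distance transform, and morphology to obtain the finger count) could be prototyped roughly as sketched below. This is a hedged illustration only: the HSV skin thresholds, kernel sizes, and the area heuristic for counting fingers are assumptions, not values from the thesis.

```python
import cv2
import numpy as np

frame = cv2.imread("hand_frame.png")  # placeholder for a captured webcam frame

# 1. Skin-colour classification in HSV space (threshold values are illustrative only).
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
skin = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))

# 2. Noise rejection: morphological opening removes speckles, closing fills small holes.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, kernel)
skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, kernel)

# 3. Distance transform: its maximum gives the palm centre and an approximate palm radius.
dist = cv2.distanceTransform(skin, cv2.DIST_L2, 5)
_, palm_radius, _, palm_center = cv2.minMaxLoc(dist)

# 4. Open with a structuring element wider than a finger so that only the palm survives,
#    then subtract it from the hand mask; what remains are the raised fingers.
k = max(3, int(palm_radius))
palm_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
palm = cv2.morphologyEx(skin, cv2.MORPH_OPEN, palm_kernel)
fingers = cv2.subtract(skin, palm)

# 5. Count the remaining connected components that are large enough to be fingers.
n_labels, _, stats, _ = cv2.connectedComponentsWithStats(fingers)
hand_area = max(1, cv2.countNonZero(skin))
finger_count = sum(1 for i in range(1, n_labels)
                   if stats[i, cv2.CC_STAT_AREA] > 0.05 * hand_area)
print("Palm centre:", palm_center, "raised fingers:", finger_count)
```

The finger count obtained this way would then serve as the command sent to the manipulator controller, mirroring the flow described in the abstract.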
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-103-1.pdf (Restricted Access) | 4.99 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.