Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/31318
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 連豊力(Feng-Li Lian) | |
dc.contributor.author | Jun-An Ding | en |
dc.contributor.author | 丁俊安 | zh_TW |
dc.date.accessioned | 2021-06-13T02:42:41Z | - |
dc.date.available | 2011-08-02 | |
dc.date.copyright | 2011-08-02 | |
dc.date.issued | 2011 | |
dc.date.submitted | 2011-08-01 | |
dc.identifier.citation | Books:
[1: Gonzalez & Woods 2008] R. C. Gonzalez and R. E. Woods, “Digital Image Processing,” Third Edition, Taiwan: Pearson, 2008.
[2: Bradski & Kaehler 2008] G. Bradski and A. Kaehler, “Learning OpenCV,” First Edition, O’Reilly Media, 2008.
[3: Thrun et al. 2005] S. Thrun, W. Burgard, and D. Fox, “Probabilistic Robotics,” London: The MIT Press, 2005.
[4: Su 2010] T. H. Su, “Variable Gain Based Object Tracking and Visual Servo Control with Pan Tilt Platform,” Master Thesis, National Taiwan University, Taipei, Taiwan, 2010.
Websites:
[5: Mean Absolute Error 2011] Mean Absolute Error, [online]. Available: http://en.wikipedia.org/wiki/Mean_absolute_error, Apr. 06, 2011.
[6: Odometry 2011] Odometry, [online]. Available: http://en.wikipedia.org/wiki/Odometry, Feb. 05, 2011.
Papers:
[7: Shibata & Kobayashi 2007] M. Shibata and N. Kobayashi, “Non-delayed Visual Tracking of a Moving Object with Target Speed Compensation,” in Proceedings of IEEE International Conference on Mechatronics, Kumamoto, Japan, pp. 1-6, May 8-10, 2007.
[8: Wilson et al. 1996] W. J. Wilson, C. C. W. Hulls, and G. S. Bell, “Relative End-Effector Control Using Cartesian Position-Based Visual Servoing,” IEEE Transactions on Robotics and Automation, Vol. 12, No. 5, pp. 684-696, Oct. 1996.
[9: Nakabo & Ishikawa 1998] Y. Nakabo and M. Ishikawa, “Visual Impedance Using 1 ms Visual Feedback System,” in Proceedings of IEEE International Conference on Robotics & Automation, Leuven, Belgium, pp. 2333-2338, May 16-20, 1998.
[10: Ito & Shibata 2009] M. Ito and M. Shibata, “Non-delayed Visual Tracking of Hand-eye Robot for a Moving Target Object,” in Proceedings of ICROS-SICE International Joint Conference, Fukuoka, Japan, pp. 4035-4040, Aug. 18-21, 2009.
[11: Shibata et al. 2010] M. Shibata, T. Sekita, H. Eto, and M. Ito, “Visual Tracking Control to Fast Moving Target for Stereo Vision Robot,” in Proceedings of IEEE International Workshop on Advanced Motion Control, Nagaoka, Japan, pp. 210-215, Mar. 21-24, 2010.
[12: Kinbara et al. 2006] I. Kinbara, S. Komada, and J. Hirai, “Visual Servo of Active Cameras and Manipulators by Time Delay Compensation of Image Features with Simple On-line Calibration,” in Proceedings of SICE-ICASE International Joint Conference, Busan, Korea, pp. 5317-5322, Oct. 18-21, 2006.
[13: Zhu et al. 2007] Z. Zhu, T. Oskiper, S. Samarasekera, R. Kumar, and H. S. Sawhney, “Ten-fold Improvement in Visual Odometry Using Landmark Matching,” in Proceedings of IEEE International Conference on Computer Vision, Rio de Janeiro, Brazil, pp. 1-8, Oct. 14-21, 2007.
[14: Garcia et al. 2007] R. Garcia, M. A. Sotelo, I. Parra, D. Fernández, and M. Gavilan, “2D Visual Odometry Method for Global Positioning Measurement,” in Proceedings of IEEE International Symposium on Intelligent Signal Processing, Alcala de Henares, Spain, pp. 1-6, Oct. 3-5, 2007.
[15: Kitt et al. 2010] B. Kitt, F. Moosmann, and C. Stiller, “Moving on to Dynamic Environments: Visual Odometry using Feature Classification,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, pp. 5551-5556, Oct. 18-22, 2010.
[16: Williams & Reid 2010] B. Williams and I. Reid, “On Combining Visual SLAM and Visual Odometry,” in Proceedings of IEEE International Conference on Robotics and Automation, Alaska, USA, pp. 3494-3500, May 3-8, 2010.
[17: Civera et al. 2008] J. Civera, A. J. Davison, and J. M. M. Montiel, “Inverse Depth Parametrization for Monocular SLAM,” IEEE Transactions on Robotics, Vol. 24, No. 5, pp. 932-945, Oct. 2008.
[18: Vatani et al. 2009] N. N. Vatani, J. Roberts, and M. V. Srinivasan, “Practical Visual Odometry for Car-like Vehicles,” in Proceedings of IEEE International Conference on Robotics and Automation, Kobe, Japan, pp. 3551-3557, May 12-17, 2009.
[19: Milford & Wyeth 2008] M. J. Milford and G. F. Wyeth, “Mapping a Suburb With a Single Camera Using a Biologically Inspired SLAM System,” IEEE Transactions on Robotics, Vol. 24, No. 5, pp. 1038-1053, Oct. 2008.
[20: Montiel et al. 2006] J. M. M. Montiel, J. Civera, and A. J. Davison, “Unified Inverse Depth Parametrization for Monocular SLAM,” in Robotics: Science and Systems, Philadelphia, USA, Aug. 16-19, 2006.
[21: Campbell et al. 2005] J. Campbell, R. Sukthankar, I. Nourbakhsh, and A. Pahwa, “A Robust Visual Odometry and Precipice Detection System Using Consumer-grade Monocular Vision,” in Proceedings of IEEE International Conference on Robotics and Automation, Barcelona, Spain, pp. 3421-3427, Apr. 18-22, 2005.
[22: Scaramuzza & Siegwart 2008] D. Scaramuzza and R. Siegwart, “Appearance-Guided Monocular Omnidirectional Visual Odometry for Outdoor Ground Vehicles,” IEEE Transactions on Robotics, Vol. 24, No. 5, pp. 1015-1026, Oct. 2008.
[23: Nister et al. 2004] D. Nister, O. Naroditsky, and J. Bergen, “Visual Odometry,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, pp. 652-659, Jun. 27-Jul. 2, 2004.
[24: Ukida 2010] H. Ukida, “Object Tracking System by Pan-Tilt Moving Camera and Robot Using Condensation Method,” in Proceedings of SICE Annual Conference on Instrumentation, Control and Information Technology, Taipei, Taiwan, pp. 99-104, Aug. 18-21, 2010.
[25: Thuilot et al. 2002] B. Thuilot, P. Martinet, L. Cordesses, and J. Gallice, “Position Based Visual Servoing: Keeping the Object in the Field of Vision,” in Proceedings of IEEE International Conference on Robotics and Automation, Washington, DC, USA, pp. 1624-1629, May 11-15, 2002.
[26: Cretual & Chaumette 2001] A. Cretual and F. Chaumette, “Application of Motion-Based Visual Servoing to Target Tracking,” International Journal of Robotics Research, Vol. 20, No. 11, pp. 878-890, Nov. 2001.
[27: Shibata & Honma 2002] M. Shibata and T. Honma, “3D Object Tracking on Active Stereo Vision Robot,” in Proceedings of IEEE International Workshop on Advanced Motion Control, Maribor, Slovenia, pp. 567-572, Jul. 3-5, 2002.
[28: Corke et al. 2004] P. Corke, D. Strelow, and S. Singh, “Omnidirectional Visual Odometry for a Planetary Rover,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, pp. 4007-4012, Sep. 28-Oct. 2, 2004.
[29: Hutchinson et al. 1996] S. Hutchinson, G. D. Hager, and P. I. Corke, “A Tutorial on Visual Servo Control,” IEEE Transactions on Robotics and Automation, Vol. 12, No. 5, pp. 651-670, Oct. 1996.
[30: Davison et al. 2007] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, “MonoSLAM: Real-Time Single Camera SLAM,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 6, pp. 1052-1067, Jun. 2007.
[31: Chaumette & Hutchinson 2006] F. Chaumette and S. Hutchinson, “Visual Servo Control Part I: Basic Approaches,” IEEE Robotics & Automation Magazine, Vol. 13, No. 4, pp. 82-90, Dec. 2006.
[32: Ishii et al. 1996] I. Ishii, Y. Nakabo, and M. Ishikawa, “Target Tracking Algorithm for 1 ms Visual Feedback System Using Massively Parallel Processing,” in Proceedings of IEEE International Conference on Robotics and Automation, Minnesota, USA, pp. 2309-2314, Apr. 22-28, 1996. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/31318 | - |
dc.description.abstract | 影像追蹤是讓目標物保持在影像中的指定位置,通常是在畫面的中心點。首先利用影像處理偵測出目標物在影像中的位置,並移動相機讓目標物在影像的中心點位置,這樣一來就達成影像追蹤。但是當目標物是動態物體時,且系統沒有預測目標物的動作,那麼目標物不會在影像的中心點位置,而一直會和影像中心點有距離差,這個距離差稱為影像誤差,因為目標物的動態所造成的影像誤差稱為追蹤延遲。因此本論文提出四種運動模型來預測目標物的動作,以克服追蹤延遲並降低影像誤差。最後實驗結果顯示有加入預測器的效能比沒有加入預測器的要好。
近年來對自動駕駛的研究越來越熱門,而自動駕駛的研究之一是正確估測出車子的絕對位置,可以利用影像里程計來估測出車子行徑的路徑,就可以得知車子的絕對位置。 影像里程計的目標是從錄下來的影像去估測出車子移動的路徑,其中相機裝在車子上。在車子移動時相機也在錄影,藉由分析影像上的變化,估測出車子的路徑。然而一般相機是投影的感測器(projective sensor),因此只能偵測出影像的變化,而不能偵測相機真正的移動路徑。所以本論文在影像的特定位置,利用平均絕對誤差找出像素的位移量,並把像素位移量轉換成相機的動作以估測出車子的路徑。實驗結果顯示使用提出的方式來估測路徑和真正車子的移動路徑很接近。 | zh_TW |
dc.description.abstract | Visual tracking aims to keep a target at a specific location in the image, usually the center of the frame, by moving the camera. First, the target's position in the image is detected by image processing; the camera is then moved to bring the target to the image center, achieving visual tracking. However, if the target is moving and the system does not predict its motion, the target never stays at the image center: there is always an offset between the target position and the center. This offset is called the image error, and the image error caused by the target's motion is called the tracking delay. This thesis therefore proposes four motion models to predict the target's motion, in order to overcome the tracking delay and reduce the image error. The experimental results show that the tracking performance of the predictive system is better than that of the non-predictive system.
Research on autonomous navigation has become increasingly popular in recent years. One issue in autonomous navigation is accurately estimating a vehicle's global location. By using visual odometry to estimate the vehicle's path, its global location can be obtained. The aim of visual odometry is to estimate the vehicle's path from captured images: a camera mounted on the vehicle records images while the vehicle moves, and the path is estimated from the changes between successive images. However, a standard camera is a projective sensor; it senses only image change, not the camera's actual motion. This thesis therefore detects the pixel displacement at specific image locations using the mean absolute error method, and then estimates the vehicle's path by converting the pixel displacement into camera motion. The experimental results show that the path estimated by the proposed method is close to the actual vehicle path. | en
dc.description.provenance | Made available in DSpace on 2021-06-13T02:42:41Z (GMT). No. of bitstreams: 1 ntu-100-R98921009-1.pdf: 11876319 bytes, checksum: 982503a1b03b3dd25075165263c4b19f (MD5) Previous issue date: 2011 | en |
dc.description.tableofcontents | 摘要 i
ABSTRACT iii
List of Figures vii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Problem Description 3
1.2.1 Visual Tracking 3
1.2.2 Visual Odometry 5
1.3 Contribution 6
1.4 Organization of this Thesis 8
Chapter 2 Literature Survey 11
2.1 Visual Tracking 11
2.2 Visual Odometry 14
Chapter 3 Visual Tracking Control 17
3.1 Block Diagram 18
3.2 Hardware 20
3.3 Image Processing 21
3.4 Relation between the Webcam and the PTU Unit 23
3.5 Controller Design 25
3.6 Global Image 28
3.7 Predictor Design 28
3.8 Experimental Results 30
3.8.1 Scenario 30
3.8.2 Tracking without Prediction 31
3.8.3 Tracking with Prediction 33
Chapter 4 Visual Odometry 35
4.1 Block Diagram 36
4.2 Estimating Camera Motion 37
4.2.1 Distance Displacement 37
4.2.2 Angular Displacement 42
4.3 Mobile Camera Path 45
4.4 Pixel Displacement 46
4.4.1 U-axis Pixel Displacement 47
4.4.2 V-axis Pixel Displacement 51
4.5 Post-processing of Pixel Displacement 57
4.6 Experimental Results 62
4.6.1 Platform and Scenario 62
4.6.2 Experimental Results 63
Chapter 5 Conclusion and Future Work 69
5.1 Conclusion 69
5.2 Future Work 70
References 73
Appendix 77
A.1 Another Two Experimental Results of Visual Odometry 77 | |
dc.language.iso | en | |
dc.title | 從影像中擷取動態資訊以達到追蹤控制與影像里程計的技術與發展 | zh_TW |
dc.title | Development of Visual Tracking Control and Visual Odometry from Dynamic Information Acquired through Sequential Images | en |
dc.type | Thesis | |
dc.date.schoolyear | 99-2 | |
dc.description.degree | 碩士 (Master) | |
dc.contributor.oralexamcommittee | 簡忠漢,李後燦,黃正民 | |
dc.subject.keyword | 影像追蹤,影像誤差,追蹤延遲,旋轉傾斜平台,影像里程計,絕對平均誤差,後處理 | zh_TW |
dc.subject.keyword | Visual tracking, image error, tracking delay, pan tilt unit, visual odometry, mean absolute error, post-processing | en |
dc.relation.page | 83 | |
dc.rights.note | 有償授權 (authorized with fee) | |
dc.date.accepted | 2011-08-01 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
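The English abstract above describes overcoming tracking delay by predicting the target's motion with motion models. As a hedged illustration only: the thesis's four motion models are not specified in this record, so the sketch below shows a generic constant-velocity predictor, with all function and variable names invented here.

```python
def predict_constant_velocity(positions, dt=1.0):
    """Predict the target's next image position from its last two
    observed positions, assuming constant velocity over one frame.

    positions: list of (x, y) image coordinates, oldest first.
    dt: time step between frames (same units as the prediction horizon).
    """
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    # Estimate velocity from the last two observations.
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    # Extrapolate one step ahead; the controller would aim the
    # pan-tilt unit at this predicted point instead of the stale one.
    return (x1 + vx * dt, y1 + vy * dt)
```

For example, a target observed at (0, 0) and then (2, 1) would be predicted at (4, 2) one frame later; feeding the predicted rather than the last measured position to the controller is what reduces the image error for a moving target.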
Appears in Collections: | Department of Electrical Engineering (電機工程學系)
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-100-1.pdf (currently not authorized for public access) | 11.6 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
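The visual odometry approach in the English abstract finds the pixel displacement at a specific image location by minimizing the mean absolute error between patches of consecutive frames. The following is a minimal sketch of that block-matching idea, not the thesis's implementation; the search here is one-dimensional along the image u axis, and all names and parameters are assumptions made for illustration.

```python
import numpy as np

def mean_absolute_error(patch_a, patch_b):
    """Mean absolute error between two equally sized image patches."""
    return np.mean(np.abs(patch_a.astype(float) - patch_b.astype(float)))

def pixel_displacement(prev_frame, curr_frame, row, col, size=16, search=8):
    """Estimate the horizontal pixel displacement of the patch at
    (row, col) by minimizing MAE over a 1-D search window."""
    template = prev_frame[row:row + size, col:col + size]
    best_dx, best_err = 0, float("inf")
    for dx in range(-search, search + 1):
        c = col + dx
        if c < 0 or c + size > curr_frame.shape[1]:
            continue  # candidate patch would fall outside the frame
        err = mean_absolute_error(template, curr_frame[row:row + size, c:c + size])
        if err < best_err:
            best_err, best_dx = err, dx
    return best_dx
```

The displacement returned by such a matcher would then be converted to camera motion via the camera's projection geometry, which is the step the thesis covers in Chapter 4; that conversion is not sketched here.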