Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51755
Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 連豊力(Feng-Li Lian) | |
| dc.contributor.author | Teng-Hsiang Yu | en |
| dc.contributor.author | 虞登翔 | zh_TW |
| dc.date.accessioned | 2021-06-15T13:48:02Z | - |
| dc.date.available | 2017-12-01 | |
| dc.date.copyright | 2015-12-01 | |
| dc.date.issued | 2015 | |
| dc.date.submitted | 2015-11-11 | |
| dc.identifier.citation | [1: Sahin & Guvenc 2007] H. Sahin and L. Guvenc, “Household robotics: autonomous devices for vacuuming and lawn mowing [Applications of control],” IEEE Control Systems Magazine, Vol. 27, No. 2, pp. 20-96, 2007.
[2: Merino et al. 2011] L. Merino, F. Caballero, J. R. Martinez-de-Dios, I. Maza, and A. Ollero, “An unmanned aircraft system for automatic forest fire monitoring and measurement,” Journal of Intelligent and Robotic Systems, Vol. 65, pp. 533-548, 2011.
[3: Lindermuth et al. 2011] M. Lindermuth, R. Murphy, E. Steimle, W. Armitage, K. Dreger, T. Elliot, M. Hall, D. Kalyadin, J. Kramer, M. Palankar, K. Pratt, and C. Griffin, “Sea Robot-assisted inspection,” IEEE Robotics & Automation Magazine, Vol. 18, No. 2, pp. 9-107, 2011.
[4: Fallon et al. 2012] M. F. Fallon, H. Johannsson, and J. J. Leonard, “Efficient Scene Simulation for Robust Monte Carlo Localization using an RGB-D Camera,” IEEE International Conference on Robotics and Automation (ICRA), pp. 1663-1670, 2012.
[5: Schauwecker & Zell 2013] K. Schauwecker and A. Zell, “On-Board Dual-Stereo-Vision for Autonomous Quadrotor Navigation,” International Conference on Unmanned Aircraft Systems (ICUAS), pp. 333-342, 2013.
[6: Nister et al. 2004] D. Nister, O. Naroditsky, and J. Bergen, “Visual Odometry,” in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, Vol. 1, pp. 652-659, June 27-July 2, 2004.
[7: Scaramuzza & Fraundorfer 2011] D. Scaramuzza and F. Fraundorfer, “Visual Odometry [Tutorial],” IEEE Robotics & Automation Magazine, Vol. 18, No. 4, pp. 80-92, December 2011.
[8: Duda & Hart 1972] R. O. Duda and P. E. Hart, “Use of the Hough transformation to detect lines and curves in pictures,” Communications of the ACM, Vol. 15, No. 1, pp. 11-15, 1972.
[9: Herisse et al. 2010] B. Herisse, S. Oustrieres, T. Hamel, R. Mahony, and F.-X. Russotto, “A general optical flow based terrain-following strategy for a VTOL UAV using multiple views,” IEEE International Conference on Robotics and Automation (ICRA), pp. 3341-3348, 2010.
[10: Mori & Scherer 2013] T. Mori and S. Scherer, “First Results in Detecting and Avoiding Frontal Obstacles from a Monocular Camera for Micro Unmanned Aerial Vehicles,” IEEE International Conference on Robotics and Automation (ICRA), pp. 1750-1757, 2013.
[11: Bay et al. 2008] H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, “Speeded-Up Robust Features (SURF),” Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 346-359, 2008.
[12: Fraundorfer et al. 2012] F. Fraundorfer, L. Heng, D. Honegger, G. H. Lee, L. Meier, P. Tanskanen, and M. Pollefeys, “Vision-Based Autonomous Mapping and Exploration Using a Quadrotor MAV,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4557-4564, 2012.
[13: Symington et al. 2010] A. Symington, S. Waharte, S. Julier, and N. Trigoni, “Probabilistic Target Detection by Camera-Equipped UAVs,” IEEE International Conference on Robotics and Automation (ICRA), pp. 4076-4081, 2010.
[14: Rodriguez et al. 2014] J. Rodriguez, C. Castiblanco, I. Mondragon, and J. Colorado, “Low-cost quadrotor applied for visual detection of landmine-like objects,” International Conference on Unmanned Aircraft Systems (ICUAS), pp. 83-88, 2014.
[15: Lim et al. 2012] H. Lim, H. Lee, and H. J. Kim, “Onboard Flight Control of a Micro Quadrotor Using Single Strapdown Optical Flow Sensor,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 495-500, 2012.
[16: Ghadiok et al. 2012] V. Ghadiok, J. Goldin, and W. Ren, “Autonomous Indoor Aerial Gripping Using a Quadrotor,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 495-500, 2012.
[17: Kim & Shim 2013] J. W. Kim and D. H. Shim, “A Vision-based Target Tracking Control System of a Quadrotor by using a Tablet Computer,” International Conference on Unmanned Aircraft Systems (ICUAS), pp. 1165-1172, 2013.
[18: Moravec 1980] H. Moravec, “Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover,” Tech. Report CMU-RI-TR-3, Carnegie-Mellon University, Robotics Institute, 1980.
[19: Harris & Stephens 1988] C. Harris and M. Stephens, “A combined corner and edge detector,” in Alvey Vision Conference, pp. 147-151, 1988.
[20: Shi & Tomasi 1994] J. Shi and C. Tomasi, “Good Features to Track,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 593-600, 1994.
[21: Rosten & Drummond 2006] E. Rosten and T. Drummond, “Machine learning for high-speed corner detection,” in Proceedings of the European Conference on Computer Vision (ECCV), Vol. 1, pp. 430-443, 2006.
[22: Tuytelaars & Mikolajczyk 2008] T. Tuytelaars and K. Mikolajczyk, “Local Invariant Feature Detectors: A Survey,” Foundations and Trends in Computer Graphics and Vision, Vol. 3, No. 3, pp. 177-280, 2008.
[23: Lowe 2004] D. G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110, 2004.
[24: Choi et al. 2011] J. Choi, W. Kim, H. Kong, and C. Kim, “Real-time Vanishing Point Detection Using the Local Dominant Orientation Signature,” 2011 3DTV Conference: The True Vision – Capture, Transmission and Display of 3D Video (3DTV-CON), pp. 1-4, May 16-18, 2011.
[25: Hartley & Zisserman 2004] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed., Cambridge University Press, pp. 239-260, 2004.
[26: Hartley & Sturm 1994] R. I. Hartley and P. Sturm, “Triangulation,” in ARPA Image Understanding Workshop, pp. 957-966, 1994.
[27: Wei et al. 2013] Y.-M. Wei, L. Kang, B. Yang, and L.-D. Wu, “Applications of structure from motion: a survey,” Journal of Zhejiang University, Vol. 14, No. 7, pp. 486-494, 2013.
[28: Newcombe et al. 2011] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison, “DTAM: Dense tracking and mapping in real-time,” in Proceedings of the International Conference on Computer Vision (ICCV), 2011.
[29: Lovegrove & Davison 2010] S. J. Lovegrove and A. J. Davison, “Real-time spherical mosaicing using whole image alignment,” in Proceedings of the European Conference on Computer Vision (ECCV), 2010.
[30: Baker & Matthews 2004] S. Baker and I. Matthews, “Lucas-Kanade 20 years on: A unifying framework: Part 1,” International Journal of Computer Vision (IJCV), pp. 221-255, 2004.
[31: Bailey & Durrant-Whyte 2006] T. Bailey and H. Durrant-Whyte, “Simultaneous Localization and Mapping (SLAM): Part II State of the Art,” IEEE Robotics & Automation Magazine, Vol. 13, No. 3, pp. 108-117, September 2006.
[32: Davison et al. 2007] A. Davison, I. Reid, N. D. Molton, and O. Stasse, “MonoSLAM: Real-time single camera SLAM,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 6, June 2007.
[33: Engelhard et al. 2011] N. Engelhard, F. Endres, J. Hess, J. Sturm, and W. Burgard, “Realtime 3-D visual SLAM with a hand-held camera,” in Proceedings of the RGB-D Workshop on 3D Perception in Robotics at the European Robotics Forum, Vasteras, Sweden, 2011.
[34: Zhang 2000] Z. Zhang, “A Flexible New Technique for Camera Calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 11, pp. 1330-1334, 2000.
[35: Engel et al. 2014] J. Engel, J. Sturm, and D. Cremers, “Scale-aware Navigation of a Low-cost Quadrocopter with a Monocular Camera,” Robotics and Autonomous Systems (RAS), Vol. 62, pp. 1646-1656, 2014.
[36: Piskorski et al. 2012] S. Piskorski, N. Brulez, P. Eline, and F. D’Haeyer, “AR.Drone Developer Guide, SDK 2.0,” 2012.
[37: Krajnik et al. 2011] T. Krajnik, V. Vonasek, D. Fišer, and J. Faigl, “AR-drone as a platform for robotic research and education,” in Proceedings of Research and Education in Robotics - EUROBOT 2011, Vol. 161, pp. 172-186, 2011.
[38: Hernandez et al. 2013] A. Hernandez, C. Copot, R. De Keyser, T. Vlas, and I. Nascu, “Identification and Path Following Control of an AR.Drone Quadrotor,” in Proceedings of the 17th International Conference on System Theory, Control and Computing (ICSTCC’13), pp. 583-588, 2013.
[39: Zachariah & Jansson 2011] D. Zachariah and M. Jansson, “Self-motion and wind velocity estimation for small-scale UAVs,” IEEE International Conference on Robotics and Automation (ICRA), pp. 1166-1171, 2011.
[40: Yi et al. 2014] G. Yi, L. Jianxin, Q. Hangping, and W. Bo, “Survey of Structure from Motion,” IEEE International Conference on Robotics and Automation (ICRA), pp. 1166-1171, 2011.
[41: Canny 1986] J. Canny, “A Computational Approach to Edge Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, pp. 679-698, 1986.
[42: Raveshiya & Borisagar 2012] H. Raveshiya and V. Borisagar, “Motion Estimation Using Optical Flow Concepts,” International Journal of Computer Technology and Applications, Vol. 3, pp. 696-700, 2012.
[43: Farneback 2003] G. Farneback, “Two-frame motion estimation based on polynomial expansion,” in Proceedings of the Scandinavian Conference on Image Analysis (SCIA), pp. 363-370, 2003.
[44: Brox & Malik 2011] T. Brox and J. Malik, “Large Displacement Optical Flow: Descriptor Matching in Variational Motion Estimation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 33, No. 3, pp. 500-513, 2011.
[45: Lucas & Kanade 1981] B. Lucas and T. Kanade, “An Iterative Image Registration Technique with an Application to Stereo Vision,” in Proceedings of the Seventh International Joint Conference on Artificial Intelligence, pp. 674-679, 1981.
[46: Borenstein & Koren 1991] J. Borenstein and Y. Koren, “The vector field histogram - fast obstacle avoidance for mobile robots,” IEEE Transactions on Robotics and Automation, Vol. 7, No. 3, pp. 278-288, 1991.
[47: Triggs et al. 2000] B. Triggs, P. McLauchlan, R. Hartley, and A. Fitzgibbon, “Bundle Adjustment - A Modern Synthesis,” Lecture Notes in Computer Science, Vol. 1883, pp. 298-372, 2000.
[48: Kendoul et al. 2009] F. Kendoul, I. Fantoni, and K. Nonami, “Optic flow-based vision system for autonomous 3D localization and control of small aerial vehicles,” Robotics and Autonomous Systems (RAS), Vol. 57, pp. 591-602, 2009.
[49: Lee & Yoon 2015] J.-K. Lee and K.-J. Yoon, “Real-time Joint Estimation of Camera Orientation and Vanishing Points,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1866-1874, 2015.
Books
[50: Ljung 1999] L. Ljung, System Identification: Theory for the User, NJ: Prentice Hall Information and System Sciences Series, 1999.
[51: Gonzalez & Woods 2008] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd adapted Ed., Editor: S. G. Miaou, Taiwan: Pearson, June 2008.
[52: Szeliski 2010] R. Szeliski, Computer Vision: Algorithms and Applications, Springer, 2010.
[53: Bradski & Kaehler 2008] G. Bradski and A. Kaehler, Learning OpenCV, 1st Ed., O’Reilly Media, 2008.
[54: Laganiere 2011] R. Laganiere, OpenCV 2 Computer Vision Application Programming Cookbook, 1st Ed., Editor: Neha Shetty, Packt Publishing Ltd., May 2011.
[55: Trucco & Verri 1998] E. Trucco and A. Verri, Introductory Techniques for 3-D Computer Vision, Prentice Hall, 1998.
Websites
[56: AR.Drone 2.0 from Parrot, Inc.] AR.Drone 2.0 Product Datasheet. (2012, June). In Parrot, Inc. Official Website. Retrieved April 15, 2015, from http://ardrone2.parrot.com
[57: Shimpuku 2013] N. Shimpuku (puku0x), (2013). CV Drone free software, https://github.com/puku0x/cvdrone | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51755 | - |
| dc.description.abstract | 相機定位應用在空中無人載具是近期一項熱門且應用廣泛的議題,諸如室內環境探勘、搜救以及空中抓取。由於相機僅提供色彩資訊,一般可以用兩個步驟來取得空間資訊以定位相機。首先在空間中放置已知相對距離的標誌來估算座標之間的轉換。另外可以藉由不同時間的影像來描述相機移動的過程,可以將其分為三個階段來討論,分別是關聯性估測、運動量估算以及定位最佳化。關聯性估測是利用搜尋環境中具有相關性的特徵資料來匹配影像,且使用匹配的結果來估算相機的運動達到定位的目的,並在最後使用多個估算的位置計算出最佳的相對位置。
首先在本文中實現的第一個步驟是設計特殊顏色的標誌以利影像上的特徵辨別,並將其設置在空間中已知的相對距離,從影像與空間之間的對應關係來估算出相機與空間之間的座標轉換關係,另外也針對相機定位及校正的結果做討論,並證實將鏡片的主點假設在影像平面的中心可以得到較佳的定位結果,再者,運用敏感度分析可以得知特徵點的像素誤差對定位結果所產生的影響。 另外一個提出的步驟是利用影像之間的稠密光流來描述不同時刻的相機位置。在本文中利用光流法具有不同方向的特性來分群,並設計一個邊界濾波器來取得可靠的光流群組,由於靜止物體的在影像中位移方向與相機移動方向相反,此時可以觀察到錯誤匹配的光流群組會先出現在影像上相機移動的方向,因此運用這個現象來估算相機垂直移動的變化量,並從理想的實驗場景中得知有使用邊界濾波器的估算結果較為準確。在空中無人載具的運動模型中,翻轉會造成光流圖有旋轉的軌跡,因此我們提出使用霍氏圓型轉換方法來獲得圓型子集合,並用每個子集合的資料特性設計線性及指數權重函數,經由理想實驗得知指數型權重函數可以獲得較正確的相機旋轉量。為了將這兩個步驟運用在真實飛行場景,我們另外對於光流圖做有效性的分析來確定取得的影像是否可靠,從分析的結果可以有效的判別出哪些光流向量是來自缺乏特徵的區域。最後從真實飛行實驗中得知錯誤匹配的光流出現位置不如預期,以及飛機傾斜所造成的水平移動並不在光流的旋轉軌跡假設當中,導致高度及旋轉角度變化估測不準確。 | zh_TW |
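The marker-based step summarized in the abstracts (color markers placed at known relative distances, with the camera pose recovered from the image-to-world correspondences) can be sketched roughly in C++ with OpenCV, the toolchain the reference list suggests the thesis builds on (OpenCV, cvdrone). This is only a minimal illustration under assumed values: the marker layout, HSV threshold, and intrinsic parameters below are placeholders, not the thesis's actual detection method or calibration.

```cpp
// Minimal sketch: estimate camera pose from color markers at known 3-D positions.
// All numeric values are placeholders for illustration only.
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

int main() {
    cv::Mat frame = cv::imread("frame.png");           // image containing the markers
    if (frame.empty()) return 1;

    // 1. Detect the colored markers (here: a simple HSV threshold and contour
    //    centroids; in practice the centroids must be ordered to match the
    //    world points below, which we simply assume here).
    cv::Mat hsv, mask;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(40, 80, 80), cv::Scalar(80, 255, 255), mask);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point2f> imagePts;
    for (const auto& c : contours) {
        cv::Moments m = cv::moments(c);
        if (m.m00 > 50.0)                               // ignore tiny blobs
            imagePts.emplace_back(m.m10 / m.m00, m.m01 / m.m00);
    }
    if (imagePts.size() != 4) { std::cerr << "need 4 markers\n"; return 1; }

    // 2. Known marker positions in the world frame (placeholder layout, meters).
    std::vector<cv::Point3f> worldPts = {
        {0.0f, 0.0f, 0.0f}, {0.5f, 0.0f, 0.0f}, {0.5f, 0.5f, 0.0f}, {0.0f, 0.5f, 0.0f}};

    // 3. Camera intrinsics: principal point assumed at the image center, as the
    //    abstract discusses (the focal length here is a placeholder).
    cv::Mat K = (cv::Mat_<double>(3, 3) << 700, 0, frame.cols / 2.0,
                                           0, 700, frame.rows / 2.0,
                                           0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(4, 1, CV_64F);        // assume no lens distortion

    // 4. Solve for the transformation between world and camera coordinates.
    cv::Mat rvec, tvec;
    cv::solvePnP(worldPts, imagePts, K, dist, rvec, tvec);

    // Camera position in the world frame: C = -R^T * t.
    cv::Mat R;
    cv::Rodrigues(rvec, R);
    cv::Mat camPos = -R.t() * tvec;
    std::cout << "camera position [m]: " << camPos.t() << std::endl;
    return 0;
}
```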
| dc.description.abstract | Using a camera to localize an Unmanned Aerial Vehicle (UAV) is a key technology that has been widely researched over the past decade and has many applications, such as indoor exploration, search and rescue, and aerial gripping. Because a camera provides only color information, spatial information is obtained in two steps to localize the camera. The first step is to place markers with known relative distances in the real world and estimate the transformation between coordinate systems. In addition, the camera motion can be estimated from images taken at different time steps; this process can be divided into three stages: correspondence evaluation, motion estimation, and local optimization. First, correspondence evaluation finds distinctive features and matches them across images. Second, the corresponding pairs are used to estimate the camera motion. Third, an optimization method is used to refine the localization result.
In this thesis, the first localization step is implemented by designing markers with particular colors so that they can be detected in the image plane, and placing them at known relative distances in the real world. The correspondences between image and spatial coordinates are used to estimate the transformation between the camera and the spatial coordinate system. By discussing the localization and calibration results, it is shown that placing the principal point at the center of the image plane makes the localization result more accurate, and a sensitivity analysis shows how the localization result is affected by feature detection errors. The second step uses dense optical flow to describe the correspondence between two images at different time steps. To reduce the amount of information, the flow vectors are segmented into non-overlapping blocks, and each block is classified into a group according to the mean angle of its flow vectors. Moreover, an edge filter is proposed to retain only the reliable blocks. Because the displacement of the static scene in the image is opposite to the motion direction of the camera, optical-flow outliers first appear on the side of the image plane toward which the camera is moving; the distribution of each group is therefore used to estimate the vertical motion of the camera. In the ideal experiment, the motion estimate obtained with the edge filter is more accurate. In addition, the roll rotation of the UAV produces a circular pattern in the optical flow, so the Hough circle transform is used to obtain circular subsets, and linear and exponential weighting functions are designed from the standard deviations of the subsets. The roll estimate obtained with the exponential weighting function is more accurate than the one obtained with the linear weighting function. To apply these two steps in a real-flight scenario, a validation analysis is used to evaluate whether the flow map is reliable; the analysis shows that the method can effectively identify textureless regions in the image plane. However, the estimation results in the real-flight experiments show that the optical-flow outliers do not appear as expected and that the assumed flow pattern does not account for the horizontal movement caused by the tilt of the quadrotor, which makes the altitude and roll estimates inaccurate. | en |
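The flow-based stage can likewise be sketched in C++/OpenCV as a rough illustration of the ideas summarized above: dense Farneback optical flow between two frames, a Canny-based edge filter that discards low-texture blocks, and a per-block mean flow angle used to group blocks by direction. The block size, thresholds, and number of direction groups are assumed values, not those used in the thesis.

```cpp
// Sketch: dense optical flow grouped into blocks by mean flow direction,
// with an edge-based filter that discards low-texture (unreliable) blocks.
// All sizes and thresholds are assumptions for illustration only.
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

int main() {
    cv::Mat prev = cv::imread("frame_t0.png", cv::IMREAD_GRAYSCALE);
    cv::Mat next = cv::imread("frame_t1.png", cv::IMREAD_GRAYSCALE);
    if (prev.empty() || next.empty()) return 1;

    // 1. Dense optical flow (Farneback): one 2-D flow vector per pixel.
    cv::Mat flow;  // CV_32FC2
    cv::calcOpticalFlowFarneback(prev, next, flow, 0.5, 3, 15, 3, 5, 1.2, 0);

    // 2. Edge map used as a simple per-block texture / reliability measure.
    cv::Mat edges;
    cv::Canny(prev, edges, 50, 150);

    const int B = 16;                  // block size in pixels (assumed)
    const int bins = 8;                // number of direction groups (assumed)
    std::vector<int> groupCount(bins, 0);

    for (int by = 0; by + B <= flow.rows; by += B) {
        for (int bx = 0; bx + B <= flow.cols; bx += B) {
            cv::Rect roi(bx, by, B, B);

            // Edge filter: keep the block only if it contains enough edge pixels.
            double edgeDensity = cv::countNonZero(edges(roi)) / double(B * B);
            if (edgeDensity < 0.05) continue;           // textureless -> unreliable

            // Mean flow vector of the block, then its direction angle.
            cv::Scalar mean = cv::mean(flow(roi));      // mean[0] = dx, mean[1] = dy
            float angle = cv::fastAtan2((float)mean[1], (float)mean[0]);  // degrees, 0..360

            // Classify the block into one of the direction groups.
            int g = (int)(angle / (360.0f / bins)) % bins;
            groupCount[g]++;
        }
    }

    // The distribution over direction groups is what a motion-estimation step
    // (e.g. the vertical-displacement estimate in the abstract) would work from.
    for (int g = 0; g < bins; ++g)
        std::cout << "group " << g << ": " << groupCount[g] << " blocks\n";
    return 0;
}
```

For the roll-rotation case described above, the same flow map would instead be sampled along circular subsets (for example, circles found with cv::HoughCircles, or simply concentric circles about the image center) and combined through a weighting function based on each subset's standard deviation; that weighting scheme is not reproduced here.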
| dc.description.provenance | Made available in DSpace on 2021-06-15T13:48:02Z (GMT). No. of bitstreams: 1 ntu-104-R02921068-1.pdf: 7976265 bytes, checksum: b548a38a14dcf0de73b61956fef8baca (MD5) Previous issue date: 2015 | en |
| dc.description.tableofcontents | 摘要 i
ABSTRACT ii
CONTENTS vi
LIST OF FIGURES viii
LIST OF TABLES xiii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Problem Formulation 4
1.3 Contributions 6
1.4 Organization of the Thesis 7
Chapter 2 Background and Literature Survey 8
2.1 Application of Quadrotor 8
2.2 Structure from Motion 11
2.3 Quadrotor Dynamic Model 18
Chapter 3 Related Algorithms 21
3.1 Camera Pin-hole Model 21
3.2 Optical Flow Estimation
3.2.1 Lucas-Kanade Optical Flow 23
3.2.2 Farneback Optical Flow Estimation 25
3.3 Edge Detection 27
3.3.1 Sobel Edge Detector 27
3.3.2 Canny Edge Detector 29
Chapter 4 Vision-Based Localization Method using Optical Flow Information 31
4.1 Marker-Based Camera Calibration and Localization 32
4.2 Keyframe Selection 38
4.3 Vertical Displacement Estimation using Weighting Blocks based on Edge Filter 50
4.4 Roll Angle Estimation using Weighting Circular Subsets based on Hough Circle Transform 61
4.5 Identification of Quadrotor Dynamics 68
Chapter 5 Experimental Result and Analysis 70
5.1 Hardware Platform: AR.Drone 70
5.1.1 Communication and Sensors 71
5.1.2 The Command Station and Control 72
5.2 Result of Camera Calibration and Localization 73
5.2.1 Experimental Setup 74
5.2.2 Camera Localization Result for Translation Case 82
5.2.3 Camera Localization Result for Rotation Case 86
5.2.4 Analysis of Localization Error 89
5.2.5 Sensitivity Analysis of Camera Localization 100
5.3 Result of Motion Estimation in Manual Scenario 113
5.3.1 Setup of Experimental Scenario for Motion Estimation 114
5.3.2 Estimation Result for Vertical Motion Case 116
5.3.3 Comparing the Estimation Result of Vertical Motion with Edge Filter 120
5.3.4 Estimation Process of Roll Rotation Case 126
5.3.5 Comparing the Estimation Results using Different Weighting Functions in Roll Estimation 131
5.4 Results of Keyframe Selection and Quadrotor Localization in Real-Flight Scenario 137
5.4.1 Altitude Estimation and Case Discussion for Two-Storey Scene 137
5.4.2 Localization Result and Case Discussion of Crabwise Motion 151
Chapter 6 Conclusions and Future Works 162
6.1 Conclusions 162
6.2 Future Works 164
References 165 | |
| dc.language.iso | en | |
| dc.subject | 動態估測 | zh_TW |
| dc.subject | 相機定位 | zh_TW |
| dc.subject | 相機校正 | zh_TW |
| dc.subject | 敏感度分析 | zh_TW |
| dc.subject | 稠密光流法 | zh_TW |
| dc.subject | 有效性分析 | zh_TW |
| dc.subject | camera calibration | en |
| dc.subject | motion estimation | en |
| dc.subject | validation analysis | en |
| dc.subject | dense optical flow | en |
| dc.subject | sensitivity analysis | en |
| dc.subject | Camera localization | en |
| dc.title | 利用可視標誌與稠密光流法之姿態估測及定位 | zh_TW |
| dc.title | Pose Estimation and Localization of Using Visual Markers and Their Dense Optical Flow | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 104-1 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 簡忠漢(Jong-Hann Jean), 李後燦(Hou-Tsan Lee) | |
| dc.subject.keyword | 相機定位,相機校正,敏感度分析,稠密光流法,有效性分析,動態估測 | zh_TW |
| dc.subject.keyword | Camera localization, camera calibration, sensitivity analysis, dense optical flow, validation analysis, motion estimation | en |
| dc.relation.page | 171 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2015-11-12 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
| Appears in Collections: | 電機工程學系 | |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-104-1.pdf (Restricted Access) | 7.79 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
