Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74937
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 連豊力 | |
dc.contributor.author | Jui-Che Wu | en |
dc.contributor.author | 吳睿哲 | zh_TW |
dc.date.accessioned | 2021-06-17T09:10:43Z | - |
dc.date.available | 2021-01-21 | |
dc.date.copyright | 2020-01-21 | |
dc.date.issued | 2019 | |
dc.date.submitted | 2019-08-15 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74937 | - |
dc.description.abstract | 本篇論文提出了一個基於單眼視覺來進行伺服控制的無人飛行器行人監視系統。此系統可以跟隨任何有紋理的物體,像是衣服上的圖案。所以此系統也可以被用來監視較易受傷害的人,像是小孩以及老人等。當他們發生危險狀況時,此監視系統便會發出警報來降低損傷。
這個系統的主要架構可以分成兩個部分:移動物體追蹤與自動跟隨目標物。物體追蹤演算法可以提供物體在影像中的位置,於是影像伺服控制器便可以根據此資訊來進行自動跟隨目標物。 在物體追蹤的部分,主要是使用基於特徵點方法的追蹤器來找出物體在二維影像中的位置,且為了達到實時的效果,本篇論文使用了一個較快的特徵點偵測方法。為了在目標物上能取得足夠的特徵點,光流演算法被用來追蹤特徵點,並且跟之前配對在目標物上的特徵點結合。結合後的特徵點使用了特徵點之間的幾何關係來預測物體的旋轉以及尺度,並且使用一個異常值移除演算法來決定目標物的中心點。為了減少影像模糊以及無人機突然震動造成的影響,本篇論文使用了卡爾曼濾波器來預測目標物的邊界框大小。 在目標物跟隨的部分,追蹤器提供的目標物資訊可以被用來當作姿態控制器的輸入量測值。首先,追蹤器預測的目標物邊界框大小可以用來控制無人機與目標物之間的相對距離,邊界框的中心點資訊可以用來維持目標物於前視相機影像的正中心。最後,為了解決單眼相機無法取得絕對尺度的問題,針孔相機模型結合了從超聲波取得的高度資訊來預測無人機與目標物之間的相對距離。於是無人機便可以根據使用者給的目標物與無人機之間的預期距離,來進行絕對距離的控制,解決了基於影像視覺伺服控制無法對絕對距離進行控制的問題。 本系統同時在模擬與真實環境下進行評估。模擬環境是在gazebo模擬器中進行操作,一個脆弱的人可能產生的移動行為像是走路與轉彎等…都會被測試。為了測試此系統的跟隨性能與穩健性,與在模擬中一樣的移動行為模式也會在室內場景中進行實驗。 | zh_TW |
dc.description.abstract | In this thesis, a UAV person surveillance system that mainly uses on-board monocular vision for image-based visual servoing (IBVS) control is proposed. The system can follow any textured object, such as a pattern on clothing. It can therefore be used to monitor vulnerable people, such as children and the elderly. Whenever such a person is at risk, the surveillance system sends an alarm to reduce harm.
The system is composed of two main processes: moving object tracking and autonomous target following. The moving object tracking algorithm locates the target in the image, and this target information is passed to the image-based visual servoing controller for autonomous target following. For object tracking, a keypoint-based tracker is designed to locate the object in the 2-D image space. To achieve real-time tracking performance, a fast keypoint detection method is used. To capture sufficient keypoints on the target, an optical flow algorithm also tracks keypoints, which are fused with the keypoints matched on the target. The fused keypoints are mainly used for scale and rotation estimation based on the geometric relationships between keypoints, after which the target center is estimated. To further reduce the influence of image blur and sudden movements of the drone, a Kalman filter predicts the size of the target bounding box. For autonomous target following, the target information provided by the tracker serves as the measurement for the pose controller. First, the size of the bounding box predicted by the tracker is used to control the relative distance between the drone and the target, and the center of the bounding box is used to keep the target at the center of the on-board camera's field of view. To overcome the absence of absolute scale in a purely monocular system, a pinhole camera model is combined with height information from the on-board ultrasound sensor to estimate the distance between the drone and the target. The drone can thus maintain a user-specified desired distance to the target, solving the problem that IBVS cannot control absolute distance. The proposed system is evaluated both in simulation and in experiments.
The simulation environment runs in the Gazebo simulator, where different motions a vulnerable person may perform, such as walking and turning, are tested. To further evaluate the following performance and robustness, the same cases are also tested in an indoor environment. | en
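The scale estimation from geometric relationships between keypoints described in the abstract can be illustrated with a short sketch (not the thesis implementation; the function name and inputs are assumptions): the scale change between frames is taken as the median ratio of pairwise distances between corresponding keypoints, which is robust to a few mistracked points.

```python
import itertools
import math
import statistics

def estimate_scale(prev_pts, curr_pts):
    """Median ratio of pairwise keypoint distances between two frames.

    A ratio near 1.0 means the target kept its apparent size; using the
    median makes the estimate robust to a few mistracked keypoints.
    """
    ratios = []
    for i, j in itertools.combinations(range(len(prev_pts)), 2):
        d_prev = math.dist(prev_pts[i], prev_pts[j])
        if d_prev > 0:
            ratios.append(math.dist(curr_pts[i], curr_pts[j]) / d_prev)
    return statistics.median(ratios) if ratios else 1.0
```

For example, if every keypoint moves twice as far from its neighbors between frames, the estimate is 2.0.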
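The Kalman-filter smoothing of the bounding-box size mentioned above can be sketched as a one-dimensional constant-state filter (a minimal illustration under assumed noise parameters `q` and `r`, not the thesis code):

```python
class ScalarKalman:
    """1-D Kalman filter treating the true value as roughly constant per step."""

    def __init__(self, x0, p0=1.0, q=0.01, r=1.0):
        self.x = x0  # state estimate (e.g. bounding-box height in pixels)
        self.p = p0  # estimate variance
        self.q = q   # process noise: how fast the true size may drift
        self.r = r   # measurement noise: how jittery the tracker output is

    def update(self, z):
        # Predict: the state model is identity, so only the variance grows.
        self.p += self.q
        # Correct: blend the prediction with the measurement z.
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)
        self.p *= 1.0 - k
        return self.x
```

Feeding jittery sizes such as 104, 96, 102 into a filter initialized at 100 yields estimates that stay near 100 instead of jumping with every frame, which is the behavior the thesis relies on under image blur and sudden drone motion.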
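The pinhole-model distance estimate can be illustrated with similar triangles (a sketch only; the focal length and target height values below are assumptions, and the thesis additionally fuses the ultrasound altitude to fix the absolute scale):

```python
def estimate_distance(focal_px, target_height_m, bbox_height_px):
    """Pinhole similar triangles: Z = f * H / h.

    focal_px        -- camera focal length in pixels
    target_height_m -- assumed real-world target height in meters
    bbox_height_px  -- tracked bounding-box height in pixels
    """
    if bbox_height_px <= 0:
        raise ValueError("bounding-box height must be positive")
    return focal_px * target_height_m / bbox_height_px
```

With f = 700 px and a 1.7 m target whose bounding box is 200 px tall, the estimated distance is 700 * 1.7 / 200 = 5.95 m; as the box shrinks, the estimated distance grows, which is what drives the relative-distance controller.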
dc.description.provenance | Made available in DSpace on 2021-06-17T09:10:43Z (GMT). No. of bitstreams: 1 ntu-108-R06921095-1.pdf: 11692166 bytes, checksum: 0370bd0f0d7d0b56b3b3f9780b2ac67b (MD5) Previous issue date: 2019 | en |
dc.description.tableofcontents | 摘要 vi
ABSTRACT viii
CONTENTS xi
LIST OF FIGURES xiii
LIST OF TABLES xxi
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Problem Formulation 2
1.3 Contributions 4
1.4 Organization of the Thesis 5
Chapter 2 Background and Literature Survey 6
2.1 Unmanned Aerial System 6
2.2 Target Tracking by a UAV 8
Chapter 3 Related Algorithms 13
3.1 Pinhole Camera Model 13
3.2 Kalman Filter 15
3.3 Optical Flow 16
3.4 Principle of Quadrotor Dynamics 17
3.5 Coordinate Systems of Quadrotor 20
Chapter 4 Moving Object Tracking 22
4.1 System Architecture 22
4.2 Features Matching and Tracking 25
4.2.1 Global Matching 25
4.2.2 Local Tracking 28
4.2.3 Keypoints Fusion 31
4.3 Scale and Rotation Estimation 32
4.3.1 Scale Estimation 32
4.3.2 Rotation Estimation 34
4.4 Center Estimation 35
4.5 Kalman Filter for Size Correction 38
4.6 Summary 39
Chapter 5 Autonomous Target Following 42
5.1 System Architecture 42
5.2 Image Features Extraction 44
5.3 Relative Distance Control 46
5.3.1 Image Features Control 47
5.3.2 Attitude Control 49
5.3.3 Distance Calculation 50
5.3.4 Relative Depth Control 54
5.4 Improvement of IBVS Controllers 55
5.5 Summary 60
Chapter 6 Simulation and Experimental Results 62
6.1 Hardware and Software Platforms 62
6.1.1 Hardware Platform 62
6.1.2 Software Platform 63
6.2 Simulation and Experiment Setup 64
6.3 Simulation Result 69
6.3.1 Case Hover 70
6.3.2 Case Scale 77
6.3.3 Case Straight 92
6.3.4 Case Horizontal 101
6.3.5 Case Run 109
6.3.6 Case Turn 118
6.4 Experimental Result 127
6.4.1 Case Hover 127
6.4.2 Case Scale 139
6.4.3 Case Straight 165
6.4.4 Case Horizontal 179
6.4.5 Case Run 191
6.4.6 Case Turn 207
6.5 Object Tracking Result 221
6.5.1 Experimental Setup 221
6.5.2 Keypoints Detection 222
6.5.3 Size and Rotation Estimation 223
6.5.4 Center Estimation 225
6.5.5 Size Correction 226
6.6 Summary 227
6.6.1 Simulation 227
6.6.2 Experiment 232
6.6.3 Object Tracking 237
Chapter 7 Conclusions and Future Works 239
7.1 Conclusions 239
7.2 Future Works 240
References 241 | |
dc.language.iso | en | |
dc.title | 基於影像視覺伺服控制之無人飛行器行人監視系統 | zh_TW |
dc.title | Person Surveillance System by Unmanned Aerial Vehicles Using Image-Based Visual Servoing Control | en |
dc.type | Thesis | |
dc.date.schoolyear | 108-1 | |
dc.description.degree | 碩士 | |
dc.contributor.oralexamcommittee | 黃正民,李後燦 | |
dc.subject.keyword | 目標跟隨,物體追蹤,視覺伺服,四旋翼,姿態控制,無人飛行器 | zh_TW |
dc.subject.keyword | Target Following,Object Tracking,Visual Servoing,Quadrotor,Pose Control,Unmanned Aerial Vehicle | en |
dc.relation.page | 247 | |
dc.identifier.doi | 10.6342/NTU201903642 | |
dc.rights.note | 有償授權 | |
dc.date.accepted | 2019-08-15 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
Appears in Collections: | Department of Electrical Engineering
Files in This Item:
File | Size | Format |
---|---|---|---|
ntu-108-1.pdf (access currently restricted) | 11.42 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.