Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92147

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 連豊力 | zh_TW |
| dc.contributor.advisor | Feng-Li Lian | en |
| dc.contributor.author | 張晁維 | zh_TW |
| dc.contributor.author | Chao-Wei Chang | en |
| dc.date.accessioned | 2024-03-07T16:17:56Z | - |
| dc.date.available | 2024-03-08 | - |
| dc.date.copyright | 2024-03-07 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-02-16 | - |
| dc.identifier.citation | [1: Yeager 2019] Charles Yeager. “The dolly shot: How it works and why it's powerful.” (2019), [Online]. Available: https://www.premiumbeat.com/blog/how-to-achieve-perfect-dolly-shot/.
[2: Newton Nordic 2023] Newton Nordic. “Telescopic camera crane: Newton stabilized head.” (2023), [Online]. Available: https://newtonnordic.com/telescopic-camera-crane-with-newton-stabilized-remote-head/. [3: Spidercam 2020] Spidercam. “Spidercam field.” (2020), [Online]. Available: https://www.spidercam.tv/spidercam-field/. [4: Wikipedia 2022] Wikipedia. “Beyond beauty taiwan from above.” (2022), [Online]. Available: https://en.wikipedia.org/wiki/Beyond_Beauty:_Taiwan_from_Above. [5: Aviation Safety Council 2018] Aviation Safety Council. “發布凌天航空公司 b-31118 飛航事故調查報告.” (2018), [Online]. Available: https://web.archive.org/web/20181104130119/https://www.asc.gov.tw/main_ch/docDetail.aspx?uid=227&pid=227&docid=1015. [6: 黃鈞浩 2020] 黃鈞浩. “無人空拍機怎麼拍才像電影?從《哈利波特》、《權力遊戲》等大片一探究竟.” (2020), [Online]. Available: https://www.thenewslens.com/article/142520. [7: New York City Drone Film Festival 2019] New York City Drone Film Festival. “2019 nycdff winners.” (2019), [Online]. Available: https://www.nycdronefilmfestival.com/winners. [8: Zero Zero Robotics 2019] Zero Zero Robotics. “Hover2: The drone that flies itself.” (2019), [Online]. Available: https://gethover.com/hover2. [9: KICKSTARTER 2018] KICKSTARTER. “Pitta - transformative autonomous 4k selfie drone.” (2018), [Online]. Available: https://www.kickstarter.com/projects/1627662609/pitta-transformative-autonomous-4k-selfie-drone. [10: Air selfie camera 2023] Air selfie camera. “The #1 tiktok selfie camera taking over the internet.” (2023), [Online]. Available: https://airselfiecamera.com/. [11: Laws and Regulations Database of The Republic of China (Taiwan) 2022] Laws and Regulations Database of The Republic of China (Taiwan). “Regulation of drone.” (2022), [Online]. Available: https://law.moj.gov.tw/ENG/LawClass/LawAll.aspx?pcode=K0090083. [12: Wikipedia 2023] Wikipedia. “Skyfall.” (2023), [Online]. Available: https://en.wikipedia.org/wiki/Skyfall. [13: Youtube 2013] Youtube. “Skyfall - opening scene: Motorbike chase (1080p).” (2013), [Online]. Available: https://www.youtube.com/watch?v=tHRLX8jRjq8. [14: BBC one 2023] BBC one. “Planet earth ii.” (2023), [Online]. Available: https://www.bbc.co.uk/programmes/p02544td. [15: Skynamic 2015] Skynamic. “Game of thrones commercial - making of.” (2015), [Online]. Available: https://www.youtube.com/watch?v=lR6QzBS5uNs. [16: HBO 2023] HBO. “Game of thrones.” (2023), [Online]. Available: https://www.hbo.com/game-of-thrones. [17: Wikipedia 2023] Wikipedia. “The amazing spider-man 2.” (2023), [Online]. Available: https://en.wikipedia.org/wiki/The_Amazing_Spider-Man_2. [18: YouTube 2017] YouTube. “Go behind the scenes of the amazing spider-man 2 (2014).” (2017), [Online]. Available: https://www.youtube.com/watch?v=OLJOCOSCC_g&t=249s. [19: Wikipedia 2023] Wikipedia. “Captain america civil war.” (2023), [Online]. Available: https://en.wikipedia.org/wiki/Captain_America:_Civil_War. [20: YouTube 2016] YouTube. “Marvel【美國隊長三:內戰】幕後拍攝花絮.” (2016), [Online]. Available: https://www.youtube.com/watch?v=KYSgWvHH3rw&t=85s. [21: Arijon, Daniel 1991] Arijon, Daniel, Grammar of the film language. Silman-James Press, 1991. [22: Wikipedia 2023] Wikipedia. “The tomorrow war.” (2023), [Online]. Available: https://en.wikipedia.org/wiki/The_Tomorrow_War. [23: YouTube 2020] YouTube. “The tomorrow war - behind the scenes.” (2020), [Online]. Available: https://www.youtube.com/watch?v=kB3VkLpWE6Y&t=36s. [24: Wikipedia 2023] Wikipedia. “Tenet (film).” (2023), [Online]. Available: https://en.wikipedia.org/wiki/Tenet_(film). 
[25: YouTube 2020] YouTube. “Tenet | try and keep up featurette.” (2020), [Online]. Available: https: //www.youtube.com/watch?v=V_X2jQx_ueA&t=164s. [26: MultiDrone Project 2019] MultiDrone Project. “Multidrone in short.” (2019), [Online]. Available: https://multidrone.eu/multidrone-in-short/. [27: RedDotDrone 2021] RedDotDrone. “Sports aerial filming - delivering an immersive experience to the audience.” (2021), [Online]. Available: https://reddotdrone.com/sports. [28: MultiDrone Project 2020] MultiDrone Project. “Multidrone feature.” (2020), [Online]. Available: https ://www.youtube.com/watch?v=iLs6Xo87j78&t=488s. [29: RedDotDrone 2021] RedDotDrone. “Automatic aerial filming with qzss - みどころ自動撮影実証実験 5 月 15 日.” (2021), [Online]. Available: https://www.youtube.com/watch?v=TcbMwUAybPQ. [30: Thompson and Bowen 2009] Roy Thompson and Christopher J. Bowen, Grammar of the shot, 2nd ed. Focal Press, 2009, ISBN: 978-0-240-52121-3. [31: Nägeli et al. 2017] Tobias Nägeli et al., “Real-time planning for automated multi-view drone cinematography,” ACM Trans. Graph., vol. 36, no. 4, pp. 1–10, Jul. 20, 2017, ISSN: 0730-0301, 1557-7368. DOI: 10.1145/3072959.3073712. [32: Nageli et al. 2017] Tobias Nageli et al., “Real-time motion planning for aerial videography with dynamic obstacle avoidance and viewpoint optimization,” IEEE Robotics and Automation Letters, vol. 2, no. 3, pp. 1696–1703, Jul. 2017, ISSN: 2377-3766, 2377-3774. DOI: 10.1109/LRA.2017.2665693. [33: Bonatti et al. 2020] Rogerio Bonatti et al., “Autonomous drone cinematographer: Using artistic principles to create smooth, safe, occlusion-free trajectories for aerial filming,” in Proceedings of the 2018 International Symposium on Experimental Robotics, Jing Xiao, Torsten Kröger, and Oussama Khatib, Eds., vol. 11, Springer International Publishing, 2020, pp. 119–129, ISBN: 978-3-030-33949-4 978-3-030-33950-0. DOI:10.1007/978-3-030-33950-0_11. [34: Mellinger and Kumar 2011] Daniel Mellinger and Vijay Kumar, “Minimum snap trajectory generation and control for quadrotors,” in 2011 IEEE International Conference on Robotics and Automation, Shanghai, China: IEEE, May 2011, pp. 2520–2525, ISBN: 978-1-61284-386-5. DOI: 10.1109/ICRA.2011.5980409. [35: Mueller, Hehn, and D’Andrea 2015] Mark W. Mueller, Markus Hehn, and Raffaello D’Andrea, “A computationally efficient motion primitive for quadrocopter trajectory generation,” IEEE Trans. Robot., vol. 31, no. 6, pp. 1294–1310, Dec. 2015, ISSN: 1552-3098, 1941-0468. DOI: 10.1109/TRO.2015.2479878. [36: Zucker et al. 2013] Matt Zucker et al., “Chomp: Covariant hamiltonian optimization for motion planning,” The International Journal of Robotics Research, vol. 32, no. 9-10, pp. 1164–1193, 2013. DOI: 10.1177/0278364913488805. eprint: https://doi.org/10.1177/0278364913488805. [37: Wang et al. 2022] Zhepei Wang et al., “Geometrically constrained trajectory optimization for multicopters,” IEEE Trans. Robot., vol. 38, no. 5, pp. 3259–3278, Oct. 2022, ISSN: 1552-3098, 1941-0468. DOI: 10.1109/TRO.2022.3160022. [38: Zhang et al. 2023] Zhiwei Zhang et al., “Auto filmer: Autonomous aerial videography under human interaction,” IEEE Robot. Autom. Lett., vol. 8, no. 2, pp. 784–791, Feb. 2023, ISSN: 2377-3766, 2377-3774. DOI: 10.1109/LRA.2022.3231828. [39: Fleureau et al. 2016] Julien Fleureau et al., “Generic drone control platform for autonomous capture of cinema scenes,” in Proceedings of the 2nd Workshop on Micro Aerial Vehicle Networks, Systems, and Applications for Civilian Use, ACM, Jun. 26, 2016, pp. 
35–40, ISBN: 978-1-4503-4405-0. DOI: 10.1145/2935620.2935622. [40: Huang et al. 2018] Chong Huang et al., “Through-the-lens drone filming,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, Oct. 2018, pp. 4692–4699, ISBN: 978-1-5386-8094-0. DOI: 10.1109/IROS.2018.8594333. [41: Saska et al. 2017] Martin Saska et al., “Documentation of dark areas of large historical buildings by a formation of unmanned aerial vehicles using model predictive control,” in 2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), IEEE, Sep. 2017, pp. 1–8, ISBN: 978-1-5090-6505-9. DOI: 10.1109/ETFA.2017.8247654. [42: Kratky et al. 2020] Vit Kratky et al., “Autonomous reflectance transformation imaging by a team of unmanned aerial vehicles,” IEEE Robot. Autom. Lett., vol. 5, no. 2, pp. 2302–2309, Apr. 2020, ISSN: 2377-3766, 2377-3774. DOI: 10.1109/LRA.2020.2970646. [43: Kratky et al. 2021] Vit Kratky et al., “Autonomous aerial filming with distributed lighting by a team of unmanned aerial vehicles,” IEEE Robot. Autom. Lett., vol. 6, no. 4, pp. 7580–7587, Oct. 2021, ISSN: 2377-3766, 2377-3774. DOI: 10.1109/LRA.2021.3098811. [44: Gebhardt and Hilliges 2021] Christoph Gebhardt and Otmar Hilliges, “Optimization-based user support for cinematographic quadrotor camera target framing,” in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, ACM, May 6, 2021, pp. 1–13, ISBN: 978-1-4503-8096-6. DOI: 10.1145/3411764.3445568. [45: Vicon Motion Capture Systems Ltd. 2023] Vicon Motion Capture Systems Ltd. “Vicon | award winning motion capture systems.” (2023), [Online]. Available: https://www.vicon.com/. [46: Nägeli et al. 2018] Tobias Nägeli et al., “Flycon: Real-time environment-independent multi-view human pose estimation with aerial vehicles,” ACM Trans. Graph., vol. 37, no. 6, pp. 1–14, Dec. 31, 2018, ISSN: 0730-0301, 1557-7368. DOI: 10.1145/3272127.3275022. [47: Roumeliotis, Sukhatme, and Bekey 1999] S.I. Roumeliotis, G.S. Sukhatme, and G.A. Bekey, “Circumventing dynamic modeling: Evaluation of the error-state Kalman filter applied to mobile robot localization,” in Proceedings 1999 IEEE International Conference on Robotics and Automation (Cat. No.99CH36288C), vol. 2, Detroit, MI, USA: IEEE, 1999, pp. 1656–1663, ISBN: 978-0-7803-5180-6. DOI: 10.1109/ROBOT.1999.772597. [48: Ashtari et al. 2020] Amirsaman Ashtari et al., “Capturing subjective first-person view shots with drones for automated cinematography,” ACM Trans. Graph., vol. 39, no. 5, pp. 1–14, Oct. 31, 2020, ISSN: 0730-0301, 1557-7368. DOI: 10.1145/3378673. [49: Huang et al. 2018] Chong Huang et al., “ACT: An autonomous drone cinematography system for action scenes,” in 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE, May 2018, pp. 7039–7046, ISBN: 978-1-5386-3081-5. DOI: 10.1109/ICRA.2018.8460703. [50: Huang et al. 2019] Chong Huang et al., “Learning to capture a film-look video with a camera drone,” in 2019 International Conference on Robotics and Automation (ICRA), IEEE, May 2019, pp. 1871–1877, ISBN: 978-1-5386-6027-0. DOI: 10.1109/ICRA.2019.8793915. [51: Huang et al. 2019] Chong Huang et al., “Learning to film from professional human motion videos,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Jun. 2019, pp. 4239–4248, ISBN: 978-1-72813-293-8. DOI: 10.1109/CVPR.2019.00437. [52: Huang et al. 2021] Chong Huang et al., “One-shot imitation drone filming of human motion videos,” IEEE Trans. 
Pattern Anal. Mach. Intell., pp. 1–1, 2021, ISSN: 0162-8828, 2160-9292, 1939-3539. DOI: 10.1109/TPAMI.2021.3067359. [53: Dang et al. 2022] Yuanjie Dang et al., “Imitation learning-based algorithm for drone cinematography system,” IEEE Trans. Cogn. Dev. Syst., vol. 14, no. 2, pp. 403–413, Jun. 2022, ISSN: 2379-8920, 2379-8939. DOI: 10.1109/TCDS.2020.3043441. [54: Redmon and Farhadi 2018] Joseph Redmon and Ali Farhadi, “Yolov3: An incremental improvement,” CoRR, vol. abs/1804.02767, 2018. arXiv: 1804.02767. [55: Cao et al. 2016] Zhe Cao et al., “Realtime multi-person 2d pose estimation using part affinity fields,” CoRR, vol. abs/1611.08050, 2016. arXiv: 1611.08050. [56: Shi et al. 2015] Xingjian Shi et al., “Convolutional lstm network: A machine learning approach for precipitation nowcasting,” in Advances in Neural Information Processing Systems, vol. 28, Curran Associates, Inc., 2015, pp. 802–810. [57: Wang et al. 2019] Wenshan Wang et al., “Improved generalization of heading direction estimation for aerial filming using semi-supervised regression,” in 2019 International Conference on Robotics and Automation (ICRA), IEEE, May 2019, pp. 5901–5907, ISBN: 978-1-5386-6027-0. DOI: 10.1109/ICRA.2019.8793994. [58: Bonatti et al. 2019] Rogerio Bonatti et al., “Towards a robust aerial cinematography platform: Localizing and tracking moving targets in unstructured environments,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, Nov. 2019, pp. 229–236, ISBN: 978-1-72814-004-9. DOI: 10.1109/IROS40897.2019.8968163. [59: Gschwindt et al. 2019] Mirko Gschwindt et al., “Can a robot become a movie director? learning artistic principles for aerial cinematography,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China: IEEE, Nov. 2019, pp. 1107–1114, ISBN: 978-1-72814-004-9. DOI: 10.1109/IROS40897.2019.8967592. [60: Bucker, Bonatti, and Scherer 2021] Arthur Bucker, Rogerio Bonatti, and Sebastian Scherer, “Do you see what i see? coordinating multiple aerial cameras for robot cinematography,” in 2021 IEEE International Conference on Robotics and Automation (ICRA), IEEE, May 30, 2021, pp. 7972–7979, ISBN: 978-1-72819-077-8. DOI: 10.1109 /ICRA48506.2021.9561086. [61: Caraballo et al. 2020] Luis-Evaristo Caraballo et al., “Autonomous planning for multiple aerial cinematographers,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA: IEEE, Oct. 24, 2020, pp. 1509–1515, ISBN: 978-1-72816-212-6. DOI: 10.1109/IROS45743.2020.9341622. [62: Alcantara et al. 2020] Alfonso Alcantara et al., “Autonomous execution of cinematographic shots with multiple drones,” IEEE Access, vol. 8, pp. 201 300–201 316, 2020, ISSN: 2169-3536. DOI: 10.1109/ACCESS.2020.3036239. [63: Alcántara et al. 2021] Alfonso Alcántara et al., “Optimal trajectory planning for cinematography with multiple unmanned aerial vehicles,” Robotics and Autonomous Systems, vol. 140, p. 103 778, Jun. 2021, ISSN: 09218890. DOI: 10.1016/j.robot.2021.103778. arXiv: 2009.04234[cs]. [64: Qin, Li, and Shen 2018] Tong Qin, Peiliang Li, and Shaojie Shen, “Vins-mono: A robust and versatile monocular visual-inertial state estimator,” IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004–1020, 2018. DOI: 10.1109/TRO.2018.2853729. 
[65: Shen, Michael, and Kumar 2015] Shaojie Shen, Nathan Michael, and Vijay Kumar, “Tightly-coupled monocular visual-inertial fusion for autonomous flight of rotorcraft mavs,” in 2015 IEEE International Conference on Robotics and Automation (ICRA), 2015, pp. 5303–5310. DOI: 10.1109/ICRA.2015.7139939. [66: Forster et al. 2017] Christian Forster et al., “On-manifold preintegration for real-time visual–inertial odometry,” IEEE Transactions on Robotics, vol. 33, no. 1, pp. 1–21, 2017. DOI: 10.1109/TRO.2016.2597321. [67: Mourikis and Roumeliotis 2007] Anastasios I. Mourikis and Stergios I. Roumeliotis, “A multi-state constraint kalman filter for vision-aided inertial navigation,” in Proceedings 2007 IEEE International Conference on Robotics and Automation, 2007, pp. 3565–3572. DOI: 10.1109/ROBOT.2007.364024. [68: Bloesch et al. 2017] Michael Bloesch et al., “Iterated extended kalman filter based visual-inertial odometry using direct photometric feedback,” The International Journal of Robotics Research, vol. 36, no. 10, pp. 1053–1072, 2017. DOI: 10.1177/0278364917728574. [69: Sun et al. 2018] Ke Sun et al., “Robust stereo visual inertial odometry for fast autonomous flight,” IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 965–972, 2018. DOI: 10.1109/LRA.2018.2793349. [70: Thalagala et al. ] Ravindu G. Thalagala et al., “Two key-frame state marginalization for computationally efficient visual inertial navigation,” in 2021 European Control Conference (ECC), IEEE, pp. 1138–1143. DOI: 10.23919/ECC54610.2021.9654924. [71: Kalman 1960] R. E. Kalman, “A New Approach to Linear Filtering and Prediction Problems,” Journal of Basic Engineering, vol. 82, no. 1, pp. 35–45, Mar. 1960, ISSN: 0021-9223. DOI: 10.1115/1.3662552. eprint: https://asmedigitalcollection.asme.org/fluidsengineering/article-pdf/82/1/35/5518977/35\_1.pdf. [72: Markley 2003] F. Landis Markley, “Attitude error representations for kalman filtering,” Journal of Guidance, Control, and Dynamics, vol. 26, no. 2, pp. 311–317, Mar. 2003, ISSN: 0731-5090, 1533-3884. DOI: 10.2514/2.5048. [73: Zanetti and Bishop 2006] Renato Zanetti and Robert Bishop, “Quaternion estimation and norm constrained kalman filtering,” in AIAA/AAS Astrodynamics Specialist Conference and Exhibit.2006. DOI: 10.2514/6.2006-6164. [74: Mahony, Hamel, and Pflimlin 2008] Robert Mahony, Tarek Hamel, and Jean-Michel Pflimlin, “Nonlinear complementary filters on the special orthogonal group,” IEEE Transactions on Automatic Control, vol. 53, no. 5, pp. 1203–1218, 2008. DOI: 10.1109/TAC.2008.923738. [75: Bonnabel, Martin, and Salaun 2009] Silvere Bonnabel, Philippe Martin, and Erwan Salaun, “Invariant extended kalman filter: Theory and application to a velocity-aided attitude estimation problem,” presented at the 2009 Joint 48th IEEE Conference on Decision and Control (CDC) and 28th Chinese Control Conference (CCC 2009), IEEE, Dec. 2009, pp. 1297–1304, ISBN: 978-1-4244-3871-6. DOI: 10.1109/CDC.2009.5400372. [76: Barrau and Bonnabel 2017] Axel Barrau and Silvère Bonnabel, “The invariant extended kalman filter as a stable observer,” IEEE Transactions on Automatic Control, vol. 62, no. 4, pp. 1797–1812, 2017. DOI: 10.1109/TAC.2016.2594085. [77: Barrau and Bonnabel 2018] Axel Barrau and Silvère Bonnabel, “Invariant Kalman Filtering,” Annu. Rev. Control Robot. Auton. Syst., vol. 1, no. 1, pp. 237–257, May 28, 2018, ISSN: 2573-5144, 2573-5144. DOI: 10.1146/annurev-control-060117-105010. 
[78: Wang and Tayebi 2020] Miaomiao Wang and Abdelhamid Tayebi, “Hybrid nonlinear observers for inertial navigation using landmark measurements,” IEEE Transactions on Automatic Control, vol. 65, no. 12, pp. 5173–5188, 2020. DOI: 10.1109/TAC.2020.2972213. [79: Santamaria-Navarro, Solà, and Andrade-Cetto 2015] Angel Santamaria-Navarro, Joan Solà, and Juan Andrade-Cetto, “High-frequency mav state estimation using low-cost inertial and optical flow measurement units,” in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015, pp. 1864–1871. DOI: 10.1109/IROS.2015.7353621. [80: Solà 2017] Joan Solà, “Quaternion kinematics for the error-state kalman filter,” Nov. 3, 2017. arXiv: 1711.02508[cs]. [Online]. Available: http://arxiv.org/abs/1711.02508 (visited on 01/20/2023). [81: Madyastha et al. 2011] Venkatesh Madyastha et al., “Extended kalman filter vs. error state kalman filter for aircraft attitude estimation,” in AIAA Guidance, Navigation, and Control Conference, 2011, p. 6615. [82: Barrau and Bonnabel 2018] Axel Barrau and Silvere Bonnabel, “Stochastic observers on Lie groups: A tutorial,” in 2018 IEEE Conference on Decision and Control (CDC), Miami Beach, FL: IEEE, Dec. 2018, pp. 1264–1269, ISBN: 978-1-5386-1395-5. DOI: 10.1109/CDC.2018.8618988. [83: Olfati-Saber, Fax, and Murray 2007] Reza Olfati-Saber, J. Alex Fax, and Richard M. Murray, “Consensus and Cooperation in Networked Multi-Agent Systems,” Proc. IEEE, vol. 95, no. 1, pp. 215–233, Jan. 2007, ISSN: 0018-9219. DOI: 10.1109/JPROC.2006.887293. [84: Anderson et al. 2008] Brian D.O. Anderson et al., “Rigid graph control architectures for autonomous formations,” IEEE Control Systems Magazine, vol. 28, no. 6, pp. 48–63, 2008. DOI: 10.1109/MCS.2008.929280. [85: Oh, Park, and Ahn 2015] Kwang-Kyo Oh, Myoung-Chul Park, and Hyo-Sung Ahn, “A survey of multi-agent formation control,” Automatica, vol. 53, pp. 424–440, Mar. 2015, ISSN: 00051098. DOI: 10.1016/j.automatica.2014.10.022. [86: Liu and Bucknall 2018] Yuanchang Liu and Richard Bucknall, “A survey of formation control and motion planning of multiple unmanned vehicles,” Robotica, vol. 36, no. 7, pp. 1019–1047, Jul. 2018, ISSN: 0263-5747, 1469-8668. DOI: 10.1017/S0263574718000218. [87: Olfati-Saber and Murray 2004] R. Olfati-Saber and R.M. Murray, “Consensus Problems in Networks of Agents With Switching Topology and Time-Delays,” IEEE Trans. Automat. Contr., vol. 49, no. 9, pp. 1520–1533, Sep. 2004, ISSN: 0018-9286. DOI: 10.1109/TAC.2004.834113. [88: Wei Ren 2006] Wei Ren, “Consensus based formation control strategies for multi-vehicle systems,” in 2006 American Control Conference, Minneapolis, MN, USA: IEEE, 2006, 6 pp. ISBN: 978-1-4244-0209-0. DOI: 10.1109/ACC.2006.1657384. [89: Kawakami and Namerikawa 2009] Hiroki Kawakami and Toru Namerikawa, “Cooperative target-capturing strategy for multi-vehicle systems with dynamic network topology,” in 2009 American Control Conference, St. Louis, MO, USA: IEEE, 2009, pp. 635–640, ISBN: 978-1-4244-4523-3. DOI: 10.1109/ACC.2009.5160030. [90: Lin et al. 2014] Zhiyun Lin et al., “Distributed Formation Control of Multi-Agent Systems Using Complex Laplacian,” IEEE Trans. Automat. Contr., vol. 59, no. 7, pp. 1765–1777, Jul. 2014, ISSN: 0018-9286, 1558-2523. DOI: 10.1109/TAC.2014.2309031. [91: Van Tran et al. 2019] Quoc Van Tran et al., “Finite-Time Bearing-Only Formation Control via Distributed Global Orientation Estimation,” IEEE Trans. Control Netw. Syst., vol. 6, no. 2, pp. 702–712, Jun. 2019, ISSN: 2325-5870, 2372-2533. 
DOI: 10.1109/TCNS.2018.2873155. [92: Dunbar and Murray 2006] William B. Dunbar and Richard M. Murray, “Distributed receding horizon control for multi-vehicle formation stabilization,” Automatica, vol. 42, no. 4, pp. 549–558, Apr. 2006, ISSN: 00051098. DOI: 10.1016/j.automatica.2005.12.008. [93: Dunbar 2007] William B. Dunbar, “Distributed Receding Horizon Control of Dynamically Coupled Nonlinear Systems,” IEEE Trans. Automat. Contr., vol. 52, no. 7, pp. 1249–1263, Jul. 2007, ISSN: 0018-9286. DOI: 10.1109/TAC.2007.900828. [94: Franco et al. 2008] Elisa Franco et al., “Cooperative Constrained Control of Distributed Agents With Nonlinear Dynamics and Delayed Information Exchange: A Stabilizing Receding-Horizon Approach,” IEEE Trans. Automat. Contr., vol. 53, no. 1, pp. 324–338, Feb. 2008, ISSN: 0018-9286. DOI: 10.1109/TAC.2007.914956. [95: Saska, Spurný, and Vonásek 2016] Martin Saska, Vojtěch Spurný, and Vojtěch Vonásek, “Predictive control and stabilization of nonholonomic formations with integrated spline-path planning,” Robotics and Autonomous Systems, vol. 75, pp. 379–397, Jan. 2016, ISSN: 09218890. DOI: 10.1016/j.robot.2015.09.004. [96: Spurny, Baca, and Saska 2016] Vojtech Spurny, Tomas Baca, and Martin Saska, “Complex manoeuvres of heterogeneous MAV-UGV formations using a model predictive control,” in 2016 21st International Conference on Methods and Models in Automation and Robotics (MMAR), Miedzyzdroje, Poland: IEEE, Aug. 2016, pp. 998–1003, ISBN: 978-1-5090-1866-6. DOI: 10.1109/MMAR.2016.7575274. [97: Desai, Ostrowski, and Kumar 2001] J.P. Desai, J.P. Ostrowski, and V. Kumar, “Modeling and control of formations of nonholonomic mobile robots,” IEEE Trans. Robot. Automat., vol. 17, no. 6, pp. 905–908, 2001, ISSN: 1042296X. DOI: 10.1109/70.976023. [98: Mariottini et al. 2009] G.L. Mariottini et al., “Vision-Based Localization for Leader–Follower Formation Control,” IEEE Trans. Robot., vol. 25, no. 6, pp. 1431–1438, Dec. 2009, ISSN: 1552-3098, 1941-0468. DOI: 10.1109/TRO.2009.2032975. [99: Liu and Jiang 2013] Tengfei Liu and Zhong-Ping Jiang, “Distributed formation control of nonholonomic mobile robots without global position measurements,” Automatica, vol. 49, no. 2, pp. 592–600, Feb. 2013, ISSN: 00051098. DOI: 10.1016/j.automatica.2012.11.031. [100: Liao et al. 2017] Fang Liao et al., “Distributed Formation and Reconfiguration Control of VTOL UAVs,” IEEE Trans. Contr. Syst. Technol., vol. 25, no. 1, pp. 270–277, Jan. 2017, ISSN: 1063-6536, 1558-0865. DOI: 10.1109/TCST.2016.2547952. [101: Jasim and Gu 2018] Wesam Jasim and Dongbing Gu, “Robust Team Formation Control for Quadrotors,” IEEE Trans. Contr. Syst. Technol., vol. 26, no. 4, pp. 1516–1523, Jul. 2018, ISSN: 1063-6536, 1558-0865. DOI: 10.1109/TCST.2017.2705072. [102: Ren and Beard 2004] Wei Ren and Randal W. Beard, “Decentralized Scheme for Spacecraft Formation Flying via the Virtual Structure Approach,” Journal of Guidance, Control, and Dynamics, vol. 27, no. 1, pp. 73–82, Jan. 2004, ISSN: 0731-5090, 1533-3884. DOI: 10.2514/1.9287. [103: Ren 2008] Wei Ren, “Decentralization of Virtual Structures in Formation Control of Multiple Vehicle Systems via Consensus Strategies,” European Journal of Control, vol. 14, no. 2, pp. 93–103, Jan. 2008, ISSN: 09473580. DOI: 10.3166/ejc.14.93-103. [104: Olfati-Saber and Murray 2002] Reza Olfati-Saber and Richard M. Murray, “DISTRIBUTED COOPERATIVE CONTROL OF MULTIPLE VEHICLE FORMATIONS USING STRUCTURAL POTENTIAL FUNCTIONS,” IFAC Proceedings Volumes, vol. 35, no. 1, pp. 
495–500, 2002, ISSN: 14746670. DOI: 10.3182/20020721-6-ES-1901.00244. [105: Kim et al. 2004] D.H. Kim et al., “Decentralized control of autonomous swarm systems using artificial potential functions: Analytical design guidelines,” in 2004 43rd IEEE Conference on Decision and Control (CDC) (IEEE Cat. No.04CH37601), Nassau, Bahamas: IEEE, 2004, 159–164 Vol.1, ISBN: 978-0-7803-8682-2. DOI: 10.1109/CDC.2004.1428623. [106: Zelazo, Franchi, and Giordano 2014] Daniel Zelazo, Antonio Franchi, and Paolo Robuffo Giordano, “Rigidity theory in SE(2) for unscaled relative position estimation using only bearing measurements,” in 2014 European Control Conference (ECC), Strasbourg, France: IEEE, Jun. 2014, pp. 2703–2708, ISBN: 978-3-9524269-1-3. DOI: 10.1109/ECC.2014.6862558. [107: Zelazo, Giordano, and Franchi 2015] Daniel Zelazo, Paolo Robuffo Giordano, and Antonio Franchi, “Bearing-only formation control using an SE(2) rigidity theory,” in 2015 54th IEEE Conference on Decision and Control (CDC), Osaka: IEEE, Dec. 2015, pp. 6121–6126, ISBN: 978-1-4799-7886-1. DOI: 10.1109/CDC.2015.7403182. [108: Schiano et al. 2016] Fabrizio Schiano et al., “A rigidity-based decentralized bearing formation controller for groups of quadrotor UAVs,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, South Korea: IEEE, Oct. 2016, pp. 5099–5106, ISBN: 978-1-5090-3762-9. DOI: 10.1109/IROS.2016.7759748. [109: Tron et al. 2016] Roberto Tron et al., “A distributed optimization framework for localization and formation control: Applications to vision-based measurements,” IEEE Control Systems Magazine, vol. 36, no. 4, pp. 22–44, 2016. DOI: 10.1109/MCS.2016.2558401. [110: Garcia de Marina, Jayawardhana, and Cao 2016] Hector Garcia de Marina, Bayu Jayawardhana, and Ming Cao, “Distributed rotational and translational maneuvering of rigid formations and their applications,” IEEE Transactions on Robotics, vol. 32, no. 3, pp. 684–697, 2016. DOI: 10.1109/TRO.2016.2559511. [111: de Marina, Jayawardhana, and Cao 2018] Hector Garcia de Marina, Bayu Jayawardhana, and Ming Cao, “Taming Mismatches in Inter-agent Distances for the Formation-Motion Control of Second-Order Agents,” IEEE Trans. Automat. Contr., vol. 63, no. 2, pp. 449–462, Feb. 2018, ISSN: 0018-9286, 1558-2523. DOI: 10.1109/TAC.2017.2715226. [112: Teng et al. 2022] Sangli Teng et al., “Lie algebraic cost function design for control on lie groups,” in 2022 IEEE 61st Conference on Decision and Control (CDC), 2022, pp. 1867–1874. DOI: 10.1109/CDC51059.2022.9993143. [113: Marion and Thronton 2008] Jerry B. Marion and Stephen T. Thronton, “Chapter 10 - motion in a noninertial reference frame,” in Classical Dynamics of Particles and Systems, Brooks/Cole Publishing Company, 2008, pp. 387–410, ISBN: 0-534-40896-6. [114: Solà, Deray, and Atchuthan 2021] Joan Solà, Jeremie Deray, and Dinesh Atchuthan, “A micro lie theory for state estimation in robotics,” Dec. 8, 2021, Number: arXiv:1812.01537. arXiv: 1812.01537[cs]. [115: Belta and Kumar 2002] C Belta and V Kumar, “Euclidean metrics for motion generation on SE (3),” Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, vol. 216, no. 1, pp. 47–60, Jan. 1, 2002, ISSN: 0954-4062, 2041-2983. DOI: 10.1243/0954406021524909. [116: Belta and Kumar 2002] C. Belta and V. Kumar, “An svd-based projection method for interpolation on se(3),” IEEE Transactions on Robotics and Automation, vol. 18, no. 3, pp. 334–345, 2002. DOI: 10.1109/TRA.2002.1019463. 
[117: Kanatani 1993] K. Kanatani, Geometric Computation for Machine Vision (Oxford Engineering Science Series). Clarendon Press, 1993, ISBN: 9780198563853. [118: Bloch 2015] A.M. Bloch, Nonholonomic Mechanics and Control (Interdisciplinary Applied Mathematics), P. S. Krishnaprasad and R.M. Murray, Eds. New York, NY: Springer New York, 2015, vol. 24, ISBN: 978-1-4939-3016-6 978-1-4939-3017-3. DOI: 10.1007/978-1-4939-3017-3. [119: Gallier 2023] Jean H. Gallier, “Lecture notes in advanced geometric methods in computer science,” 2023. [120: Chirikjian 2012] Gregory S. Chirikjian, Stochastic Models, Information Theory, and Lie Groups, Volume 2: Analytic Methods and Modern Applications (Applied and Numerical Harmonic Analysis). Boston: Birkhäuser Boston, 2012, ISBN: 978-0-8176-4943-2978-0-8176-4944-9. DOI: 10.1007/978-0-8176-4944-9. [121: Marchand, Uchiyama, and Spindler 2016] Eric Marchand, Hideaki Uchiyama, and Fabien Spindler, “Pose estimation for augmented reality: A hands-on survey,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 12, pp. 2633–2651, 2016. DOI: 10.1109/TVCG.2015.2513408. [122: Bradski 2000] G. Bradski, “The OpenCV Library,” Dr. Dobb’s Journal of Software Tools, 2000. [123: Garrido-Jurado et al. 2014] S. Garrido-Jurado et al., “Automatic generation and detection of highly reliable fiducial markers under occlusion,” Pattern Recognition, vol. 47, no. 6, pp. 2280–2292, 2014, ISSN: 0031-3203. DOI: https://doi.org/10.1016/j.patcog.2014.01.005. [124: Otsu 1979] Nobuyuki Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979. DOI: 10.1109/TSMC.1979.4310076. [125: Viola and Jones 2004] Paul Viola and Michael J Jones, “Robust real-time face detection,” International Journal of Computer Vision, vol. 57, May 1, 2004, ISSN: 1573-1405. DOI: 10.1023/B:VISI.0000013087.49260.fb. [126: Viola and Jones 2001] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, vol. 1, 2001, pp. I–I. DOI: 10.1109/CVPR.2001.990517. [127: Ren et al. 2014] Shaoqing Ren et al., “Face alignment at 3000 fps via regressing local binary features,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1685–1692. DOI: 10.1109/CVPR.2014.218. [128: Freund and Schapire 1997] Yoav Freund and Robert E Schapire, “A decision-theoretic generalization of online learning and an application to boosting,” Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119–139, 1997, ISSN: 0022-0000. DOI: https://doi.org/10.1006/jcss.1997.1504. [129: Lienhart and Maydt 2002] R. Lienhart and J. Maydt, “An extended set of haar-like features for rapid object detection,” in Proceedings. International Conference on Image Processing, vol. 1, 2002, pp. I–I. DOI: 10.1109/ICIP.2002.1038171. [130: Dollár, Welinder, and Perona 2010] Piotr Dollár, Peter Welinder, and Pietro Perona, “Cascaded pose regression,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010, pp. 1078–1085. DOI: 10.1109/CVPR.2010.5540094. [131: Breiman 2001] Leo Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, Oct. 1, 2001, ISSN: 1573-0565. DOI: 10.1023/A:1010933404324. [132: Redmon et al. 2016] Joseph Redmon et al., “You only look once: Unified, real-time object detection,” 2016. arXiv: 1506.02640 [cs.CV]. 
[133: Redmon and Farhadi 2016] Joseph Redmon and Ali Farhadi, “Yolo9000: Better, faster, stronger,” 2016. arXiv: 1612.08242 [cs.CV]. [134: Quigley 2009] Morgan Quigley, “ROS: An open-source robot operating system,” in IEEE International Conference on Robotics and Automation, 2009. [135: Bjelonic 2018] Marko Bjelonic, “YOLO ROS: Real-time object detection for ROS,” 2018. [136: Redmon 2016] Joseph Redmon, “Darknet: Open source neural networks in c,” 2016. [137: Mahony, Kumar, and Corke 2012] Robert Mahony, Vijay Kumar, and Peter Corke, “Multirotor aerial vehicles: Modeling, estimation, and control of quadrotor,” IEEE Robot. Automat. Mag., vol. 19, no. 3, pp. 20–32, Sep. 2012, ISSN: 1070-9932, 1558-223X. DOI: 10.1109/MRA.2012.2206474. [138: Qin et al. 2020] Chao Qin et al., “Lins: A lidar-inertial state estimator for robust and efficient navigation,” in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 8899–8906. DOI: 10.1109/ICRA40945.2020.9197567. [139: Engel, Koltun, and Cremers 2016] Jakob Engel, Vladlen Koltun, and Daniel Cremers, “Direct sparse odometry,” 2016. arXiv: 1607.02565 [cs.CV]. [140: Campos et al. 2021] Carlos Campos et al., “Orb-slam3: An accurate open-source library for visual, visual–inertial, and multimap slam,” IEEE Transactions on Robotics, vol. 37, no. 6, pp. 1874–1890, 2021. DOI: 10.1109/TRO.2021.3075644. [141: Yang and Yu 2002] Yi-Hsueh Yang and Chi-Yuang Yu, “A 3d craniofacial anthropometric database and with an application example design of full-face respiratory masks,” Asian Journal of Ergonomics, Practice and Its Theory, vol. 3, no. 2, pp. 83–104, Feb. 2002. [142: Chen, Lee, and Wang 2014] Hui-Ling Chen, Yueh-Tse Lee, and Yu-Ching Wang, “Profile angle defines soft tissue chin position in adults,” Taiwanese Journal of Orthodontics, vol. 26, no. 1, pp. 4–13, Mar. 2014, ISSN: 1029-8231. DOI: 10.30036/TJO.201403_26(1).0001. [143: Chung et al. 2015] Ju-Hui Chung et al., “A CT-scan database for the facial soft tissue thickness of taiwan adults,” Forensic Science International, vol. 253, 132.e1–132.e11, 2015, ISSN: 0379-0738. DOI: https://doi.org/10.1016/j.forsciint.2015.04.028. [144: Khalil 2002] H.K. Khalil, Nonlinear Systems (Pearson Education). Prentice Hall, 2002, ISBN: 9780130673893. [145: Xu 2018] Anqi Xu. “anqixu/tello_driver.” (2018), [Online]. Available: https://github.com/anqixu/tello_driver/tree/master (visited on 10/08/2023). [146: Lepetit, Moreno-Noguer, and Fua 2009] Vincent Lepetit, Francesc Moreno-Noguer, and Pascal Fua, “EPnP: An accurate o(n) solution to the PnP problem,” Int J Comput Vis, vol. 81, no. 2, pp. 155–166, Feb. 2009, ISSN: 0920-5691, 1573-1405. DOI: 10.1007/s11263-008-0152-6. [147: Haykin 2014] S.S. Haykin, Adaptive Filter Theory. Pearson, 2014, ISBN: 9780132671453. [148: Thrun, Burgard, and Fox 2005] Sebastian Thrun, Wolfram Burgard, and Dieter Fox, Probabilistic Robotics (Intelligent Robotics and Autonomous Agents). The MIT Press, 2005, ISBN: 0262201623. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92147 | - |
| dc.description.abstract | 近年來,由於無人機自動化與操作的快速發展,無人機空中攝影也逐漸見於電影工業中,如使用單一無人機掛載相機拍攝動態場景。動態場景的多角度同時拍攝是常見於許多傑出的影視作品的手法。然而,由於多台無人機同時進行空中攝影的技術尚未發展成熟,幾乎沒有在電影工業中觀察到多台無人機同時拍攝動態場景的情形。因此本篇論文提出一個多台無人機自動空中攝影系統,專注於解決三個多台無人機空中攝影的核心議題:基於視覺的無人機狀態估測、目標相機隊形的參數化、以及無人機隊形追蹤控制器。
現有文獻中的多台無人機空中攝影多仰賴於室內的精確定位系統。為了克服使用外部定位系統造成的限制,本篇論文提出使用基於誤差狀態卡爾曼濾波器之視覺里程計來求得無人機以及拍攝演員的軌跡。臉部偵測與特徵辨識模型以及標記物相對位姿估測演算法則使用現有的電腦視覺演算法,透過無人機掛載相機進行相對位姿量測。 相較現有文獻中對相機位置參數的使用僅限於單一拍攝演員以及單一無人機,本篇論文定義相機位置參數並用以計算相對演員的目標相機隊形,以及各台無人機對應的相機位姿。 最後,本篇論文提出基於梯度下降法的無人機群分散式隊形控制使無人機追蹤至目標相機隊形,並將控制器損失函數定義於特殊歐式群的李代數空間上。 為了驗證基於誤差狀態卡爾曼濾波器之視覺里程計對三台無人機與一個拍攝演員的軌跡估測精確度,本篇論文中使用一固定光達作為位置真實數值,並於多個實驗中比較狀態估測演算法求得位置與光達測量值的差異。無人機群分散式隊形控制則使用與實驗相同的無人機、演員數量以及其初始狀態進行數值模擬。實驗結果顯示,於單一無人機位姿估測時,本文提出的基於誤差狀態卡爾曼濾波器之視覺里程計可以求得較原本誤差狀態卡爾曼濾波器更精準的估測位姿;而對拍攝演員的里程計則可以在無人機重新測得人物時恢復至平滑且低誤差的位置估測。此外,模擬結果顯示本篇論文提出的無人機群隊形控制器可以使無人機在演員行走速度受雜訊干擾時仍能收斂至目標相機隊形進行空中攝影,反應此隊形控制器的可行性。 | zh_TW |
| dc.description.abstract | In recent decades, rapid progress in the low-level autonomy and manipulation of drones has enabled the development of aerial cinematography for the film industry, especially for capturing dynamic scenes with a camera-equipped drone for media production. However, multiple drones are seldom deployed simultaneously to capture a dynamic scene from multiple perspectives, a common practice in outstanding film productions, owing to the limited development of multiple-drone aerial cinematography techniques. In this thesis we therefore propose a multiple-drone autonomous aerial cinematography system that focuses on three core issues of the multiple-drone aerial cinematography problem: vision-based robocentric state estimation, camera positioning parameters for a desired camera formation, and a formation tracking controller that drives the drones to track the desired camera trajectories.
Existing works on multiple-drone aerial cinematography mostly rely on accurate external positioning facilities in indoor environments. To overcome the restrictions imposed by external positioning facilities, an error-state Kalman filter (ESKF) based visual-inertial odometry (VIO) is applied to the drones and the human actor to derive smooth estimates of their motions without external equipment. Existing machine learning and computer vision techniques for face recognition, face landmark detection, and marker pose estimation are adopted to retrieve pose measurements from the onboard cameras. A set of camera positioning parameters is defined in this thesis to assign the desired relative pose between the human actor and each drone and thereby emulate the desired camera formations described in the cinematographic literature, whereas in the existing literature camera positioning parameters are defined only for a single human actor and a single drone. For the formation tracking controller, a distributive gradient-based formation control law using cost functions on $\mathfrak{se}(3)$ is proposed to track the desired camera formation with respect to the human actor. To verify the ESKF-based VIO on the motion of three commercial drones and a human actor, experiments are conducted to examine the accuracy of the estimated positions against ground-truth positions from a space-fixed LiDAR. The proposed distributive formation controller is validated by numerical simulations over a variety of scenarios and initial conditions using the same configuration of drones and human actor as the real-world experiments. Experimental results demonstrate that the proposed ESKF-based VIO outperforms the original ESKF in both position and orientation estimates for a single drone, and that the estimated odometry of the human actor recovers smooth and accurate position estimates after detection failures. Moreover, convergent results for formation tracking under a noisy line-of-action (LoA) linear velocity of the human actor in the numerical simulations demonstrate the feasibility of the proposed distributive formation controller. | en |
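The abstract names a distributive gradient-based formation controller with cost functions on $\mathfrak{se}(3)$. The sketch below illustrates the underlying idea under simplifying assumptions: a single drone with kinematic (single-integrator) pose dynamics tracks an assumed fixed offset `X_rel_des` behind a moving leader by commanding a body twist proportional to the $\mathfrak{se}(3)$ logarithm of the pose error, which coincides to first order with gradient descent of a quadratic cost on that logarithm. The gain, offset, and leader motion are illustrative assumptions, not the thesis's actual controller or its distributive kinematic chains over three drones.

```python
import numpy as np
from scipy.linalg import expm, logm

def hat(xi):
    """se(3) hat map: xi = (v, w) -> 4x4 twist matrix."""
    v, w = xi[:3], xi[3:]
    W = np.array([[0, -w[2], w[1]],
                  [w[2], 0, -w[0]],
                  [-w[1], w[0], 0]])
    T = np.zeros((4, 4))
    T[:3, :3], T[:3, 3] = W, v
    return T

def vee(T):
    """Inverse of hat: 4x4 twist matrix -> (v, w)."""
    return np.array([T[0, 3], T[1, 3], T[2, 3], T[2, 1], T[0, 2], T[1, 0]])

def log_se3(X):
    """Matrix logarithm of a pose in SE(3), returned as a 6-vector in se(3)."""
    return vee(np.real(logm(X)))

# Assumed desired pose of the drone relative to the actor: 2 m behind, 1 m above
# (orientation offset omitted for brevity).
X_rel_des = np.eye(4)
X_rel_des[:3, 3] = [-2.0, 0.0, 1.0]

X_actor = np.eye(4)                                        # leader (actor) pose
X_drone = expm(hat(np.array([1, 3, 0, 0, 0, 0.8])))        # some initial drone pose

dt, k_gain = 0.02, 1.0
for _ in range(600):
    X_actor = X_actor @ expm(hat(dt * np.array([0.5, 0, 0, 0, 0, 0.1])))  # actor walks and turns
    X_des = X_actor @ X_rel_des                            # desired drone pose
    err = log_se3(np.linalg.inv(X_drone) @ X_des)          # pose error expressed in the body frame
    xi_cmd = k_gain * err                                  # proportional / gradient-like body twist
    X_drone = X_drone @ expm(hat(dt * xi_cmd))             # integrate kinematics on SE(3)

print("final pose error norm:", np.linalg.norm(log_se3(np.linalg.inv(X_drone) @ X_des)))
```

Because the leader keeps moving, the error settles to a small bounded value rather than zero; a feed-forward of the leader's twist, as in full formation tracking schemes, would remove that residual.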
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-03-07T16:17:56Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-03-07T16:17:56Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 誌謝
摘要
ABSTRACT
CONTENTS
LIST OF FIGURES
LIST OF TABLES
Chapter 1 Introduction
1.1 Motivation
1.2 Problem Formulation
1.3 Contributions
1.4 Organization of the Thesis
Chapter 2 Background and Literature Survey
2.1 Autonomous Aerial Cinematography
2.2 Visual Inertial Odometry (VIO)
2.3 Formation Control
Chapter 3 Math Preliminaries and Related Algorithms
3.1 Math Preliminaries
3.1.1 Derivation of a Rotation Matrix
3.1.2 Projection onto SO(3) with Singular Value Decomposition
3.1.3 A Brief on Riemannian Geometry
3.1.4 Lie Groups, Lie Algebras, and the Invariant Metric
3.1.5 Special Euclidean Groups SE(3)
3.1.5.1 Exponential Map and Logarithm Map
3.1.5.2 Adjoint Action and Adjoint Map
3.1.5.3 Inner Products on Lie Algebra
3.2 Related Algorithms
3.2.1 Error-State Kalman Filter (ESKF)
3.2.1.1 State Definition and Coordinate System
3.2.1.2 Sensor and Measurement Model
3.2.1.3 State Propagation
3.2.1.4 Filter Update
3.2.1.5 Injection and Reset
3.2.2 Direct Linear Transform (DLT) Method for Perspective-n-Point (PnP) Problem
3.2.3 Image-Based Feature Selection
3.2.3.1 ArUco Marker
3.2.3.2 Face Recognition and Landmark Detection
3.2.3.3 YOLO Object Detection
Chapter 4 System Overview
4.1 Coordinate System
4.2 Camera Positioning Parameters
4.3 Quadrotor Dynamical System
4.4 System Architecture
Chapter 5 State Estimation for Drones and Human Pose
5.1 ESKF-based Inertial Odometry Using Quaternion Measurement
5.2 Multi-Drone VIO
5.2.1 Marker Supporting Frame on the Drone
5.2.2 Multi-Drone VIO based on Fiducial Markers and ESKF-Q
5.3 Human Face Motion Estimation
5.3.1 Face Landmark Heuristics
5.3.2 Human Pose Estimation Using YOLO
5.3.3 Face Odometry based on ESKF-Q
5.3.3.1 State Definition
5.3.3.2 System Kinematics
5.3.3.3 State Propagation
5.3.3.4 Measurement Models
5.3.3.5 Filter Update
Chapter 6 Autonomous Aerial Cinematography using Distributive Gradient-Based Formation Control
6.1 Line-of-Action of the Human Actor
6.2 Distributive Gradient-Based Formation Control Using Cost Function on se(3)
6.2.1 Desired Relative Pose from Camera Positioning Parameters
6.2.2 State Definition, Kinematics, and Tracking Problem on SE(3)
6.2.3 Cost Function on se(3)
6.2.4 Distributive Kinematic Chains in The Camera Formation
6.2.4.1 Drone 1 to The Human Actor's Face Frame
6.2.4.2 Drone 0 to The Human Actor's LoA Frame
6.2.4.3 Drone 2 to The Human Actor's LoA Frame
6.2.5 Distributive Formation Controller
Chapter 7 Experimental Results and Simulations
7.1 Introduction of the Experiment Platform
7.2 Experimental Results for the ESKF-Q
7.2.1 LiDAR Measurement Setup
7.2.2 Comparison of Inertial Odometry Performances
7.3 Experimental Results for the Multi-Drone VIO
7.3.1 Hovering Experiment of Multi-Drone VIO
7.3.2 Moving Experiment of Multi-Drone VIO
7.4 Experimental Results for the Face Odometry
7.5 Simulation Validation for Distributive Gradient-Based Formation Controller Using Cost Function on se(3)
7.5.1 Formation Regulation
7.5.1.1 Ideal Initial Condition
7.5.1.2 Non-Ideal Initial Condition
7.5.2 Formation Tracking
7.5.2.1 Constant Linear Velocity of the Human Actor
7.5.2.2 Constant Linear and Angular Velocity of the Human Actor
7.5.2.3 Perturbed Linear Velocity of the Human Actor
Chapter 8 Conclusions and Future Works
8.1 Conclusions
8.2 Future Works
8.2.1 Distributive Formation Controller
8.2.2 ESKF-based VIO of Drones and the Human Actor
References
Appendix A Pinhole Camera Model
Appendix B Quaternion Properties and Kinematics
B.1 Quaternion Product
B.2 Quaternion Conjugate
B.3 Quaternion Norm
B.4 Quaternion Inverse
B.5 Unit Quaternion Group S3
B.6 Rotation Matrix and Unit Quaternions
B.7 Exponential Map, Logarithmic Map, and Kinematics of S3
Appendix C Kalman Filter
Appendix D Supplementary Materials of Experimental Results and Simulation Validation
D.1 Multi-Drone VIO
D.1.1 Hovering Experiment
D.1.2 Moving Experiment
D.2 Face Odometry
D.3 Simulation Validation for Distributive Gradient-Based Formation Controller
D.3.1 Formation Regulation
D.3.1.1 Ideal Initial Condition
D.3.1.2 Non-Ideal Initial Condition
D.3.2 Formation Tracking
D.3.2.1 Constant Linear Velocity of the Human Actor
Appendix E Tello Coordinates

Figure 1.1 The photos of conventionally used shooting equipment. (a) Camera dolly [1: Yeager 2019]. (b) Camera crane for live TV [2: Newton Nordic 2023]. (c) Spidercam in the field [3: Spidercam 2020].
Figure 1.2 The evolution of aerial filming facilities.
Figure 1.3 Block diagram of the proposed autonomous aerial cinematography system.
Figure 1.4 (a) The schematics of the external reversed angle in [21: Arijon, Daniel 1991]. (b) The schematics of the proposed drone formation to emulate the external reverse angle as in Figure 1.4 (a).
Figure 2.1 The table of surveyed literature about aerial cinematography. (a) A system aspect of aerial cinematography: number of targets and number of drones. (b) A scenario aspect of aerial cinematography: indoor/outdoor, cluttered/uncluttered, static/dynamic environment.
Figure 2.2 A coarse node diagram for autonomous aerial cinematography.
Figure 2.3 The node diagram for the perception issues and techniques used in autonomous aerial cinematography.
Figure 2.4 The node diagram for the planning techniques involved in autonomous aerial cinematography.
Figure 2.5 The node diagram for the control techniques involved in autonomous aerial cinematography.
Figure 2.6 The coarse node diagram of the mainstream VIO methodologies.
Figure 2.7 The node diagram of surveyed papers of filter-based attitude estimation.
Figure 2.8 The node diagram of surveyed papers of formation control.
Figure 3.1 Summary of the comparison between ESKF, EKF, and Kalman filter.
Figure 3.2 Block diagram of the ESKF.
Figure 4.1 Schematics of the coordinate systems.
Figure 4.2 Schematics of the camera positioning parameters.
Figure 4.3 Block diagram of the proposed autonomous aerial cinematography system.
Figure 5.1 The schematics of the marker supporting frame on the drones with attached coordinate systems.
Figure 5.2 Scenario of the marker-based relative pose measurement.
Figure 5.3 The measurement schematics of the proposed marker-based relative pose measurement between three drones. The camera measurements are marked in magenta dashed lines and the inertial odometry estimation for each drone is marked in black dashed lines.
Figure 5.4 Two heuristics to generate the state estimation of the human target as the face odometry in Section 5.3.
Figure 5.5 The visualization of the 68 face landmarks from the iBUG 300-W dataset. Blue circles indicate the landmarks used in pose estimation; the others are only drawn on the image plane and are not used in face pose estimation.
Figure 5.6 Scenario of the face odometry measured by drone 1.
Figure 6.1 The schematics of the Line-of-Action (LoA) frame {L} and the face frame {F}. The origins of {F} and {L} should coincide, though for clarity the two coordinate frames are drawn separately in this figure.
Figure 6.2 Sensing topology of the camera formation. The dashed line denotes a less reliable measurement owing to a stronger assumption in Section 5.3.2.
Figure 6.3 Tracking topology of the camera formation. All the drones are supposed to track a fixed relative pose with regard to the leader (the human actor).
Figure 6.4 Full scenario of the formation control task.
Figure 6.5 The block diagram of the proposed distributed formation controller.
Figure 6.6 Schematics of the kinematic relation between drone 1 and the human actor's face frame.
Figure 6.7 Schematics of the kinematic relation between drone 0, drone 1, and the human actor's LoA frame.
Figure 6.8 Schematics of the kinematic relation between drone 2, drone 0, drone 1, and the human actor's LoA frame.
Figure 7.1 Downward view of Tello EDU (Tello).
Figure 7.2 Various coordinates of Tello EDU (Tello). The dashed line of {C} indicates the ideal camera frame {C′}.
Figure 7.3 The visualization of 3D trajectories from VLP-16 measurement, ESKF-Q, and ESKF [80: Solà 2017] for flight Cir-1 in the space-fixed frame {I}.
Figure 7.4 Comparison of estimated position from VLP-16 measurement, ESKF-Q, and ESKF for flight Cir-1 in the space-fixed frame {I}.
Figure 7.5 Position error profile and logarithmic distance of orientation (Equation (7.6)) to the initial orientation for flight Cir-1.
Figure 7.6 Comparison of estimated bias with measured values for linear acceleration and angular velocity from the IMU. Note that the linear acceleration signals are recorded in g, and that the angular velocity bias of the ESKF is scaled by 0.01 just to present it properly on the same plot with the IMU measurement and the ESKF-Q estimation.
Figure 7.7 The IMU rotation error profile of ESKF-Q for Cir-1 to Cir-3 and Sq-1 to Sq-3.
Figure 7.8 Linear acceleration bias compared with [80: Solà 2017] and the IMU acceleration measurement for Cir-2.
Figure 7.9 Linear acceleration bias compared with [80: Solà 2017] and the IMU acceleration measurement for Cir-3.
Figure 7.10 Linear acceleration bias compared with [80: Solà 2017] and the IMU acceleration measurement for Sq-1.
Figure 7.11 Linear acceleration bias compared with [80: Solà 2017] and the IMU acceleration measurement for Sq-2.
Figure 7.12 Linear acceleration bias compared with [80: Solà 2017] and the IMU acceleration measurement for Sq-3.
Figure 7.13 The visualization of 3D trajectories from VLP-16 measurement (blue dashed line), ESKF-Q (red to yellow dots), and ESKF [80: Solà 2017] (green to yellow dots) for flight Cir-2 in the space-fixed frame {I}.
Figure 7.14 The visualization of 3D trajectories from VLP-16 measurement (blue dashed line), ESKF-Q (red to yellow dots), and ESKF [80: Solà 2017] (green to yellow dots) for flight Cir-3 in the space-fixed frame {I}.
Figure 7.15 The visualization of 3D trajectories from VLP-16 measurement (blue dashed line), ESKF-Q (red to yellow dots), and ESKF [80: Solà 2017] (green to yellow dots) for flight Sq-1 in the space-fixed frame {I}.
Figure 7.16 The visualization of 3D trajectories from VLP-16 measurement (blue dashed line), ESKF-Q (red to yellow dots), and ESKF [80: Solà 2017] (green to yellow dots) for flight Sq-2 in the space-fixed frame {I}.
Figure 7.17 The visualization of 3D trajectories from VLP-16 measurement (blue dashed line), ESKF-Q (red to yellow dots), and ESKF [80: Solà 2017] (green to yellow dots) for flight Sq-3 in the space-fixed frame {I}.
Figure 7.18 Comparison of estimated position from VLP-16 measurement (blue dashed line), ESKF-Q (red), and ESKF [80: Solà 2017] (green) for flight Cir-2 in the space-fixed frame {I}.
Figure 7.19 Comparison of estimated position from VLP-16 measurement (blue dashed line), ESKF-Q (red), and ESKF [80: Solà 2017] (green) for flight Cir-3 in the space-fixed frame {I}.
Figure 7.20 Comparison of estimated position from VLP-16 measurement (blue dashed line), ESKF-Q (red), and ESKF [80: Solà 2017] (green) for flight Sq-1 in the space-fixed frame {I}.
Figure 7.21 Comparison of estimated position from VLP-16 measurement (blue dashed line), ESKF-Q (red), and ESKF [80: Solà 2017] (green) for flight Sq-2 in the space-fixed frame {I}.
Figure 7.22 Comparison of estimated position from VLP-16 measurement (blue dashed line), ESKF-Q (red), and ESKF [80: Solà 2017] (green) for flight Sq-3 in the space-fixed frame {I}.
Figure 7.23 The visualization of 3D trajectories of drones from VLP-16 (dashed lines) and multi-drone VIO (markers) in the space-fixed frame {I0}.
Figure 7.24 Comparison of estimated position of drones from multi-drone VIO (markers) with the ground-truth position from VLP-16 (dashed lines) in the space-fixed frame {I0}.
Figure 7.25 Position error profiles of drones between multi-drone VIO and the ground-truth position from VLP-16 in the space-fixed frame {I0}.
Figure 7.26 The commanded linear and angular velocity of drone 0 (blue dashed line) and the estimated velocity $\dot{p}^{B_0}_{0,t}$ (red line) of drone 0 in its body frame {B0}.
Figure 7.27 The commanded linear and angular velocity of drone 1 (blue dashed line) and the estimated velocity $\dot{p}^{B_1}_{1,t}$ (red line) of drone 1 in its body frame {B1}.
Figure 7.28 The commanded linear and angular velocity of drone 2 (blue dashed line) and the estimated velocity $\dot{p}^{B_2}_{2,t}$ (red line) of drone 2 in its body frame {B2}.
Figure 7.29 Comparison between the estimated linear acceleration bias $a^{B_0}_{0,bt}$ and angular velocity bias $\omega^{B_0}_{0,bt}$ of drone 0 (red dashed line) and the estimated linear acceleration $a^{B_0}_{0,t}$ and angular velocity $\omega^{B_0}_{0,t}$ (blue dashed line) of drone 0 in its body frame {B0}.
Figure 7.30 Comparison between the estimated linear acceleration bias $a^{B_1}_{1,bt}$ and angular velocity bias $\omega^{B_1}_{1,bt}$ of drone 1 (red dashed line) and the estimated linear acceleration $a^{B_1}_{1,t}$ and angular velocity $\omega^{B_1}_{1,t}$ (blue dashed line) of drone 1 in its body frame {B1}.
Figure 7.31 Comparison between the estimated linear acceleration bias $a^{B_2}_{2,bt}$ and angular velocity bias $\omega^{B_2}_{2,bt}$ of drone 2 (red dashed line) and the estimated linear acceleration $a^{B_2}_{2,t}$ and angular velocity $\omega^{B_2}_{2,t}$ (blue dashed line) of drone 2 in its body frame {B2}.
Figure 7.32 The visualization of 3D trajectories of drones from VLP-16 (dashed lines) and multi-drone VIO (markers) in the space-fixed frame {I0}.
Figure 7.33 Comparison of estimated position of drones from multi-drone VIO (markers) with the ground-truth position from VLP-16 (dashed lines) in the space-fixed frame {I0}.
Figure 7.34 Position error profiles of drones between multi-drone VIO and the ground-truth position from VLP-16 in the space-fixed frame {I0}.
Figure 7.35 The commanded linear and angular velocity of drone 0 (blue dashed line) and the estimated velocity $\dot{p}^{B_0}_{0,t}$ (red line) of drone 0 in its body frame {B0}.
Figure 7.36 The commanded linear and angular velocity of drone 1 (blue dashed line) and the estimated velocity $\dot{p}^{B_1}_{1,t}$ (red line) of drone 1 in its body frame {B1}.
Figure 7.37 The commanded linear and angular velocity of drone 2 (blue dashed line) and the estimated velocity $\dot{p}^{B_2}_{2,t}$ (red line) of drone 2 in its body frame {B2}.
Figure 7.38 Comparison between the estimated linear acceleration bias $a^{B_0}_{0,bt}$ and angular velocity bias $\omega^{B_0}_{0,bt}$ of drone 0 (red dashed line) and the estimated linear acceleration $a^{B_0}_{0,t}$ and angular velocity $\omega^{B_0}_{0,t}$ (blue dashed line) of drone 0 in its body frame {B0}.
Figure 7.39 Comparison between the estimated linear acceleration bias $a^{B_1}_{1,bt}$ and angular velocity bias $\omega^{B_1}_{1,bt}$ of drone 1 (red dashed line) and the estimated linear acceleration $a^{B_1}_{1,t}$ and angular velocity $\omega^{B_1}_{1,t}$ (blue dashed line) of drone 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . 170 Figure 7.40 Comparison between the estimated linear acceleration aB2 2,bt and angular velocity bias ωB2 2,bt of drone 2 (red dashed line) and the estimated linear acceleration aB2 2,t and angular velocity ωB2 2,t (blue dashed line) of drone 2 in its body frame {B2}. . . . . . . . . . . . . . . . . . . . . . . . . . . 171 Figure 7.41 The schematics of face odometry experiment with two drones and a human actor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173 Figure 7.42 The visualization of 3D trajectories of drones and the human actor from VLP-16 (dashed lines), multi-drone VIO and face odometry (markers) in the space fixed frame {I0}. . . . . . . . . . . . . . . . . . . . . . 174 Figure 7.43 Comparison of estimated position of drones from multi-drone VIO and the face odometry (markers) with the ground-truth position from VLP-16 (dashed lines) in the space fixed frame {I0}. . . . . . . . . . . . . . 174 Figure 7.44 Position error profiles of drones and the human actor between multidrone VIO and face odometry position estimates and the ground-truth position from VLP-16 in the space fixed frame {I0}. . . . . . . . . . . . . 175 Figure 7.45 The commanded linear and angular velocity of drone 0 (blue dashed line) and the estimated velocity ̇pB0 0,t (red line) of drone 0 in its body frame {B0}. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 Figure 7.46 The commanded linear and angular velocity of drone 1 (blue dashed line) and the estimated velocity ̇pB1 1,t (red line) of drone 1 in its body frame {B1}. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 Figure 7.47 The estimated inertial frame linear velocity ̇pIF f,t and face frame angular velocity ωF f,t of the face of the human actor (red line). . . . . . . 177 Figure 7.48 Comparison between the estimated linear acceleration aB0 0,bt and angular velocity bias ωB0 0,bt of drone 0 (red dashed line) and the estimated linear acceleration aB0 0,t and angular velocity ωB0 0,t (blue dashed line) of drone 0 in its body frame {B0}. . . . . . . . . . . . . . . . . . . . . . . . . . . 178 Figure 7.49 Comparison between the estimated linear acceleration aB1 1,bt and angular velocity bias ωB1 1,bt of drone 1 (red dashed line) and the estimated linear acceleration aB1 1,t and angular velocity ωB1 1,t (blue dashed line) of drone 1 in its body frame {B1}. . . . . . . . . . . . . . . . . . . . . . . . . . . 178 Figure 7.50 The schematics of the simulation for the formation control with three drones and a human actor. . . . . . . . . . . . . . . . . . . . . . . 180 Figure 7.51 The visualization of 3D trajectories of drones and the human actor in simulation with respect to the space fixed frame {I0} (K = −0.5I6). . 183 Figure 7.52 Comparison of the position of drones with the desired position in simulation with respect to the space fixed frame {I0} (K = −0.5I6). . . . 184 Figure 7.53 Position error profiles of drones and the desired position in simulation with respet to space fixed frame {I0} (K = −0.5I6). . . . . . . . . 184 Figure 7.54 Rotation error profiles of drones and the desired orientation in simulation (K = −0.5I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . 185 Figure 7.55 The linear velocity of drones in the inertial frame {I0} in simulation (K = −0.5I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
185 Figure 7.56 The angular velocity of drones in their body frame {Bi}, i = 0, 1, 2 in simulation (K = −0.5I6). . . . . . . . . . . . . . . . . . . . . . . . . 186 Figure 7.57 The norm of commanded linear velocity and angular velocity and the Lyapunov candidates hi, i = 0, 1, 2 for each drone in simulation (K = −0.5I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186 Figure 7.58 The visualization of 3D trajectories of drones and the human actor in simulation with respect to the space fixed frame {I0} (K = −I6). . . . 187 Figure 7.59 Comparison of the position of drones with the desired position in simulation with respect to the space fixed frame {I0} (K = −I6). . . . . 188 Figure 7.60 Position error profiles of drones and the desired position in simulation with respet to space fixed frame {I0} (K = −I6). . . . . . . . . . 188 Figure 7.61 Rotation error profiles of drones and the desired orientation in simulation (K = −1.0I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 Figure 7.62 The norm of commanded linear velocity and angular velocity and the Lyapunov candidates hi, i = 0, 1, 2 for each drone in simulation (K = −I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190 Figure 7.63 The visualization of 3D trajectories of drones and the human actor in simulation with respect to the space fixed frame {I0} (K = −2.0I6). . 191 Figure 7.64 Comparison of the position of drones with the desired position in simulation with respect to the space fixed frame {I0} (K = −2.0I6). . . . 192 Figure 7.65 Position error profiles of drones and the desired position in simulation with respet to space fixed frame {I0} (K = −2.0I6). . . . . . . . . 192 Figure 7.66 Rotation error profiles of drones and the desired orientation in simulation (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . 193 Figure 7.67 The norm of commanded linear velocity and angular velocity and the Lyapunov candidates hi, i = 0, 1, 2 for each drone in simulation (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193 Figure 7.68 The visualization of 3D trajectories of drones and the human actor in simulation with respect to the space fixed frame {I0} (K = −0.5I6). . 194 Figure 7.69 Comparison of the position of drones with the desired position in simulation with respect to the space fixed frame {I0} (K = −0.5I6). . . . 195 Figure 7.70 Position error profiles of drones and the desired position in simulation with respet to space fixed frame {I0} (K = −0.5I6). . . . . . . . . 195 Figure 7.71 The norm of commanded linear velocity and angular velocity and the Lyapunov candidates hi, i = 0, 1, 2 for each drone in simulation (K = −0.5I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196 Figure 7.72 The visualization of 3D trajectories of drones and the human actor in simulation with respect to the space fixed frame {I0} (K = −I6). . . . 196 Figure 7.73 Comparison of the position of drones with the desired position in simulation with respect to the space fixed frame {I0} (K = −I6). . . . . 197 Figure 7.74 Position error profiles of drones and the desired position in simulation with respet to space fixed frame {I0} (K = −I6). . . . . . . . . . 197 Figure 7.75 The norm of commanded linear velocity and angular velocity and the Lyapunov candidates hi, i = 0, 1, 2 for each drone in simulation (K = −I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
198 Figure 7.76 The visualization of 3D trajectories of drones and the human actor in simulation with respect to the space fixed frame {I0} (K = −2.0I6). . 199 Figure 7.77 Comparison of the position of drones with the desired position in simulation with respect to the space fixed frame {I0} (K = −2.0I6). . . . 199 Figure 7.78 Position error profiles of drones and the desired position in simulation with respet to space fixed frame {I0} (K = −2.0I6). . . . . . . . . 200 Figure 7.79 The norm of commanded linear velocity and angular velocity and the Lyapunov candidates hi, i = 0, 1, 2 for each drone in simulation (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200 Figure 7.80 The visualization of 3D trajectories of drones and the human actor in simulation with respect to the space fixed frame {I0} (K = −0.5I6). . 202 Figure 7.81 Comparison of the position of drones with the desired position in simulation with respect to the space fixed frame {I0} (K = −0.5I6). . . . 203 Figure 7.82 Position error profiles of drones and the desired position in simulation with respet to space fixed frame {I0} (K = −0.5I6). . . . . . . . . 203 Figure 7.83 The linear velocity of drones in the inertial frame {I0} in simulation (K = −0.5I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204 Figure 7.84 The angular velocity of drones in their body frame {Bi}, i = 0, 1, 2 in simulation (K = −0.5I6). . . . . . . . . . . . . . . . . . . . . . . . . 204 Figure 7.85 The norm of commanded linear velocity and angular velocity and the Lyapunov candidates hi, i = 0, 1, 2 for each drone in simulation (K = −0.5I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205 Figure 7.86 The visualization of 3D trajectories of drones and the human actor in simulation with respect to the space fixed frame {I0} (K = −I6). . . . 206 Figure 7.87 Comparison of the position of drones with the desired position in simulation with respect to the space fixed frame {I0} (K = −I6). . . . . 206 Figure 7.88 Position error profiles of drones and the desired position in simulation with respet to space fixed frame {I0} (K = −I6). . . . . . . . . . 207 Figure 7.89 The norm of commanded linear velocity and angular velocity and the Lyapunov candidates hi, i = 0, 1, 2 for each drone in simulation (K = −I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208 Figure 7.90 The visualization of 3D trajectories of drones and the human actor in simulation with respect to the space fixed frame {I0} (K = −2.0I6). . 208 Figure 7.91 Comparison of the position of drones with the desired position in simulation with respect to the space fixed frame {I0} (K = −2.0I6). . . . 209 Figure 7.92 Position error profiles of drones and the desired position in simulation with respet to space fixed frame {I0} (K = −2.0I6). . . . . . . . . 209 Figure 7.93 The norm of commanded linear velocity and angular velocity and the Lyapunov candidates hi, i = 0, 1, 2 for each drone in simulation (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210 Figure 7.94 The visualization of 3D trajectories of drones and the human actor in simulation with respect to the space fixed frame {I0} (K = −0.5I6). . 211 Figure 7.95 Comparison of the position of drones with the desired position in simulation with respect to the space fixed frame {I0} (K = −0.5I6). . . . 
212 Figure 7.96 Position error profiles of drones and the desired position in simulation with respet to space fixed frame {I0} (K = −0.5I6). . . . . . . . . 212 Figure 7.97 The linear velocity of drones in the inertial frame {I0} in simulation (K = −0.5I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213 Figure 7.98 The angular velocity of drones in their body frame {Bi}, i = 0, 1, 2 in simulation (K = −0.5I6). . . . . . . . . . . . . . . . . . . . . . . . . 214 Figure 7.99 The norm of commanded linear velocity and angular velocity and the Lyapunov candidates hi, i = 0, 1, 2 for each drone in simulation (K = −0.5I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214 Figure 7.100The visualization of 3D trajectories of drones and the human actor in simulation with respect to the space fixed frame {I0} (K = −I6). . . . 215 Figure 7.101Comparison of the position of drones with the desired position in simulation with respect to the space fixed frame {I0} (K = −I6). . . . . 215 Figure 7.102Position error profiles of drones and the desired position in simulation with respet to space fixed frame {I0} (K = −I6). . . . . . . . . . 216 Figure 7.103The linear velocity of drones in the inertial frame {I0} in simulation (K = −I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217 Figure 7.104The angular velocity of drones in their body frame {Bi}, i = 0, 1, 2 in simulation (K = −I6). . . . . . . . . . . . . . . . . . . . . . . . . . . 217 Figure 7.105The norm of commanded linear velocity and angular velocity and the Lyapunov candidates hi, i = 0, 1, 2 for each drone in simulation (K = −I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218 Figure 7.106The visualization of 3D trajectories of drones and the human actor in simulation with respect to the space fixed frame {I0} (K = −2.0I6). . 218 Figure 7.107Comparison of the position of drones with the desired position in simulation with respect to the space fixed frame {I0} (K = −2.0I6). . . . 219 Figure 7.108Position error profiles of drones and the desired position in simulation with respet to space fixed frame {I0} (K = −2.0I6). . . . . . . . . 219 Figure 7.109The linear velocity of drones in the inertial frame {I0} in simulation (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220 Figure 7.110The angular velocity of drones in their body frame {Bi}, i = 0, 1, 2 in simulation (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . 220 Figure 7.111 The norm of commanded linear velocity and angular velocity and the Lyapunov candidates hi, i = 0, 1, 2 for each drone in simulation (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221 Figure 7.112The visualization of 3D trajectories of drones and the human actor in simulation with respect to the space fixed frame {I0} (K = −0.5I6). . 222 Figure 7.113Comparison of the position of drones with the desired position in simulation with respect to the space fixed frame {I0} (K = −0.5I6). . . . 223 Figure 7.114Position error profiles of drones and the desired position in simulation with respet to space fixed frame {I0} (K = −0.5I6). . . . . . . . . 223 Figure 7.115The linear velocity of drones in the inertial frame {I0} in simulation (K = −0.5I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224 Figure 7.116The angular velocity of drones in their body frame {Bi}, i = 0, 1, 2 in simulation (K = −0.5I6). . . . . . . . . . . . . . . . . . . . . . . . . 
225 Figure 7.117The norm of commanded linear velocity and angular velocity and the Lyapunov candidates hi, i = 0, 1, 2 for each drone in simulation (K = −0.5I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225 Figure 7.118The visualization of 3D trajectories of drones and the human actor in simulation with respect to the space fixed frame {I0} (K = −1.0I6). . 226 Figure 7.119Comparison of the position of drones with the desired position in simulation with respect to the space fixed frame {I0} (K = −1.0I6). . . . 226 Figure 7.120Position error profiles of drones and the desired position in simulation with respet to space fixed frame {I0} (K = −1.0I6). . . . . . . . . 227 Figure 7.121The linear velocity of drones in the inertial frame {I0} in simulation (K = −I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228 Figure 7.122The angular velocity of drones in their body frame {Bi}, i = 0, 1, 2 in simulation (K = −I6). . . . . . . . . . . . . . . . . . . . . . . . . . . 228 Figure 7.123The norm of commanded linear velocity and angular velocity and the Lyapunov candidates hi, i = 0, 1, 2 for each drone in simulation (K = −I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229 Figure 7.124The visualization of 3D trajectories of drones and the human actor in simulation with respect to the space fixed frame {I0} (K = −2.0I6). . 229 Figure 7.125Comparison of the position of drones with the desired position in simulation with respect to the space fixed frame {I0} (K = −2.0I6). . . . 230 Figure 7.126Position error profiles of drones and the desired position in simulation with respet to space fixed frame {I0} (K = −2.0I6). . . . . . . . . 230 Figure 7.127The linear velocity of drones in the inertial frame {I0} in simulation (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231 Figure 7.128The angular velocity of drones in their body frame {Bi}, i = 0, 1, 2 in simulation (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . 232 Figure 7.129The norm of commanded linear velocity and angular velocity and the Lyapunov candidates hi, i = 0, 1, 2 for each drone in simulation (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232 Figure C.1 Simplified Kalman filter block diagram [147: Haykin 2014]. . . . 278 Figure D.1 A serie of snapshots of onboard camera of drone 0 in the hover experiment of multi-drone VIO. . . . . . . . . . . . . . . . . . . . . . . 279 Figure D.2 A serie of snapshots of onboard camera of drone 0 in the moving experiment of multi-drone VIO. . . . . . . . . . . . . . . . . . . . . . . 280 Figure D.3 A serie of snapshots of onboard camera of drone 0 and drone 1 in the experiment of face odometry. . . . . . . . . . . . . . . . . . . . . . . 281 Figure D.4 The linear velocity of drones in the inertial frame {I0} in simulation (K = −I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282 Figure D.5 The angular velocity of drones in their body frame {Bi}, i = 0, 1, 2 in simulation (K = −I6). . . . . . . . . . . . . . . . . . . . . . . . . . . 283 Figure D.6 The linear velocity of drones in the inertial frame {I0} (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283 Figure D.7 The angular velocity of drones in their body frame {Bi}, i = 0, 1, 2 (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284 Figure D.8 The linear velocity of drones in the inertial frame {I0} (K = −0.5I6). . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284 Figure D.9 The angular velocity of drones in their body frame {Bi}, i = 0, 1, 2 (K = −0.5I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285 Figure D.10 The linear velocity of drones in the inertial frame {I0} in simulation (K = −I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286 Figure D.11 The angular velocity of drones in their body frame {Bi}, i = 0, 1, 2 in simulation (K = −I6). . . . . . . . . . . . . . . . . . . . . . . . . . . 286 Figure D.12 The linear velocity of drones in the inertial frame {I0} in simulation (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287 Figure D.13 The angular velocity of drones in their body frame {Bi}, i = 0, 1, 2 in simulation (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . 287 Figure D.14 The linear velocity of drones in the inertial frame {I0} in simulation (K = −I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288 Figure D.15 The angular velocity of drones in their body frame {Bi}, i = 0, 1, 2 in simulation (K = −I6). . . . . . . . . . . . . . . . . . . . . . . . . . . 288 Figure D.16 The linear velocity of drones in the inertial frame {I0} in simulation (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289 Figure D.17 The angular velocity of drones in their body frame {Bi}, i = 0, 1, 2 in simulation (K = −2.0I6). . . . . . . . . . . . . . . . . . . . . . . . . 289 Table 2.1 The table of techniques involved in the surveyed literature are classified in three groups: perception, planning, and control. . . . . . . . . . 21 Table 4.1 The table of camera positioning parameters involved in the surveyed literature and the proposed method. . . . . . . . . . . . . . . . . . . . . . 83 Table 5.1 Face landmark coordinate constructed by [141: Yang and Yu 2002], and [142: Chen et al. 2014] and [143: Chung et al. 2015] with self-measured data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 Table 7.1 Odometry estimation comparison of ESKF-Q and ESKF [80: Solà 2017]. Position estimates are compared with Equation Equation (7.4), and orientation estimates are compared with IMU of Ryze Tello EDU (Tello). Outperforming results are marked in bold. . . . . . . . . . . . . . . . . . 144 | - |
| dc.language.iso | en | - |
| dc.subject | 卡爾曼濾波器 | zh_TW |
| dc.subject | 視覺慣性里程計 | zh_TW |
| dc.subject | 隊形控制 | zh_TW |
| dc.subject | 幾何控制 | zh_TW |
| dc.subject | 自動空中攝影 | zh_TW |
| dc.subject | Formation Control | en |
| dc.subject | Kalman Filter | en |
| dc.subject | Autonomous Aerial Cinematography | en |
| dc.subject | Geometric Control | en |
| dc.subject | Visual Inertial Odometry | en |
| dc.title | 基於誤差狀態卡爾曼濾波器之視覺慣性里程計算之無人機群分散式隊形控制於追蹤人物空中攝影 | zh_TW |
| dc.title | Error-State Kalman Filter Based Visual-Inertial Odometry and Distributive Formation Control of Unmanned Aerial Vehicles for Tracking Human Target in Aerial Cinematography | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-1 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 江明理;簡忠漢 | zh_TW |
| dc.contributor.oralexamcommittee | Ming-Li Chiang;Jong-Hann Jean | en |
| dc.subject.keyword | 卡爾曼濾波器,視覺慣性里程計,隊形控制,幾何控制,自動空中攝影 | zh_TW |
| dc.subject.keyword | Kalman Filter, Visual Inertial Odometry, Formation Control, Geometric Control, Autonomous Aerial Cinematography | en |
| dc.relation.page | 292 | - |
| dc.identifier.doi | 10.6342/NTU202400587 | - |
| dc.rights.note | 未授權 | - |
| dc.date.accepted | 2024-02-18 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 電機工程學系 | - |
| 顯示於系所單位: | 電機工程學系 | |
文件中的檔案:
| 檔案 | 大小 | 格式 | |
|---|---|---|---|
| ntu-112-1.pdf(未授權公開取用) | 50.57 MB | Adobe PDF | |
系統中的文件,除了特別指名其著作權條款之外,均受到著作權保護,並且保留所有的權利。
