Please use this Handle URI to cite this document:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/89884
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 呂東武 | zh_TW |
dc.contributor.advisor | Tung-Wu Lu | en |
dc.contributor.author | 李尚宸 | zh_TW |
dc.contributor.author | Shang-Chen Li | en |
dc.date.accessioned | 2023-09-22T16:32:12Z | - |
dc.date.available | 2023-11-09 | - |
dc.date.copyright | 2023-09-22 | - |
dc.date.issued | 2023 | - |
dc.date.submitted | 2023-08-11 | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/89884 | - |
dc.description.abstract | 動作捕捉系統是一項活躍在生物力學、電腦動畫、機器人學以及軍事等領域的技術,可以紀錄物體或人體的三維運動學資訊,並用於量化分析與姿態重現。雖然以高畫素紅外線攝影機搭配光學標記的動作捕捉系統已趨成熟,並有著毫米級的精度,然而其高昂的成本、資料分析複雜度以及硬體設置難度較高,難以在專業實驗室以外的場域操作,對於各種使用情境的彈性亦相當有限。
隨著人工智慧技術在影像分析領域的日益進步,以深度學習方式訓練之人體姿態估計模型可在低成本的硬體設置下達成人體關節點估計;輸入來源可為一般相機所拍攝的彩色影像或是深度攝影機所取得的深度影像。相對於紅外線立體攝影術而言,這類動作捕捉技術在使用上的彈性更高、成本更低,且省去了安裝光學標記及後續標註的程序,但提高泛用性的同時也會造成較低的捕捉精度。 鑒於現今移動裝置的普及率與硬體等級逐漸提高,這些裝置多具有高精度的相機、慣性感測器、連線協作以及深度學習模型運算能力,可在系統中以邊緣運算的方式將複雜計算分工進行,提高運作效率。 本研究旨在評估以多台移動裝置共同實現的動作捕捉系統之精度與參數最佳化。裝置分別自不同角度、不同距離等參數,使用深度學習模型來估計二維人體關節點,重建出各關節點之三維座標,並以全域最佳化方式修正人體姿態的重建誤差,提高估計精度。 | zh_TW |
dc.description.abstract | Motion capture systems are widely used in robotics, biomechanics, computer animation, and the game industry. They record the kinematics of humans or objects for the purposes of analysis and reproduction.
Although motion capture systems based on infrared stereophotogrammetry and optical markers are considered mature and offer millimeter-level accuracy, their high cost, complex data analysis, and demanding hardware setup make them difficult to use outside the laboratory. Markerless human pose estimation algorithms can reconstruct 3D human poses with low-cost hardware, but their accuracy is lower and the information they provide is limited. With the growing availability and capability of mobile devices, a mobile-device-based motion capture system has become feasible. This research collects images and inertial data from multiple mobile devices, recognizes planar markers on the ground, estimates keypoints of the human body with a deep learning model, and reconstructs the subject's motion in 3D. The purpose of this study is to evaluate the accuracy, and optimize the parameters, of a motion capture system implemented with multiple mobile devices: the devices, placed at different angles and distances and running different deep learning models, estimate the 2D joint points of the human body; the 3D coordinates of each joint point are then reconstructed, and reconstruction errors in the body posture are corrected through global optimization to improve estimation accuracy. | en |
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-09-22T16:32:12Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2023-09-22T16:32:12Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | Acknowledgements I
Chinese Abstract II
Abstract III
Table of Contents IV
List of Figures VI
List of Tables VIII
Chapter 1 Introduction 1
Section 1 Research Background 1
Section 2 Motion Capture Systems 2
Section 3 Mobile Devices 3
Section 4 Human Pose Estimation 4
Section 5 OpenCap 4
Section 6 Research Objectives 4
Chapter 2 Materials and Methods 6
Section 1 Mobile Device Pose Estimation 6
1. Camera Parameters 6
2. Definition of the Generalized Coordinate System 7
3. Building the Marker Map 11
4. IMU Pose Estimation 12
Section 2 Joint Keypoint Estimation 12
Section 3 Multi-Device Synchronization and 3D Reconstruction 13
1. Device Data Synchronization 13
2. 3D Reconstruction 14
3. Global Optimization 14
Section 4 System Architecture 15
Section 5 Experimental Procedure 16
Chapter 3 Results 22
Chapter 4 Discussion 30
Chapter 5 Conclusions 31
References 32 | - |
dc.language.iso | zh_TW | - |
dc.title | 以行動裝置與人工智慧為基礎之三維動作捕捉系統之精度評估與參數最佳化 | zh_TW |
dc.title | Accuracy Assessment and Parameter Optimization of a 3D AI-Based Mobile Motion Capture System | en |
dc.type | Thesis | - |
dc.date.schoolyear | 111-2 | - |
dc.description.degree | Master | - |
dc.contributor.oralexamcommittee | 林正忠;彭志維 | zh_TW |
dc.contributor.oralexamcommittee | Zheng-Zhong Lin;Chih-Wei Peng | en |
dc.subject.keyword | 動作捕捉,行動裝置,相機姿態估計,三維重建,深度學習模型,全域最佳化,參數最佳化 | zh_TW |
dc.subject.keyword | Motion capture,mobile device,camera pose estimation,3D reconstruction,deep learning models,global optimization method,parameter optimization | en |
dc.relation.page | 33 | - |
dc.identifier.doi | 10.6342/NTU202303689 | - |
dc.rights.note | Authorized (restricted to campus access) | - |
dc.date.accepted | 2023-08-12 | - |
dc.contributor.author-college | College of Engineering | - |
dc.contributor.author-dept | Department of Biomedical Engineering | - |
dc.date.embargo-lift | 2028-08-08 | - |
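The abstracts above describe reconstructing 3D joint coordinates from 2D keypoints observed by multiple devices, then correcting the pose with a global optimization step. The sketch below is a minimal, hypothetical illustration of those two steps in NumPy, not the thesis's actual pipeline: linear (DLT) triangulation of one point from two synthetic camera views, followed by a toy segment-length correction. All camera parameters, function names, and the length constraint are illustrative assumptions.

```python
import numpy as np

def project(P, X):
    """Project a 3D point X through a 3x4 projection matrix P to pixel coords."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point observed in two views.

    Each 2D observation contributes two rows of the homogeneous system
    A X = 0; the reconstructed point is the null vector of A (last row
    of V^T from the SVD).
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def enforce_segment_length(parent, child, length):
    """Rescale the parent->child vector to a known segment length.

    A toy stand-in for the global-optimization step, which in the thesis
    enforces anatomical joint constraints on the reconstructed skeleton.
    """
    v = child - parent
    return parent + v * (length / np.linalg.norm(v))

# Two synthetic pinhole cameras sharing intrinsics K, with a 1 m baseline
# along the x-axis (illustrative values, not calibrated device parameters).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                    # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])    # camera at x = 1

X_true = np.array([0.2, -0.1, 3.0])   # a "joint" 3 m in front of the rig
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))

# Toy constraint: pull a noisy "knee" estimate onto a 0.4 m "thigh" from the "hip".
hip = np.zeros(3)
knee_fixed = enforce_segment_length(hip, np.array([0.0, -0.42, 0.05]), 0.4)
```

A real multi-device pipeline would triangulate every keypoint from all available views and solve a joint-constrained least-squares problem over the whole skeleton (in the spirit of the global optimization method the abstracts mention), rather than rescaling one segment at a time.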
Appears in Collections: | Graduate Institute of Biomedical Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-111-2.pdf (currently not authorized for public access) | 2.29 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.