NTU Theses and Dissertations Repository › 電機資訊學院 (College of Electrical Engineering and Computer Science) › 電機工程學系 (Department of Electrical Engineering)
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/52254
Full metadata record
DC field  Value  Language
dc.contributor.advisor  傅立成 (Li-Chen Fu)
dc.contributor.author  Yi-Lian Chen  en
dc.contributor.author  陳奕璉  zh_TW
dc.date.accessioned  2021-06-15T16:10:22Z
dc.date.available  2023-08-11
dc.date.copyright  2020-08-26
dc.date.issued  2020
dc.date.submitted  2020-08-12
dc.identifier.uri  http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/52254
dc.description.abstract  In recent years, motion capture systems have attracted considerable development and attention because of their wide range of applications, such as animation production, sports, and medicine. In rehabilitation, motion capture systems are often used to record patients' movements during rehabilitation sessions, providing a basis for quantifying changes before and after treatment and offering physicians more objective reference data.
Optical motion capture systems require cameras installed in a fixed space and are therefore mostly limited to laboratory use. Systems built around inertial sensors are not restricted to a particular venue, but compared with optical methods they tend to produce less accurate measurements; improving their accuracy requires higher-grade hardware and more sensors to reconstruct a more complete posture, which raises costs.
In the related literature, most works combine different sensor-fusion techniques and algorithms with kinematic models and human joint constraints to mitigate the inaccuracy of inertial tracking, but these approaches are difficult to transfer across system architectures. After analyzing the state-of-the-art human posture tracking literature, we found that most current studies suffer from sensor drift, which degrades tracking accuracy.
Moreover, during long-term wear the sensors are affected by signal noise and inaccurate gyroscope estimates, producing drift. For these reasons, this thesis proposes a deep learning architecture that uses the information collected by inertial sensors to predict the relative 3D positions of the joints in future motion sequences. By comparing the model's predictions against the raw signal, the tracking error incurred by relying solely on the sensors can be reduced. In the final experiment, lower-limb walking trajectories are used to verify that the proposed method alleviates the sensor drift problem. As future work, the method will be applied in real rehabilitation settings to validate the performance of the model and system.
zh_TW
dc.description.abstract  In recent years, motion capture systems have received a great deal of attention because of their wide range of applications, such as film animation, sports, and medicine. In the field of rehabilitation, motion capture systems are often used to collect a patient's motion data while he or she performs rehabilitation tasks. These data can be used to quantify the effectiveness of rehabilitation and provide physicians with more objective reference information.
However, current commercial motion capture systems are not widely used. Optical systems must be set up in a fixed area and are therefore mostly suitable for laboratory use. An alternative class of systems adopts inertial sensors as its core. Although inertial systems are not limited to a particular venue, their measurements are less accurate than those of optical motion capture. Improving their accuracy requires higher-specification hardware and more sensors to reconstruct a more complete posture, which also increases the cost to users.
Most related work on inertial sensing combines sensor-fusion techniques and algorithms with kinematic models and human joint constraints to increase tracking accuracy. After a comprehensive survey of the human posture tracking literature, we also found that most current studies suffer from sensor drift, which causes inaccurate tracking.
In addition, sensor measurements are affected by signal noise, bias, and inaccurate gyroscope estimates during long-term wear, which can cause drift. For these reasons, this thesis proposes a deep learning architecture that uses the information collected by inertial sensors to predict the relative 3D positions of the lower-limb joints in future motion sequences. The motion predicted by the deep learning model is then used to reduce the tracking error caused by relying only on the inertial sensors. In the final experiment, lower-limb walking trajectories are used to verify that the proposed method tracks more accurately. In future work, the effectiveness of the proposed model and system for rehabilitation will be validated through clinical studies.
en
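The abstract attributes inertial tracking error to gyroscope noise and bias accumulating during long-term wear. A minimal numeric sketch of that mechanism (the bias magnitude, sample rate, and noise level here are illustrative assumptions, not values from the thesis): integrating a rate signal that carries a small constant bias turns the bias into a linearly growing orientation error.

```python
import numpy as np

# Hypothetical values: a 0.5 deg/s constant gyroscope bias sampled at 100 Hz
# while the sensor is actually stationary (true angular rate = 0).
fs = 100.0                              # sample rate (Hz)
bias = 0.5                              # constant gyro bias (deg/s)
t = np.arange(0, 60, 1.0 / fs)          # 60 s of data

rng = np.random.default_rng(0)
measured_rate = bias + rng.normal(0.0, 0.1, t.size)   # bias + white noise

# Orientation comes from integrating the rate, so the bias integrates into
# a drift that grows without bound: ~0.5 deg after 1 s, ~30 deg after 60 s.
angle_error = np.cumsum(measured_rate) / fs

print(f"drift after 1 s:  {angle_error[int(fs) - 1]:.2f} deg")
print(f"drift after 60 s: {angle_error[-1]:.2f} deg")
```

This is why purely sensor-driven orientation estimates degrade over time, and why the thesis compares model-predicted motion against the raw signal to bound the error.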
dc.description.provenance  Made available in DSpace on 2021-06-15T16:10:22Z (GMT). No. of bitstreams: 1
U0001-0708202010521400.pdf: 12079371 bytes, checksum: 039247704cd22801b99eefa0877ad3fd (MD5)
Previous issue date: 2020
en
dc.description.tableofcontents  Acknowledgements I
Abstract (in Chinese) II
ABSTRACT III
TABLE OF CONTENTS V
LIST OF FIGURES VII
LIST OF TABLES XI
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Literature Review 3
1.2.1 Motion tracking 3
1.2.2 Human motion tracking based on IMU issue 14
1.2.3 Deep learning model for motion prediction 20
1.3 Contributions 28
1.4 Thesis Organization 28
Chapter 2 Preliminaries 30
2.1 Inertial Measurement Units (IMU) sensors 30
2.2 Kinematic Model 34
2.2.1 Reality human degree of freedom and joints 35
2.2.2 Kinematic chain of human body 37
2.2.3 Denavit–Hartenberg parameters 39
2.3 Deep Learning Model 44
2.3.1 Convolutional neural network 44
2.3.2 Recurrent neural network 46
2.3.3 Graph convolution network 50
Chapter 3 Design Motion Trajectory Prediction framework 53
3.1 Model Lower Limbs 53
3.1.1 IMU system and signal pre-processing 54
3.1.2 Kinematic model of lower limb 56
3.2 Deep Learning for Motion Prediction 60
3.2.1 Training dataset 61
3.2.2 Graph convolution network model 62
3.3 Loss Function for Training 65
3.4 Fine Tuning of the Model 68
Chapter 4 Experiments and Results 70
4.1 IMU Sensor Setting and Data Collecting 70
4.2 Signal Pre-processing 71
4.3 Kinematic Model Experiment 73
4.4 Deep Learning Model Experiment 76
4.4.1 Configuration 77
4.4.2 Human3.6m dataset 77
4.4.3 Implementation detail 78
4.4.4 Deep learning model experiment 80
4.5 IMU data combine with deep learning model 83
Chapter 5 Conclusion 87
REFERENCE 89
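Section 2.3.3 of the contents covers graph convolution networks, and Chapter 3 applies one to lower-limb joints. As a rough illustration only — the joint count, edge list, feature sizes, and the symmetric-normalization propagation rule below are generic assumptions in the style of Kipf and Welling, not the thesis's actual model — a single GCN layer over a skeleton graph can be sketched as:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 @ H @ W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)    # propagate neighbors, then ReLU

# Toy skeleton: 7 lower-limb joints (pelvis plus two hip-knee-ankle chains),
# each carrying a 3-D position feature; the weight matrix is a random
# placeholder standing in for learned parameters.
edges = [(0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 6)]
A = np.zeros((7, 7))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

rng = np.random.default_rng(0)
H = rng.normal(size=(7, 3))     # per-joint input features
W = rng.normal(size=(3, 16))    # "learnable" weights (random here)

H_next = gcn_layer(A, H, W)
print(H_next.shape)             # each joint now has a 16-D feature
```

Each joint's output mixes its own features with those of its skeletal neighbors, which is what lets the network exploit kinematic structure rather than treating joints independently.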
dc.language.iso  en
dc.subject  姿態估測  zh_TW
dc.subject  深度學習  zh_TW
dc.subject  動作追蹤軌跡  zh_TW
dc.subject  下肢關節軌跡估測  zh_TW
dc.subject  lower limb joint trajectory estimation  en
dc.subject  posture estimation  en
dc.subject  posture tracking trajectory  en
dc.subject  deep learning  en
dc.title  基於慣性傳感器的圖卷積網路應用於下肢運動軌跡估測  zh_TW
dc.title  IMU-based Estimation of Lower Limb Motion Trajectory with Graph Convolution Network  en
dc.type  Thesis
dc.date.schoolyear  108-2
dc.description.degree  碩士 (Master)
dc.contributor.oralexamcommittee  陳文翔 (Wen-Shiang Chen), 賴金鑫 (Jin-Shin Lai), 梁蕙雯 (Huey-Wen Liang), 盧璐 (Lu Lu)
dc.subject.keyword  深度學習, 姿態估測, 動作追蹤軌跡, 下肢關節軌跡估測  zh_TW
dc.subject.keyword  deep learning, posture estimation, posture tracking trajectory, lower limb joint trajectory estimation  en
dc.relation.page  94
dc.identifier.doi  10.6342/NTU202002604
dc.rights.note  有償授權 (authorized for a fee)
dc.date.accepted  2020-08-13
dc.contributor.author-college  電機資訊學院 (College of Electrical Engineering and Computer Science)  zh_TW
dc.contributor.author-dept  電機工程學研究所 (Graduate Institute of Electrical Engineering)  zh_TW
Appears in collections: 電機工程學系 (Department of Electrical Engineering)

Files in this item:
File  Size  Format
U0001-0708202010521400.pdf (restricted, not publicly available)  11.8 MB  Adobe PDF

