NTU Theses and Dissertations Repository › College of Engineering › Department of Mechanical Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/79461
Full metadata record
DC Field: Value [Language]
dc.contributor.advisor: 詹魁元 (Kuei-Yuan Chan)
dc.contributor.author: Yu-Ting Chen [en]
dc.contributor.author: 陳宥廷 [zh_TW]
dc.date.accessioned: 2022-11-23T09:01:05Z
dc.date.available: 2021-11-03
dc.date.available: 2022-11-23T09:01:05Z
dc.date.copyright: 2021-11-03
dc.date.issued: 2021
dc.date.submitted: 2021-10-27
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/79461
dc.description.abstract [zh_TW]: Human action recognition has applications in rehabilitation, long-term care, monitoring, entertainment, and human-machine interaction. Its data are mostly presented as time series and, depending on the source, can be divided into vision-based and wearable-sensor-based approaches; it is a popular research topic in big data and artificial intelligence. This study aims to understand human movement in an intuitive way, without relying on neural networks or machine-learning methods, by analyzing the joint-angle trajectories of human motion. We focus on similarity measurement and temporal alignment, using time-series distance measures including Euclidean distance, Dynamic Time Warping, Move-Split-Merge, and the Fisher-Rao metric, and analyze motion curves with multivariate functional principal component analysis (mFPCA). The thesis consists of three parts. The first part is motion data collection: movement data are captured with a Vicon motion capture system, a human model is built in OpenSim, and the flexion-extension rotation angles of six lower-limb joints are computed; the time series of these six angles describe lower-body motion. The second part concerns the construction and analysis of action templates. The study covers 10 common lower-body actions; temporal alignment methods adjust the motion samples according to different distance measures, with the goal of reducing temporal offsets and speed variation among samples and aligning their main profiles. Action templates are then built from the aligned samples, and the distance distributions of the different sample classes are analyzed. The third part is a template-matching experiment. We propose a similarity scoring method based on time-series distance measures that combines a softmax function with a bell-shaped function to classify actions while effectively rejecting outliers. Four motion scenarios are designed as tests, including artificially generated ones, in which motion samples are randomly generated via principal component analysis and time-warping methods. The results show that the proposed similarity scoring method is feasible, with Dynamic Time Warping (DTW) performing best, maintaining its performance even in noisy scenarios.
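The abstract names Dynamic Time Warping as the best-performing distance measure. For orientation, a minimal textbook DTW sketch in Python (generic dynamic programming; the absolute-difference local cost is an assumption, and this is not the thesis's implementation):

```python
def dtw_distance(a, b):
    """Accumulated cost of the optimal warping path between two
    1-D sequences, using |a_i - b_j| as the local cost.
    Textbook sketch, not the thesis's implementation."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = min cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Unlike the Euclidean distance, DTW can match one sample of `a` against several consecutive samples of `b`, so two sequences that differ only in local speed can still reach zero distance, e.g. `dtw_distance([1, 2, 3], [1, 2, 2, 3])`.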
dc.description.provenance [en]: Made available in DSpace on 2022-11-23T09:01:05Z (GMT). No. of bitstreams: 1. U0001-1210202123024300.pdf: 26158530 bytes, checksum: 3806232a4a775a6d1c8731a25c0a1cf1 (MD5). Previous issue date: 2021
dc.description.tableofcontents:
Thesis Committee Certification i
Acknowledgements ii
Abstract (Chinese) iv
Abstract v
Table of Contents vii
List of Figures xi
List of Tables xv
Chapter 1 Introduction 1
1.1 Preface 1
1.2 Human Motion Variation 2
1.3 Research Objectives 3
1.4 Thesis Organization 4
Chapter 2 Literature Review 6
2.1 Human Action Recognition 6
2.1.1 Data Measurement and Formats 7
2.1.2 Feature Extraction 8
2.1.3 Learning and Classification Methods 9
2.1.4 Main Challenges 10
2.2 Time Series Analysis 10
2.2.1 Distance and Similarity 11
2.2.2 Temporal Alignment 11
2.2.3 Outlier Detection 12
2.3 Human Motion Analysis 13
2.3.1 Motion Style 13
2.3.2 Hierarchical Human Models 14
2.3.3 Motion Synthesis 14
Chapter 3 Methodology 15
3.1 Template Matching 16
3.1.1 Time-Series-Based Template Matching 16
3.1.2 Similarity Scoring 17
3.2 Distance Measures 19
3.2.1 Euclidean Distance 19
3.2.2 Dynamic Time Warping 21
3.2.3 Move-Split-Merge 22
3.2.4 Fisher-Rao Metric 23
3.3 Temporal Alignment 26
3.3.1 Alignment Paths 27
3.3.2 Time-Warping Functions 28
3.3.3 Elastic Shape Analysis 29
3.4 Functional Principal Component Analysis 30
Chapter 4 Experiments and Data Processing 33
4.1 Experimental Setup 34
4.1.1 Motion Capture System 34
4.1.2 Motion Experiments 35
4.2 Human Motion Analysis 38
4.2.1 Human Skeletal Model 38
4.2.2 Inverse Kinematics 39
4.2.3 Motion Samples 40
4.2.4 Error Discussion 43
Chapter 5 Template Construction and Analysis 44
5.1 Sample Alignment 45
5.2 Sample Distance Distributions 54
5.2.1 Summary 58
Chapter 6 Scenario Analysis 60
6.1 Motion Generation 61
6.1.1 mFPCA Sample Generation 61
6.1.2 Time Warping 61
6.2 Motion Scenarios 62
6.3 Template Matching Results 63
6.3.1 Summary 68
Chapter 7 Conclusions and Future Work 69
7.1 Conclusions and Contributions 69
7.2 Suggestions for Future Research 71
References 72
Appendix A OpenSim Parameters 83
Appendix B Principal Components of Action Templates 85
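Section 3.1.2 of the outline covers similarity scoring, which the abstract describes as combining a softmax function with a bell-shaped function over time-series distances. One hypothetical way such a score could be composed (the Gaussian bell, the `sigma` parameter, and the composition order are illustrative assumptions, not the thesis's actual formulation):

```python
import math

def similarity_scores(distances, sigma=1.0):
    """Hypothetical distance-to-similarity scoring sketch.

    Maps each template distance through a Gaussian bell function
    (values in (0, 1], peaking at distance 0), then normalizes the
    bell values with a numerically stable softmax to obtain
    class scores. Returns (bell values, softmax scores)."""
    bell = [math.exp(-(d / sigma) ** 2) for d in distances]
    mx = max(bell)
    exps = [math.exp(b - mx) for b in bell]
    total = sum(exps)
    return bell, [e / total for e in exps]
```

Since softmax always sums to 1, outlier rejection in such a scheme would have to come from the raw bell values: a sample far from every template yields uniformly small bell values, which can be thresholded before the softmax picks a class.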
dc.language.iso: zh-TW
dc.title: 基於距離之時間序列分析與模板匹配應用於人體下肢動作識別 [zh_TW]
dc.title: Lower Body Action Recognition Using Distance-Based Time Series Analysis and Template Matching [en]
dc.date.schoolyear: 109-2
dc.description.degree: Master's (碩士)
dc.contributor.oralexamcommittee: 顏家鈺 (Hsin-Tsai Liu), 徐瑋勵 (Chih-Yang Tseng)
dc.subject.keyword [zh_TW]: 人體下肢運動, 動作識別, 相似度量測, 時間序列, 模板匹配, 動態時間扭曲, Move-Split-Merge, 費雪拉奧度量, 時間歸整, 多變量泛函主成分分析, 人體運動生成
dc.subject.keyword [en]: Human Lower Limb Motion, Action Recognition, Similarity Measurement, Time Series, Template Matching, Dynamic Time Warping, Move-Split-Merge, Fisher-Rao Metric, Temporal Alignment, Multivariate Functional Principal Component Analysis, Human Motion Generation
dc.relation.page: 89
dc.identifier.doi: 10.6342/NTU202103677
dc.rights.note: Consent to license granted (worldwide open access)
dc.date.accepted: 2021-10-28
dc.contributor.author-college: 工學院 (College of Engineering) [zh_TW]
dc.contributor.author-dept: 機械工程學研究所 (Graduate Institute of Mechanical Engineering) [zh_TW]
Appears in Collections: Department of Mechanical Engineering

Files in This Item:
File | Size | Format
U0001-1210202123024300.pdf | 25.55 MB | Adobe PDF
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
