  1. NTU Theses and Dissertations Repository
  2. College of Electrical Engineering and Computer Science
  3. Graduate Institute of Communication Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96136
Title: 舞蹈視界:自動化舞姿分析系統
Dance Vision: Automated Dance Posture Analysis System
Author: 李秉澤 (Bing-Ze Li)
Advisor: 丁建均 (Jian-Jiun Ding)
Keywords: Dance Scoring System, Pose Recognition, Data Augmentation, Multi-Model Fusion, Deep Learning
Publication Year: 2024
Degree: Master
Abstract:
In the contemporary digital landscape, the ability to engage in autonomous learning has become a crucial mechanism for individuals seeking to enhance their knowledge and skills, particularly in the field of sports education, where self-directed learning in dance performance is highly valued. However, the lack of professional assessment often impedes learners' understanding of the nuances of dance movements and complicates their efforts to evaluate the differences between their performances and established benchmarks. To address this challenge, the present study introduces two evaluative methodologies: the first is a multi-model fusion automated dance scoring system designed to assess dancers' performances and provide targeted recommendations for improvement, while the second involves comparing joint angles and joint coverage areas against a reference video to generate scores.
In the first methodology, the system primarily uses pose recognition technology to capture data on the dancer's joint movements. From the collected joint data, a set of evaluative metrics is developed, including Euclidean distance, Dynamic Time Warping (DTW) distance, and various statistical feature differences, to quantify the execution of dance movements. The dataset is further enriched through data augmentation techniques such as time shifting, data mixing, and smoothing with noise injection. These augmentation strategies not only simulate dance movements across diverse scenarios but also strengthen the model's ability to generalize across different contexts. To further improve the accuracy of the evaluations, the study combines three deep learning models: Long Short-Term Memory networks (LSTMs), Gated Recurrent Units (GRUs), and One-Dimensional Convolutional Neural Networks (1D CNNs). The predictions from these models are then combined using a weighted average, with each model's weight dynamically adjusted according to its performance on the training dataset. This ensures that the fused predictions leverage the strengths of each individual model.
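The DTW distance and the weighted fusion step described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the abstract does not specify the exact weighting scheme, so inverse-validation-error weighting is used here as an assumption, and `dtw_distance` is the textbook dynamic-programming formulation applied to per-frame joint coordinates.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """DTW distance between two joint trajectories of shape (T, D),
    where each row holds the joint coordinates of one frame."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # per-frame Euclidean distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame in seq_a
                                 cost[i, j - 1],      # skip a frame in seq_b
                                 cost[i - 1, j - 1])  # align the two frames
    return cost[n, m]

def fuse_predictions(preds, val_errors):
    """Weighted-average fusion of model predictions.

    preds: array of shape (n_models, ...) with each model's score prediction.
    val_errors: per-model error on held-out data; smaller error -> larger
    weight (inverse-error weighting, an illustrative assumption).
    """
    w = 1.0 / np.asarray(val_errors, dtype=float)
    w /= w.sum()
    return np.average(preds, axis=0, weights=w)
```

Because DTW allows non-linear time alignment, a dancer who performs the correct movements slightly faster or slower than the reference is not penalized for tempo alone, which is why it complements the frame-wise Euclidean distance.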
In the second methodology, the study presents an alternative automated dance movement evaluation system based on the analysis of human joint points. This system is founded on the computation of joint area scores and angle scores using specific formulas. The area score is derived from the polygonal area defined by the joints, requiring a thorough analysis of the variations in distance and position among the joints to assess the alignment of the dancer's movements with established standards. In contrast, the angle score is calculated based on the changes in the angles between joints, reflecting the accuracy and fluidity of the dancer's movements. These scores are then weighted, resulting in a comprehensive evaluation of the movement. By implementing these two systems, learners receive detailed assessments of their dance movements, facilitating more precise improvements during their self-directed learning efforts.
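The two scores in the second methodology can be sketched in a few lines. This is a hedged illustration under common conventions, not the thesis's exact formulas: the polygon area uses the shoelace formula over 2D joint keypoints, the joint angle is the angle at a middle joint between its two adjacent segments, and the equal weights in `movement_score` are placeholders rather than the values used in the study.

```python
import numpy as np

def polygon_area(points):
    """Shoelace formula: area of the polygon traced by joint keypoints (N, 2)."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by the segments b->a and b->c
    (e.g. the elbow angle from shoulder, elbow, and wrist keypoints)."""
    v1, v2 = a - b, c - b
    cos_ang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

def movement_score(area_score, angle_score, w_area=0.5, w_angle=0.5):
    # Weighted combination of the two scores; the weights here are
    # illustrative defaults, not the values chosen in the thesis.
    return w_area * area_score + w_angle * angle_score
```

In practice, each score would first be normalized against the reference video (e.g. by the relative deviation of the learner's area or angle from the reference value at the aligned frame) before the weighted combination is applied.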
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96136
DOI: 10.6342/NTU202404521
Full-text License: Authorized (open access worldwide)
Appears in Collections: Graduate Institute of Communication Engineering

Files in This Item:
File: ntu-113-1.pdf (2.26 MB, Adobe PDF)


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
