Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/66805
Full metadata record
DC Field: Value [Language]
dc.contributor.advisor: 鄭士康 (Shyh-Kang Jeng)
dc.contributor.author: Ting-Hao Liao [en]
dc.contributor.author: 廖廷浩 [zh_TW]
dc.date.accessioned: 2021-06-17T01:08:40Z
dc.date.available: 2021-06-30
dc.date.copyright: 2021-03-11
dc.date.issued: 2021
dc.date.submitted: 2021-02-05
dc.identifier.citation:
[1] A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang. NTU RGB+D: A large scale dataset for 3D human activity analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[2] Z. Cao, G. Hidalgo, T. Simon, S. Wei, and Y. Sheikh. OpenPose: Realtime multi-person 2D pose estimation using part affinity fields. CoRR, abs/1812.08008, 2018.
[3] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[4] Y. Du, W. Wang, and L. Wang. Hierarchical recurrent neural network for skeleton based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1110–1118. IEEE Computer Society, 2015.
[5] H. Fang, S. Xie, and C. Lu. RMPE: Regional multi-person pose estimation. CoRR, abs/1612.00137, 2016.
[6] G. Hu, B. Cui, and S. Yu. Skeleton-based action recognition with synchronous local and non-local spatio-temporal learning and frequency attention. CoRR, abs/1811.04237, 2018.
[7] Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation, 2015.
[8] L. Shi, Y. Zhang, J. Cheng, and H. Lu. Skeleton-based action recognition with directed graph neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[9] C. Li, Q. Zhong, D. Xie, and S. Pu. Skeleton-based action recognition with convolutional neural networks. CoRR, abs/1704.07595, 2017.
[10] C. Li, Q. Zhong, D. Xie, and S. Pu. Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation. CoRR, abs/1804.06055, 2018.
[11] S. Li, W. Li, C. Cook, C. Zhu, and Y. Gao. Independently recurrent neural network (IndRNN): Building a longer and deeper RNN. CoRR, abs/1803.04831, 2018.
[12] S. Lin, Y. Lin, C. Chen, and Y. Hung. Recognizing human actions with outlier frames by observation filtering and completion. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 13(3):28, 2017.
[13] C. Liu, Y. Hu, Y. Li, S. Song, and J. Liu. PKU-MMD: A large scale benchmark for continuous multi-modal human action understanding. arXiv preprint arXiv:1703.07475, 2017.
[14] J. Liu, A. Shahroudy, D. Xu, and G. Wang. Spatio-temporal LSTM with trust gates for 3D human action recognition. CoRR, abs/1607.07043, 2016.
[15] Z. Liu, H. Zhang, Z. Chen, Z. Wang, and W. Ouyang. Disentangling and unifying graph convolutions for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
[16] S. Yan, Y. Xiong, and D. Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In AAAI Conference on Artificial Intelligence (AAAI), 2018.
[17] L. Shi, Y. Zhang, J. Cheng, and H. Lu. Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[18] S. Song, C. Lan, J. Xing, W. Zeng, and J. Liu. An end-to-end spatio-temporal attention model for human action recognition from skeleton data. CoRR, abs/1611.06067, 2016.
[19] T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, et al. Microsoft COCO: Common objects in context. In European Conference on Computer Vision (ECCV), 2014.
[20] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
[21] J. Wang, Z. Liu, Y. Wu, and J. Yuan. Mining actionlet ensemble for action recognition with depth cameras. In 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1290–1297, 2012.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/66805
dc.description.abstract: There are currently many datasets for skeleton-based action recognition, but they differ from one another in many respects, such as camera viewpoints, human keypoint definitions, and action categories. Because a single dataset usually serves as both the training and test set, knowledge from other datasets is rarely exploited. To solve this problem, we propose a cross-domain knowledge transfer model based on a gradient reversal layer and graph convolutional networks that effectively transforms and exploits knowledge from one dataset to improve results on another. In experiments transferring from NTU-RGB+D 60 to other datasets, the proposed method substantially improves accuracy on the target datasets and outperforms the current best spatio-temporal graph convolutional algorithms trained only on the target dataset, demonstrating its effectiveness. [zh_TW]
dc.description.abstract: Many different datasets exist for skeleton-based action recognition; however, because these datasets differ in many respects, including viewpoints, the number of available joints per skeleton, and the types of actions, an individual model can only be trained for each dataset, and knowledge cannot be effectively leveraged from one dataset to another. To address this issue, we propose a cross-domain knowledge transfer module based on a gradient reversal layer for graph convolutional networks that effectively transfers knowledge from one domain to another. In extensive experiments transferring from NTU-RGB+D 60 to other datasets, the proposed approach achieves significantly improved results with several state-of-the-art spatio-temporal graph convolutional networks, compared with the same networks trained on the target dataset only, which also demonstrates the effectiveness of the proposed approach. [en]
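The record itself contains no code, but the mechanism the abstract names, a gradient reversal layer (GRL) in the style of Ganin and Lempitsky [7], is compact enough to sketch. Below is a minimal PyTorch sketch, not the thesis's implementation: the forward pass is the identity, the backward pass multiplies the gradient by -lambda, so a domain classifier trained through it pushes the shared GCN features toward domain invariance. The names grad_reverse, gcn_backbone, action_head, and domain_head, and the lambda value, are illustrative assumptions, not identifiers from the thesis.

import torch
from torch.autograd import Function

class GradientReversal(Function):
    """Identity in the forward pass; gradient scaled by -lambd in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)  # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed, scaled gradient flows back into the feature extractor;
        # no gradient is needed for the lambd argument.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradientReversal.apply(x, lambd)

# Illustrative wiring (hypothetical names): shared GCN features feed the
# action classifier directly and the domain classifier through the GRL, so
# minimizing the domain loss trains the backbone to confuse the domain
# classifier, i.e., to produce domain-invariant skeleton features.
# features      = gcn_backbone(skeleton_batch)                   # (N, C)
# action_logits = action_head(features)
# domain_logits = domain_head(grad_reverse(features, lambd=0.5))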
dc.description.provenance: Made available in DSpace on 2021-06-17T01:08:40Z (GMT). No. of bitstreams: 1. U0001-0202202109514600.pdf: 2931975 bytes, checksum: 09cf8de77644f3db34b9ff3113863daa (MD5). Previous issue date: 2021. [en]
dc.description.tableofcontents:
Verification Letter from the Oral Examination Committee i
Acknowledgements ii
Abstract (in Chinese) iii
Abstract iv
Contents v
List of Figures vii
List of Tables ix
Chapter 1 Introduction 1
1.1 Background and Motivation 1
1.2 Research Objective 3
1.3 Contribution 4
Chapter 2 Related Work 5
2.1 Skeleton-based Action Recognition 5
2.2 Graph Convolutional Network on Skeleton Graphs 6
2.3 Domain Adaptation 6
Chapter 3 The Proposed Approach 7
3.1 Skeletal Adjacency Matrix and Graph Convolution Network 7
3.2 Algorithm Overview 9
3.3 Cross-Domain Knowledge Transfer Layer 10
3.4 Domain Adaptation 10
3.5 Action Classification 11
3.6 Overall Performance 11
Chapter 4 Experiment 12
4.1 Datasets 12
4.2 Implementation Detail 13
4.3 Overall Performance 15
4.4 Feature Alignment 16
4.5 Ablation Study 16
4.6 Relationship of Skeletons 18
Chapter 5 Conclusion 20
References 21
dc.language.iso: en
dc.subject: Action Recognition (動作識別) [zh_TW]
dc.subject: Cross-domain (跨域) [zh_TW]
dc.subject: Graph Convolution (圖形卷積) [zh_TW]
dc.subject: Skeleton (骨架) [zh_TW]
dc.subject: Transfer Learning (遷移學習) [zh_TW]
dc.subject: Action Recognition [en]
dc.subject: Skeleton [en]
dc.subject: Transfer [en]
dc.subject: Cross-domain [en]
dc.title: 跨域知識遷移學習於人體骨架動作辨識 (Cross-Domain Knowledge Transfer for Skeleton-Based Action Recognition) [zh_TW]
dc.title: Cross-Domain Knowledge Transfer for Skeleton-Based Action Recognition [en]
dc.type: Thesis
dc.date.schoolyear: 109-1
dc.description.degree: Master (碩士)
dc.contributor.coadvisor: 陳駿丞 (Jun-Cheng Chen)
dc.contributor.oralexamcommittee: 歐陽明 (Ouh-Young Ming), 王鈺強 (Yu-Chiang Wang), 傅楸善 (Chiou-Shann Fuh)
dc.subject.keyword: Skeleton, Action Recognition, Cross-domain, Transfer Learning, Graph Convolution [zh_TW]
dc.subject.keyword: Action Recognition, Skeleton, Transfer, Cross-domain [en]
dc.relation.page: 23
dc.identifier.doi: 10.6342/NTU202100362
dc.rights.note: Paid authorization required (有償授權)
dc.date.accepted: 2021-02-07
dc.contributor.author-college: College of Electrical Engineering and Computer Science (電機資訊學院) [zh_TW]
dc.contributor.author-dept: Data Science Degree Program (資料科學學位學程) [zh_TW]
Appears in Collections: Data Science Degree Program

Files in This Item:
File | Size | Format
U0001-0202202109514600.pdf (not authorized for public access) | 2.86 MB | Adobe PDF

