NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/8423
Full metadata record (DC field: value [language])
dc.contributor.advisor: 陳銘憲 (Ming-Syan Chen)
dc.contributor.author: Chih-En Huang [en]
dc.contributor.author: 黃志恩 [zh_TW]
dc.date.accessioned: 2021-05-20T00:54:08Z
dc.date.available: 2025-08-20
dc.date.available: 2021-05-20T00:54:08Z
dc.date.copyright: 2020-08-24
dc.date.issued: 2020
dc.date.submitted: 2020-08-19
dc.identifier.citation:
M. Baccouche, F. Mamalet, C. Wolf, C. Garcia, and A. Baskurt. Sequential deep learning for human action recognition. In International Workshop on Human Behavior Understanding, pages 29–39. Springer, 2011.
M. Barekatain, M. Martí, H.-F. Shih, S. Murray, K. Nakayama, Y. Matsuo, and H. Prendinger. Okutama-action: An aerial view video dataset for concurrent human action detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 28–35, 2017.
Z. Cai, L. Wang, X. Peng, and Y. Qiao. Multi-view super vector for action recognition. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 596–603, 2014.
J. Carreira and A. Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6299–6308, 2017.
L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017.
S. Danafar and N. Gheissari. Action recognition for surveillance applications using optic flow and SVM. In Asian Conference on Computer Vision, pages 457–466. Springer, 2007.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
C. Gan, N. Wang, Y. Yang, D.-Y. Yeung, and A. G. Hauptmann. DevNet: A deep event network for multimedia event detection and evidence recounting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2568–2577, 2015.
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2462–2470, 2017.
S. Ji, W. Xu, M. Yang, and K. Yu. 3D convolutional neural networks for human action recognition. IEEE transactions on pattern analysis and machine intelligence, 35(1):221–231, 2012.
V. Kantorov and I. Laptev. Efficient feature extraction, encoding and classification for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2593–2600, 2014.
A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 1725–1732, 2014.
W.-S. Lai, J.-B. Huang, and M.-H. Yang. Semi-supervised learning for optical flow with generative adversarial networks. In Advances in neural information processing systems, pages 354–364, 2017.
I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8. IEEE, 2008.
K. Liu, W. Liu, C. Gan, M. Tan, and H. Ma. T-C3D: Temporal convolutional 3D network for real-time action recognition. In Thirty-second AAAI conference on artificial intelligence, 2018.
J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431–3440, 2015.
D. Mahajan, R. Girshick, V. Ramanathan, K. He, M. Paluri, Y. Li, A. Bharambe, and L. van der Maaten. Exploring the limits of weakly supervised pretraining. In Proceedings of the European Conference on Computer Vision (ECCV), pages 181–196, 2018.
J. Y.-H. Ng, J. Choi, J. Neumann, and L. S. Davis. ActionFlowNet: Learning motion representation for action recognition. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1616–1624. IEEE, 2018.
A. Ranjan and M. J. Black. Optical flow estimation using a spatial pyramid network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4161–4170, 2017.
M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4510–4520, 2018.
K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In Advances in neural information processing systems, pages 568–576, 2014.
K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Thirty-first AAAI conference on artificial intelligence, 2017.
H. Touvron, A. Vedaldi, M. Douze, and H. Jégou. Fixing the train-test resolution discrepancy: FixEfficientNet. arXiv preprint arXiv:2003.08237, 2020.
D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3D convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 4489–4497, 2015.
D. Tran, J. Ray, Z. Shou, S.-F. Chang, and M. Paluri. Convnet architecture search for spatiotemporal feature learning. arXiv preprint arXiv:1708.05038, 2017.
D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, and M. Paluri. A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 6450–6459, 2018.
H. Wang and C. Schmid. Action recognition with improved trajectories. In Proceedings of the IEEE international conference on computer vision, pages 3551–3558, 2013.
L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool. Temporal segment networks for action recognition in videos. IEEE transactions on pattern analysis and machine intelligence, 41(11):2740–2755, 2018.
X. Wang, A. Farhadi, and A. Gupta. Actions ~ Transformations. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 2658–2667, 2016.
B. Zhang, L. Wang, Z. Wang, Y. Qiao, and H. Wang. Real-time action recognition with enhanced motion vector CNNs. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2718–2726, 2016.
H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2881–2890, 2017.
Y. Zhu, Z. Lan, S. Newsam, and A. Hauptmann. Hidden two-stream convolutional networks for action recognition. In Asian Conference on Computer Vision, pages 363–378. Springer, 2018.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/8423
dc.description.abstract: 近年來,雙流式架構的神經網路在影片人類行為辨識的任務中展現了很強大的表現。雙流式網路的核心架構在於透過兩個子網路去抽取時間與空間的資訊,並透過這兩個資訊去做最後的行為辨識。然而,時間流的子網路依賴傳統的光流評估方法去抽取時間資訊,這是需要非常大量的運算資源以及對於儲存空間的要求也非常的高。為了解決這個問題,我們採用機器學習的技術去取代傳統的光流評估方法。在這篇論文,我們實現一個輕量化且低推論時間的雙流動作辨識模型。我們實現的光流評估模型利用對抗式網路的技術達到無監督式學習,我們的空間和時間子網路可以更進一步利用深度可分離卷積去減少模型的參數與運算複雜度,實驗結果顯示我們的方法達到即時的動作辨識並保有具有競爭力的結果。 [zh_TW]
dc.description.abstract: Recently, the two-stream architecture of neural networks has shown strong performance on human action recognition in videos. The key idea of the two-stream structure is to extract temporal and spatial information with two sub-networks and fuse this information to recognize actions. However, the temporal stream relies on traditional optical flow estimation methods to extract temporal information, which are computationally expensive and storage-demanding. To address this problem, we use neural networks to replace traditional optical flow estimation. In this thesis, we propose a lightweight two-stream action recognition model with low inference time. Our optical flow estimation model is trained without supervision by leveraging the techniques of generative adversarial networks (GANs). We further reduce the number of parameters and the computational complexity by using depthwise separable convolutions. The experimental results show that our method achieves real-time action recognition while retaining competitive performance. [en]
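
The abstract mentions two ingredients that benefit from a concrete illustration: depthwise separable convolutions used to shrink the sub-networks, and the fusion of the spatial and temporal streams. Below is a minimal PyTorch sketch of both ideas; the class name DepthwiseSeparableConv, the layer sizes, and the equal 0.5/0.5 fusion weights are illustrative assumptions, not the architecture actually used in the thesis.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # Factorizes a standard KxK convolution into a per-channel (depthwise)
    # convolution followed by a 1x1 (pointwise) convolution, which cuts the
    # parameter count and multiply-adds roughly by a factor of K*K for wide layers.
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   stride=stride, padding=kernel_size // 2,
                                   groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A single RGB frame (batch 1, 3 channels, 224x224) passed through the block.
x = torch.randn(1, 3, 224, 224)
block = DepthwiseSeparableConv(3, 64)
print(block(x).shape)  # torch.Size([1, 64, 224, 224])

# Hypothetical late fusion: average the per-class scores of the two streams
# (101 classes as in UCF101); the 0.5/0.5 weighting is an assumption.
spatial_scores = torch.softmax(torch.randn(1, 101), dim=1)
temporal_scores = torch.softmax(torch.randn(1, 101), dim=1)
fused_scores = 0.5 * spatial_scores + 0.5 * temporal_scores

The depthwise block can be stacked in place of ordinary convolutions in either sub-network; this is the MobileNet-style factorization described in the cited MobileNetV2 paper.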
dc.description.provenance: Made available in DSpace on 2021-05-20T00:54:08Z (GMT). No. of bitstreams: 1
U0001-2007202022064000.pdf: 2584830 bytes, checksum: 6b371d37787dc16665f5940a69dc7d8b (MD5)
Previous issue date: 2020 [en]
dc.description.tableofcontents:
口試委員審定書 i
致謝 ii
摘要 iii
Abstract iv
Contents vi
List of Figures viii
List of Tables ix
1 Introduction 1
2 Related Work 4
3 Methodology 7
3.1 Unsupervised Learning Approach for Optical Flow Estimation 7
3.2 Action Recognition Models 9
3.2.1 Spatial Stream Sub-network 10
3.2.2 Temporal Stream Sub-network 10
3.2.3 Fusion Strategy 11
3.3 The End-to-End Training of Temporal Stream Sub-network 11
4 Experiments 13
4.1 Dataset and Evaluation Protocol 13
4.2 Implementation Details 14
4.2.1 Hyperparameter Settings 14
4.2.2 Data Processing 14
4.2.3 Spatial Stream Sub-network 14
4.2.4 OF-GAN and Temporal Stream Sub-network 15
4.2.5 Testing Method 16
4.3 Result Comparison 16
4.3.1 The Result of UCF101 16
4.3.2 Ablation Study 17
4.3.3 Compared with State-of-the-Art Models 19
5 Conclusion 22
References 23
dc.language.iso: en
dc.title: 基於生成式雙流模型之行為辨識 [zh_TW]
dc.title: Generative-based Two-Stream Model for Action Recognition [en]
dc.type: Thesis
dc.date.schoolyear: 108-2
dc.description.degree: 碩士
dc.contributor.oralexamcommittee: 楊得年 (De-Nian Yang), 帥宏翰 (Hong-Han Shuai), 葉彌妍 (Mi-Yen Yeh)
dc.subject.keyword: 行為辨識, 輕量化模型, 無監督學習, 對抗式網路, 光流 [zh_TW]
dc.subject.keyword: Action recognition, Lightweight model, Unsupervised learning, GAN, Optical flow [en]
dc.relation.page: 27
dc.identifier.doi: 10.6342/NTU202001666
dc.rights.note: 同意授權(全球公開)
dc.date.accepted: 2020-08-20
dc.contributor.author-college: 電機資訊學院 [zh_TW]
dc.contributor.author-dept: 電信工程學研究所 [zh_TW]
dc.date.embargo-lift: 2025-08-20
Appears in Collections: 電信工程學研究所

Files in This Item:
File | Size | Format
U0001-2007202022064000.pdf | 2.52 MB | Adobe PDF


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
