NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/95934
Full metadata record (DC field: value [language])
dc.contributor.advisor: 丁建均 [zh_TW]
dc.contributor.advisor: Jian-Jiun Ding [en]
dc.contributor.author: Lau Davy Tec-Hinh [zh_TW]
dc.contributor.author: Lau Davy Tec-Hinh [en]
dc.date.accessioned: 2024-09-25T16:12:51Z
dc.date.available: 2024-09-26
dc.date.copyright: 2024-09-25
dc.date.issued: 2024
dc.date.submitted: 2024-08-02
dc.identifier.citation[1] H Li, H Niu, Z Zhu and F Zhao. 2023. Intensity-Aware Loss for Dynamic Facial Expression Recognition in the Wild. In AAAI, volume 37, 67-75
[2] A Psaroudakis and D Kollias. 2022. MixAugment & Mixup: Augmentation Methods for Facial Expression Recognition. In CVPR, 2367-2375
[3] Y Wang, Y Sun, Y Huang, Z Liu, S Gao, W Zhang, W Ge and W Zhang. and Zhang, W. 2022. FERV39k: A Large-Scale Multi-Scene Dataset for Facial Expression Recognition in Videos. In CVPR, 20922-20931
[4] Z Zhao and Q Liu. 2021. Former-DFER: Dynamic Facial Expression Recognition Transformer. In ACM, 1553-1561
[5] R Kawamura, H Hayashi, N Takemura and H Nagahara. 2024. MIDAS: Mixing Ambiguous Data With Soft Labels for Dynamic Facial Expression Recognition. In WACV, 6552-6562
[6] B Lee, H Shin, B Ku and H Ko. 2023. Frame level emotion guided dynamic facial expression recognition with emotion grouping. In CVPR, 5681-5691
[7] Z Zhang, X Sun, J Li, M Wang. 2022. MAN: Mining Ambiguity and Noise for Facial Expression Recognition in the Wild. In Pattern Recognition Letters, volume 164, 23-29
[8] Z Wen, W Lin, T Wang and G Xu. 2023. Distract your attention: Multi-head cross attention network for facial expression recognition. In Biomimetics, volume 8, 199
[9] B Li and D Lima. 2021. Facial expression recognition via ResNet-50. In International Journal of Cognitive Computing in Engineering, volume 2, 57-64
[10] H Wang, B Li, S Wu, S Shen, F Liu, S Ding and A Zhou. 2023. Rethinking the learning paradigm for dynamic facial expression recognition. In CVPR, 17958-17968
[11] X Jiang, Y Zong, W Zheng, C Tang, W Xia, C Lu and J Liu. 2020. DFEW: A large-scale database for recognizing dynamic facial expressions in the wild. In MM '20: Proceedings of the 28th ACM International Conference on Multimedia, 2881-2889
[12]H Li, M Sui, Z Zhu. 2022. NR-DFERNet: Noise-Robust Network for Dynamic Facial Expression Recognition. In arXiv:2206.04975 [cs.CV] 10 Jun 2022.
-
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/95934
dc.description.abstract: none [zh_TW]
dc.description.abstract [en]:
Facial Expression Recognition (FER) is a meaningful field of research in computer vision, and its development could enhance human-computer interaction. Although some models perform very well under laboratory conditions, where the intensity of the expression is high and constant, they perform much worse under "in-the-wild" conditions, which are closer to real-life situations. Dynamic Facial Expression Recognition (DFER) models address this issue by performing the recognition task on video sequences closer to natural scenes. In such scenes, the intensity of the expression varies greatly, which can bias the model through large intra-class and small inter-class differences.
To tackle this issue, the Intensity-Aware Loss (IAL) was adopted; it makes the model pay extra attention to low-intensity samples in order to prevent the confusion they may cause. The notion of intensity is captured by the difference between the target logit x_t and the largest non-target logit x_max. When the intensity is low, the expression is more likely to be confused with another one, since all expressions tend toward the neutral expression as the intensity approaches 0; when the intensity is high, the expressions are easy to differentiate. That is why the intensity can be represented by the difference between x_t and x_max: a large difference means the prediction is clear and therefore the intensity is high, whereas a small difference means the model cannot fully distinguish the expression and therefore the intensity is low.
Building on this concept, we use the Euclidean distance between x_t and x_max to quantify the intensity of the expression. The idea is that, by combining the Euclidean distance with a negative logarithm, the new IAL emphasizes low-intensity samples more strongly than the original IAL. In this work, we study the influence of this new IAL while varying its effect on the model over the epochs and combining it with a pretrained model.
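As an illustration of the two ideas in the abstract, the sketch below is a minimal PyTorch rendering written for this record, not code from the thesis: intensity_gap computes the difference between the target logit x_t and the largest non-target logit x_max, and euclidean_log_ial is one plausible way to combine the Euclidean distance between them with a negative logarithm so that low-intensity samples receive the largest loss. The eps smoothing, the clamp to non-negative values, and both function names are assumptions made here for illustration.

    import torch

    def intensity_gap(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # x_t: logit of the ground-truth class for each sample in the batch.
        x_t = logits.gather(1, target.unsqueeze(1)).squeeze(1)
        # x_max: largest logit after masking out the target class.
        masked = logits.scatter(1, target.unsqueeze(1), float("-inf"))
        x_max = masked.max(dim=1).values
        # A small gap means a low-intensity, easily confused expression.
        return x_t - x_max

    def euclidean_log_ial(logits: torch.Tensor, target: torch.Tensor,
                          eps: float = 1e-6) -> torch.Tensor:
        # For two scalar logits, the Euclidean distance is the absolute gap.
        dist = torch.abs(intensity_gap(logits, target))
        # The negative log grows without bound as the distance shrinks, so
        # low-intensity samples dominate the loss; the clamp (an assumption)
        # keeps well-separated, high-intensity samples at zero loss.
        return torch.clamp(-torch.log(dist + eps), min=0.0).mean()

For instance, with logits = torch.randn(8, 7) over seven expression classes and target = torch.randint(0, 7, (8,)), euclidean_log_ial(logits, target) yields a scalar term that could be weighted against a standard cross-entropy loss, with the weight varied over the epochs as the abstract describes.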
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-09-25T16:12:50Z. No. of bitstreams: 0 [en]
dc.description.provenance: Made available in DSpace on 2024-09-25T16:12:51Z (GMT). No. of bitstreams: 0 [en]
dc.description.tableofcontents:
ACKNOWLEDGMENTS I
ABSTRACT II
CONTENT IV
LIST OF FIGURES VII
LIST OF TABLES VIII
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Thesis Organization 2
Chapter 2 Related Works 2
2.1 FER Methods 2
2.1.1 Former DFER 2
2.1.2 Mining Ambiguity and Noise (MAN) 3
2.1.3 Distract your Attention Network (DAN) 5
2.1.4 ResNet-50 7
2.1.5 M3DFEL 8
2.1.6 Affectivity Extraction Network (AEN) 9
2.1.7 NR-DFERNet 11
2.2 Data augmentation methods 13
2.2.1 MixUp and MixAugment 13
2.2.2 MIDAS 14
2.3 Dataset 15
2.3.1 FERV39K 15
2.3.2 DFEW 17
Chapter 3 Original Intensity Aware Loss model 18
3.1 Notion of Intensity 18
3.2 Mathematical Expression 19
3.3 Global Convolution-Attention 20
3.4 Performance Evaluation and Comparison 21
3.4.1 Experiment details 21
3.4.2 Efficiency of the IAL and GCA 22
3.4.3 Comparison with the other models 22
Chapter 4 Proposed Improved Intensity-Aware Loss 23
4.1 Analysis of the original IAL 24
4.1.1 Simplification of the problem 24
4.1.2 Function analysis 24
4.2 Proposed Loss Function 26
4.2.1 Mathematical Expression 26
4.2.2 Function analysis 27
4.3 Further optimization 29
4.3.1 Exponential decrease 29
4.3.2 Pre-training 29
4.3.3 Combination of models 32
4.4 Experiments 34
4.4.1 Evaluation of the hyper-parameters 34
4.4.2 Comparison 40
Chapter 5 Conclusion and future work 44
5.1 Conclusion 44
5.2 Future Work 45
References 46
dc.language.iso: en
dc.title: 優化亮度感知損失的動態臉部表情識別法 [zh_TW]
dc.title: Optimizing the Intensity Aware Loss for Dynamic Facial Expression Recognition [en]
dc.type: Thesis
dc.date.schoolyear: 112-2
dc.description.degree: 碩士 (Master) 
dc.contributor.oralexamcommittee: Guillaume Muller;曾易聰;夏至賢 [zh_TW]
dc.contributor.oralexamcommittee: Guillaume Muller;Yi Chong Zeng;Chih Hsien Hsia [en]
dc.subject.keyword: 動態臉部表情識別, 亮度感知損失 [zh_TW]
dc.subject.keyword: Dynamic Facial Expression Recognition, Intensity Aware Loss [en]
dc.relation.page: 56
dc.identifier.doi: 10.6342/NTU202403144
dc.rights.note: 同意授權(全球公開) (Authorized; open access worldwide)
dc.date.accepted: 2024-08-02
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 電信工程學研究所 (Graduate Institute of Communication Engineering)
Appears in Collections: 電信工程學研究所 (Graduate Institute of Communication Engineering)

Files in This Item (file / size / format):
ntu-112-2.pdf / 1.64 MB / Adobe PDF


Items in the system are protected by copyright, with all rights reserved, unless otherwise indicated by their individual copyright terms.
