Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/95934

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 丁建均 | zh_TW |
| dc.contributor.advisor | Jian-Jiun Ding | en |
| dc.contributor.author | Lau Davy Tec-Hinh | zh_TW |
| dc.contributor.author | Lau Davy Tec-Hinh | en |
| dc.date.accessioned | 2024-09-25T16:12:51Z | - |
| dc.date.available | 2024-09-26 | - |
| dc.date.copyright | 2024-09-25 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-08-02 | - |
| dc.identifier.citation | [1] H Li, H Niu, Z Zhu and F Zhao. 2023. Intensity-Aware Loss for Dynamic Facial Expression Recognition in the Wild. In AAAI, volume 37, 67-75.
[2] A Psaroudakis and D Kollias. 2022. MixAugment & Mixup: Augmentation Methods for Facial Expression Recognition. In CVPR, 2367-2375.
[3] Y Wang, Y Sun, Y Huang, Z Liu, S Gao, W Zhang, W Ge and W Zhang. 2022. FERV39k: A Large-Scale Multi-Scene Dataset for Facial Expression Recognition in Videos. In CVPR, 20922-20931.
[4] Z Zhao and Q Liu. 2021. Former-DFER: Dynamic Facial Expression Recognition Transformer. In ACM MM, 1553-1561.
[5] R Kawamura, H Hayashi, N Takemura and H Nagahara. 2024. MIDAS: Mixing Ambiguous Data With Soft Labels for Dynamic Facial Expression Recognition. In WACV, 6552-6562.
[6] B Lee, H Shin, B Ku and H Ko. 2023. Frame level emotion guided dynamic facial expression recognition with emotion grouping. In CVPR, 5681-5691.
[7] Z Zhang, X Sun, J Li and M Wang. 2022. MAN: Mining Ambiguity and Noise for Facial Expression Recognition in the Wild. In Pattern Recognition Letters, volume 164, 23-29.
[8] Z Wen, W Lin, T Wang and G Xu. 2023. Distract your attention: Multi-head cross attention network for facial expression recognition. In Biomimetics, volume 8, 199.
[9] B Li and D Lima. 2021. Facial expression recognition via ResNet-50. In International Journal of Cognitive Computing in Engineering, volume 2, 57-64.
[10] H Wang, B Li, S Wu, S Shen, F Liu, S Ding and A Zhou. 2023. Rethinking the learning paradigm for dynamic facial expression recognition. In CVPR, 17958-17968.
[11] X Jiang, Y Zong, W Zheng, C Tang, W Xia, C Lu and J Liu. 2020. DFEW: A large-scale database for recognizing dynamic facial expressions in the wild. In MM '20: Proceedings of the 28th ACM International Conference on Multimedia, 2881-2889.
[12] H Li, M Sui and Z Zhu. 2022. NR-DFERNet: Noise-Robust Network for Dynamic Facial Expression Recognition. arXiv:2206.04975 [cs.CV]. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/95934 | - |
| dc.description.abstract | none | zh_TW |
| dc.description.abstract | Facial Expression Recognition (FER) is a meaningful field of research in computer vision, and its development could enhance human-computer interaction. Although some models perform very well under laboratory conditions, where the intensity of the expression is high and constant, they perform considerably worse under 'in-the-wild' conditions, which are closer to real-life situations. Dynamic Facial Expression Recognition (DFER) models address this issue by performing the recognition task on video sequences closer to natural scenes. In such scenes, the intensity of the expression varies greatly, which can bias the model through large intra-class and small inter-class differences.
To tackle this issue, the Intensity-Aware Loss (IAL) was adopted; it makes the model pay extra attention to low-intensity samples to prevent the confusion they may cause. The notion of intensity is captured by the difference between the target logit x_t and the largest logit excluding the target, x_max. When the intensity is low, the expression is more likely to be confused with another one, since all expressions tend toward the neutral expression as the intensity tends toward 0; when the intensity is high, the expressions are easy to differentiate. This is why intensity can be measured by the difference between x_t and x_max: a large difference means the prediction is clear and therefore the intensity is high, whereas a small difference means the model cannot fully distinguish the expression and therefore the intensity is low. Building on this concept, we propose using the Euclidean distance between x_t and x_max to quantify the intensity of the expression. The idea is that, by combining the Euclidean distance with a negative log, the new IAL emphasizes low-intensity samples more strongly than the original IAL (an illustrative sketch is given after the metadata table below). In this work, we study the influence of this new IAL while varying its effect on the model over the epochs and combining it with a pretrained model. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-09-25T16:12:50Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-09-25T16:12:51Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | ACKNOWLEDGMENTS I
ABSTRACT II
CONTENT IV
LIST OF FIGURES VII
LIST OF TABLES VIII
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Thesis Organization 2
Chapter 2 Related Works 2
2.1 FER Methods 2
2.1.1 Former DFER 2
2.1.2 Mining Ambiguity and Noise (MAN) 3
2.1.3 Distract your Attention Network (DAN) 5
2.1.4 ResNet-50 7
2.1.5 M3DFEL 8
2.1.6 Affectivity Extraction Network (AEN) 9
2.1.7 NR-DFERNet 11
2.2 Data augmentation methods 13
2.2.1 MixUp and MixAugment 13
2.2.2 MIDAS 14
2.3 Dataset 15
2.3.1 FERV39K 15
2.3.2 DFEW 17
Chapter 3 Original Intensity Aware Loss model 18
3.1 Notion of Intensity 18
3.2 Mathematical Expression 19
3.3 Global Convolution-Attention 20
3.4 Performance Evaluation and Comparison 21
3.4.1 Experiment details 21
3.4.2 Efficiency of the IAL and GCA 22
3.4.3 Comparison with the other models 22
Chapter 4 Proposed Improved Intensity-Aware Loss 23
4.1 Analysis of the original IAL 24
4.1.1 Simplification of the problem 24
4.1.2 Function analysis 24
4.2 Proposed Loss Function 26
4.2.1 Mathematical Expression 26
4.2.2 Function analysis 27
4.3 Further optimization 29
4.3.1 Exponential decrease 29
4.3.2 Pre-training 29
4.3.3 Combination of models 32
4.4 Experiments 34
4.4.1 Evaluation of the hyper-parameters 34
4.4.2 Comparison 40
Chapter 5 Conclusion and future work 44
5.1 Conclusion 44
5.2 Future Work 45
References 46 | - |
| dc.language.iso | en | - |
| dc.title | 優化亮度感知損失的動態臉部表情識別法 | zh_TW |
| dc.title | Optimizing the Intensity Aware Loss for Dynamic Facial Expression Recognition | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | Guillaume Muller;曾易聰;夏至賢 | zh_TW |
| dc.contributor.oralexamcommittee | Guillaume Muller;Yi Chong Zeng;Chih Hsien Hsia | en |
| dc.subject.keyword | 動態臉部表情識別, 亮度感知損失 | zh_TW |
| dc.subject.keyword | Dynamic Facial Expression Recognition, Intensity Aware Loss | en |
| dc.relation.page | 56 | - |
| dc.identifier.doi | 10.6342/NTU202403144 | - |
| dc.rights.note | 同意授權(全球公開) | - |
| dc.date.accepted | 2024-08-02 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 電信工程學研究所 | - |
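
The abstract above describes the proposed loss in words only; the following PyTorch sketch illustrates that description under stated assumptions. It is not the thesis's implementation: the function name `intensity_gap_loss`, the sigmoid used to keep the log argument in a positive range, and the small constant inside the square root are choices made here so the example runs, not details taken from the thesis.

```python
import torch

def intensity_gap_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Illustrative sketch of the loss idea in the abstract (hypothetical code).

    logits: (batch, num_classes) raw class scores.
    target: (batch,) ground-truth class indices.
    """
    # x_t: logit of the ground-truth class.
    x_t = logits.gather(1, target.unsqueeze(1)).squeeze(1)
    # x_max: largest logit excluding the target class.
    masked = logits.scatter(1, target.unsqueeze(1), float("-inf"))
    x_max = masked.max(dim=1).values
    # In one dimension the Euclidean distance reduces to the absolute gap;
    # the small constant keeps the sqrt differentiable at zero.
    gap = torch.sqrt((x_t - x_max) ** 2 + 1e-12)
    # Assumption: squash the gap with a sigmoid so the negative log stays
    # positive. A small gap (a low-intensity, easily confused sample) then
    # yields a large penalty, matching the emphasis described in the abstract.
    return (-torch.log(torch.sigmoid(gap))).mean()

# Hypothetical usage: a dummy batch of 4 samples over 7 expression classes.
logits = torch.randn(4, 7)
target = torch.tensor([0, 3, 6, 2])
print(intensity_gap_loss(logits, target))
```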
| Appears in Collections: | 電信工程學研究所 |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-112-2.pdf | 1.64 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
