Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84582

Full metadata record (DC field: value [language]):
dc.contributor.advisor: 陳宏銘 (Homer H. Chen)
dc.contributor.author: Yu-Chieh Huang [en]
dc.contributor.author: 黃郁傑 [zh_TW]
dc.date.accessioned: 2023-03-19T22:16:28Z
dc.date.copyright: 2022-10-19
dc.date.issued: 2022
dc.date.submitted: 2022-09-20
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84582
dc.description.abstract: 由於新冠肺炎(COVID-19)疫情在全球大流行,口罩已成為我們日常生活中的必需品。然而,戴口罩嚴重降低人臉識別系統的性能。由於相機拍攝到的是戴口罩人臉影像,而資料庫中儲存的是沒戴口罩人臉影像,兩者在特徵激活區域以及特徵分佈上存在差異,使得兩者特徵無法正確地匹配。在本論文中,我們提出了一個創新的人臉識別系統來解決此問題。此系統整合了域適應層以及特徵精煉層。特徵精煉層基於自注意力機制的結構,將沒戴口罩人臉影像的特徵激活區域與戴口罩人臉影像的特徵激活區域對齊。域適應層使系統從沒戴口罩人臉域適應到合成以及真實戴口罩人臉域。我們透過人臉驗證和人臉識別任務來測驗系統在真實資料集上的表現。在RMFD_FV和MFR2資料集上,我們的方法能分別提高6.83%和4.2%的人臉驗證準確率。在MFRFI資料集上,人臉識別準確率則提高了15.43%。 [zh_TW]
dc.description.abstract: Wearing facial masks has become a must in our daily life due to the global COVID-19 pandemic. However, mask wearing severely degrades the performance of a face recognition system. The degradation arises mainly because the face images in the gallery are unmasked while the probe face images captured by the camera are masked, so the probe and gallery images differ in both activated feature regions and feature distribution, and their features cannot be matched correctly. In this thesis, we propose a novel face recognition system to address this issue. The system integrates a domain adaptation layer and a feature refinement layer. The feature refinement layer, built on the self-attention mechanism, aligns the activated regions of unmasked faces with those of masked faces. The domain adaptation layer adapts the system from the unmasked face domain to the synthetically masked and real-world masked face domains. The system is tested on real-world data through face verification and face identification tasks. Face verification accuracy is improved by 6.83% on the RMFD_FV dataset and by 4.2% on the MFR2 dataset, and face identification accuracy is improved by 15.43% on the MFRFI dataset. [en]
dc.description.provenance: Made available in DSpace on 2023-03-19T22:16:28Z (GMT). No. of bitstreams: 1
U0001-1609202220420600.pdf: 2204921 bytes, checksum: 77f6e6f76e549ffe1d0577c3c09676bf (MD5)
Previous issue date: 2022 [en]
dc.description.tableofcontents:
誌謝 i
中文摘要 ii
ABSTRACT iii
CONTENTS iv
LIST OF FIGURES vi
LIST OF TABLES viii
Chapter 1 Introduction 1
Chapter 2 Related Work 4
2.1 Conventional Face Recognition for Unmasked Faces 4
2.2 Masked Face Recognition 5
2.3 Domain Adaptation 5
Chapter 3 Performance Analysis of Masked Face Recognition 7
3.1 Performance of Face Recognition for Masked Faces 7
3.2 The Impact of Facial Masks on Face Recognition 9
3.3 Model Retraining Using Synthetically Masked Faces 11
Chapter 4 Proposed Method 13
4.1 Feature Refinement Layer 14
4.2 Domain Adaptation Layer 15
4.3 Loss Function 17
4.3.1 Attention Score Loss 17
4.3.2 Mask Classification Loss 19
4.3.3 Identity Classification Loss 21
4.3.4 Total Loss 22
Chapter 5 Experiments 23
5.1 Experimental Setup 23
5.2 Evaluation Tests 23
5.3 Dataset 24
5.4 Implementation Details 26
5.5 Experimental Results 28
5.6 Effectiveness of Feature Refinement Layer 32
5.7 Effectiveness of Domain Adaptation Layer 33
5.8 Application to Fisheye Images 34
5.9 Sensitivity Test of Image Resolution 34
Chapter 6 Conclusion 36
REFERENCE 37
dc.language.iso: en
dc.subject: 機器學習 [zh_TW]
dc.subject: 深度學習 [zh_TW]
dc.subject: 域適應 [zh_TW]
dc.subject: 戴口罩人臉影像 [zh_TW]
dc.subject: 人臉辨識 [zh_TW]
dc.subject: deep learning [en]
dc.subject: face recognition [en]
dc.subject: masked face images [en]
dc.subject: domain adaptation [en]
dc.subject: machine learning [en]
dc.title: 利用域適應之戴口罩人臉辨識 [zh_TW]
dc.title: Masked Face Recognition Using Domain Adaptation [en]
dc.type: Thesis
dc.date.schoolyear: 110-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 莊仁輝 (Jen-Hui Chuang), 莊永裕 (Yung-Yu Chuang), 黃朝宗 (Chao-Tsung Huang), 于天立 (Tian-Li Yu)
dc.subject.keyword: 人臉辨識, 戴口罩人臉影像, 域適應, 機器學習, 深度學習 [zh_TW]
dc.subject.keyword: face recognition, masked face images, domain adaptation, machine learning, deep learning [en]
dc.relation.page: 41
dc.identifier.doi: 10.6342/NTU202203492
dc.rights.note: 同意授權(限校園內公開) (access authorized; restricted to campus)
dc.date.accepted: 2022-09-21
dc.contributor.author-college: 電機資訊學院 [zh_TW]
dc.contributor.author-dept: 電信工程學研究所 [zh_TW]
dc.date.embargo-lift: 2027-09-19
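As a companion to the abstract above, the following is a minimal sketch, assuming the feature refinement layer can be modeled as a residual self-attention block and the domain adaptation layer as a gradient-reversal (domain-adversarial) head over three domains (unmasked, synthetically masked, real-world masked). This is not the thesis implementation; every class name, tensor shape, and hyperparameter below is a hypothetical illustration.

```python
# Illustrative sketch only -- NOT the thesis code. Assumes PyTorch and a
# backbone that returns token features of shape (batch, tokens, dim),
# e.g. flattened spatial features of a CNN. All names are hypothetical.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; negates and scales the gradient in the
    backward pass, pushing the backbone toward domain-invariant features."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class FeatureRefinement(nn.Module):
    """Single-head self-attention with a residual connection, used here to
    re-weight (refine) the activated regions of the token features."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, feat):                      # feat: (batch, tokens, dim)
        q, k, v = self.q(feat), self.k(feat), self.v(feat)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return feat + attn @ v                    # residual refinement


class MaskedFaceRecognizer(nn.Module):
    def __init__(self, backbone, dim, num_ids):
        super().__init__()
        self.backbone = backbone                  # embedding network
        self.refine = FeatureRefinement(dim)      # "feature refinement layer"
        self.id_head = nn.Linear(dim, num_ids)    # identity classification
        self.domain_head = nn.Linear(dim, 3)      # unmasked / synthetic / real mask

    def forward(self, x, lambd=1.0):
        feat = self.refine(self.backbone(x))      # (batch, tokens, dim)
        pooled = feat.mean(dim=1)                 # (batch, dim)
        id_logits = self.id_head(pooled)
        # The domain head trains normally; the reversed gradient discourages the
        # backbone from encoding which of the three domains the input came from.
        dom_logits = self.domain_head(GradientReversal.apply(pooled, lambd))
        return id_logits, dom_logits
```

Under this formulation, the identity loss and the domain loss are summed during training, and the strength of the adversarial signal is controlled by the hypothetical lambd factor; this is one common way to realize the unmasked-to-masked domain adaptation described in the abstract, not necessarily the one used in the thesis.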
Appears in collections: 電信工程學研究所

Files in this item:
File: U0001-1609202220420600.pdf
Size: 2.15 MB
Format: Adobe PDF
Access: restricted (not authorized for public access)