Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83225
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 謝宏昀 | zh_TW |
dc.contributor.advisor | Hung-Yun Hsieh | en |
dc.contributor.author | 李政旻 | zh_TW |
dc.contributor.author | Cheng-Min Lee | en |
dc.date.accessioned | 2023-01-10T17:26:59Z | - |
dc.date.available | 2023-11-09 | - |
dc.date.copyright | 2023-01-10 | - |
dc.date.issued | 2022 | - |
dc.date.submitted | 2002-01-01 | - |
dc.identifier.citation | [1]F. Marattukalam and W. H. Abdulla, “On Palm Vein as a Contactless Identification Technology,” in 2019 Australian New Zealand Control Conference (ANZCC), 2019, pp. 270–275. doi: 10.1109/ANZCC47194.2019.8945589. [2]A.-S. Ungureanu, S. Salahuddin, and P. Corcoran, “Toward Unconstrained Palmprint Recognition on Consumer Devices: A Literature Review,” IEEE Access, vol. 8, pp. 86130–86148, 2020, doi: 10.1109/ACCESS.2020.2992219. [3]D. Zhong, X. Du, and K. Zhong, “Decade progress of palmprint recognition: A brief survey,” Neurocomputing, vol. 328, pp. 16–28, 2019, doi: https://doi.org/10.1016/j.neucom.2018.03.081. [4]D. Zhang, Z. Guo, G. Lu, L. Zhang, and W. Zuo, “An Online System of Multispectral Palmprint Verification,” IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 2, pp. 480–490, 2010, doi: 10.1109/TIM.2009.2028772. [5]L. Zhang, L. Li, A. Yang, Y. Shen, and M. Yang, “Towards contactless palmprint recognition: A novel device, a new benchmark, and a collaborative representation based identification approach,” Pattern Recognition, vol. 69, pp. 199–212, 2017, doi: https://doi.org/10.1016/j.patcog.2017.04.016. [6]Tongji Contactless Palmprint Dataset. [Online]. Available: https://cslinzhang.github.io/ContactlessPalm/ [7]L. Zhang, Z. Cheng, Y. Shen, and D. Wang, “Palmprint and Palmvein Recognition Based on DCNN and A New Large-Scale Contactless Palmvein Dataset,” Symmetry, vol. 10, no. 4, 2018, doi: 10.3390/sym10040078. [8]Tongji Contactless Palmvein Dataset. [Online]. Available: https://sse.tongji.edu.cn/linzhang/contactlesspalmvein/ [9]CASIA Palmprint Database. [Online]. Available: http://biometrics.idealtest.org/ [10]The Hong Kong Polytechnic University Contact-free 3D/2D Hand Images Database version 1.0’. [Online]. Available: http://www.comp.polyu.edu.hk/~csajaykr/myhome/database_request/3dhand/Hand3D.htm [11]S. Chen, Z. Guo, J. Feng, and J. 
Zhou, “An Improved Contact-Based High-Resolution Palmprint Image Acquisition System,” IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 9, pp. 6816–6827, 2020. [12]IIT Delhi Palmprint Image Database version 1.0. [Online]. Available: https://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Palm.htm [13]Q. Xiao, J. Lu, W. Jia, and X. Liu, “Extracting Palmprint ROI From Whole Hand Image Using Straight Line Clusters,” IEEE Access, vol. 7, pp. 74327–74339, 2019, doi: 10.1109/ACCESS.2019.2918778. [14]Y. Zhang, L. Zhang, R. Zhang, S. Li, J. Li, and F. Huang, “Towards Palmprint Verification On Smartphones,” ArXiv, vol. abs/2003.13266, 2020. [15]Tongji Mobile Palmprint Dataset. [Online]. Available: https://cslinzhang.github.io/MobilePalmPrint/ [16]H. Shao, D. Zhong, and X. Du, “Efficient Deep Palmprint Recognition via Distilled Hashing Coding,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019, pp. 714–723. doi: 10.1109/CVPRW.2019.00098. [17]K. Luo and D. Zhong, “Robust and adaptive region of interest extraction for unconstrained palmprint recognition,” Journal of Electronic Imaging, vol. 30, no. 3, pp. 1–18, 2021, doi: 10.1117/1.JEI.30.3.033005. [18]W. Wu, S. J. Elliott, S. Lin, S. Sun, and Y. Tang, “Review of palm vein recognition,” IET Biometrics, vol. 9, no. 1, pp. 1–10, 2020. [19]Y. Zhou and A. Kumar, “Human Identification Using Palm-Vein Images,” IEEE Transactions on Information Forensics and Security, vol. 6, no. 4, pp. 1259–1274, 2011, doi: 10.1109/TIFS.2011.2158423. [20]C.-L. Lin, T. C. Chuang, and K.-C. Fan, “Palmprint Verification Using Hierarchical Decomposition,” Pattern Recogn., vol. 38, no. 12, pp. 2639–2652, Dec. 2005, doi: 10.1016/j.patcog.2005.04.001. [21]L. Leng, F. Gao, Q. Chen, and C. Kim, “Palmprint Recognition System on Mobile Devices with Double-Line-Single-Point Assistance,” Personal Ubiquitous Comput., vol. 22, no. 1, pp. 93–104, Feb. 2018, doi: 10.1007/s00779-017-1105-2. [22]L. 
Leng, “Palmprint Recognition System with Double-assistant-point on iOS Mobile Devices,” 2018. [23]A. Genovese, V. Piuri, K. N. Plataniotis, and F. Scotti, “PalmNet: Gabor-PCA Convolutional Networks for Touchless Palmprint Recognition,” IEEE Transactions on Information Forensics and Security, vol. 14, no. 12, pp. 3160–3174, 2019, doi: 10.1109/TIFS.2019.2911165. [24]X. Bao and Z. Guo, “Extracting region of interest for palmprint by convolutional neural networks,” in 2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA), 2016, pp. 1–6. doi: 10.1109/IPTA.2016.7820994. [25]M. Izadpanahkakhk, S. M. Razavi, M. Taghipour-Gorjikolaie, S. H. Zahiri, and A. Uncini, “Deep Region of Interest and Feature Extraction Models for Palmprint Verification Using Convolutional Neural Networks Transfer Learning,” Applied Sciences, vol. 8, no. 7, 2018, doi: 10.3390/app8071210. [26]X. Ma, X. Jing, H. Huang, Y. Cui, and J. Mu, “Palm vein recognition scheme based on an adaptive Gabor filter,” IET Biom., vol. 6, pp. 325–333, 2017. [27]O. Nikisins, T. Eglitis, M. Pudzs, and M. Greitans, “Algorithms for a novel touchless bimodal palm biometric system,” in 2015 International Conference on Biometrics (ICB), 2015, pp. 436–443. doi: 10.1109/ICB.2015.7139107. [28]M. Afifi, “11K Hands: gender recognition and biometric identification using a large dataset of hand images,” Multimedia Tools and Applications, 2019, doi: 10.1007/s11042-019-7424-8. [29]R. S. Kuzu, E. Maiorana, and P. Campisi, “Vein-based Biometric Verification using Transfer Learning,” in 2020 43rd International Conference on Telecommunications and Signal Processing (TSP), 2020, pp. 403–409. doi: 10.1109/TSP49548.2020.9163491. [30]A. K. Jain, A. Ross, and S. Prabhakar, “An introduction to biometric recognition,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 4–20, 2004, doi: 10.1109/TCSVT.2003.818349. [31]D. Zhong and J. 
Zhu, “Centralized Large Margin Cosine Loss for Open-Set Deep Palmprint Recognition,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 6, pp. 1559–1568, 2020, doi: 10.1109/TCSVT.2019.2904283. [32]D. Zhang, W.-K. Kong, J. You, and M. Wong, “Online palmprint identification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1041–1050, 2003, doi: 10.1109/TPAMI.2003.1227981. [33]A. W.-K. Kong and D. Zhang, “Competitive coding scheme for palmprint verification,” in Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004., 2004, vol. 1, pp. 520-523 Vol.1. doi: 10.1109/ICPR.2004.1334184. [34]W. Jia, D.-S. Huang, and D. Zhang, “Palmprint Verification Based on Robust Line Orientation Code,” Pattern Recognition, vol. 41, no. 5, pp. 1504–1513, May 2008. [35]Z. Guo, D. Zhang, L. Zhang, and W. Zuo, “Palmprint verification using binary orientation co-occurrence vector,” Pattern Recognition Letters, vol. 30, no. 13, pp. 1219–1227, 2009, doi: https://doi.org/10.1016/j.patrec.2009.05.010. [36]Z. Sun, T. Tan, Y. Wang, and S. Z. Li, “Ordinal palmprint represention for personal identification [represention read representation],” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), 2005, vol. 1, pp. 279–284 vol. 1. doi: 10.1109/CVPR.2005.267. [37]J. Chen and Z. Guo, “Palmprint Matching by Minutiae and Ridge Distance,” in Cloud Computing and Security, 2016, pp. 371–382. [38]X. Liang, J. Yang, G. Lu, and D. Zhang, “CompNet: Competitive Neural Network for Palmprint Recognition Using Learnable Gabor Kernels,” IEEE Signal Processing Letters, vol. 28, pp. 1739–1743, 2021, doi: 10.1109/LSP.2021.3103475. [39]Y. Wang, Q. Ruan, and X. Pan, “Palmprint recognition method using Dual-Tree Complex Wavelet Transform and Local Binary Pattern Histogram,” in 2007 International Symposium on Intelligent Signal Processing and Communication Systems, 2007, pp. 646–649. 
doi: 10.1109/ISPACS.2007.4445970. [40]X. Bai, N. Gao, Z. Zhang, and D. Zhang, “3D Palmprint Identification Combining Blocked ST and PCA,” Pattern Recogn. Lett., vol. 100, no. C, pp. 89–95, Dec. 2017, doi: 10.1016/j.patrec.2017.10.008. [41]C. Kha Vu, “Deep Metric Learning: A (Long) Survey,” 2021. https://hav4ik.github.io/articles/deep-metric-learning-survey [42]W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song, SphereFace: Deep Hypersphere Embedding for Face Recognition. arXiv, 2017. doi: 10.48550/ARXIV.1704.08063. [43]R. Ranjan, C. D. Castillo, and R. Chellappa, L2-constrained Softmax Loss for Discriminative Face Verification. arXiv, 2017. doi: 10.48550/ARXIV.1703.09507. [44]F. Wang, X. Xiang, J. Cheng, and A. L. Yuille, “NormFace: L2 Hypersphere Embedding for Face Verification,” Oct. 2017. doi: 10.1145/3123266.3123359. [45]H. Wang et al., CosFace: Large Margin Cosine Loss for Deep Face Recognition. arXiv, 2018. doi: 10.48550/ARXIV.1801.09414. [46]J. Deng, J. Guo, N. Xue, and S. Zafeiriou, ArcFace: Additive Angular Margin Loss for Deep Face Recognition. arXiv, 2018. doi: 10.48550/ARXIV.1801.07698. [47]S. Chopra, R. Hadsell, and Y. LeCun, “Learning a similarity metric discriminatively, with application to face verification,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), 2005, vol. 1, pp. 539–546 vol. 1. doi: 10.1109/CVPR.2005.202. [48]F. Schroff, D. Kalenichenko, and J. Philbin, “FaceNet: A unified embedding for face recognition and clustering,” Jun. 2015. doi: 10.1109/cvpr.2015.7298682. [49]4uiiurz1, “Keras implementation of ArcFace, CosFace, and SphereFace.” https://github.com/4uiiurz1/keras-arcface [50]Y. Wen, K. Zhang, Z. Li, and Y. Qiao, “A Discriminative Feature Learning Approach for Deep Face Recognition,” in Computer Vision – ECCV 2016, 2016, pp. 499–515. [51]AngeloUNIMI, “Palmprint Segmentation.” https://github.com/AngeloUNIMI/PalmSeg [52]K. Ito, T. Sato, S. Aoyama, S. Sakai, S. Yusa, and T.
Aoki, “Palm region extraction for contactless palmprint recognition,” in 2015 International Conference on Biometrics (ICB), 2015, pp. 334–340. doi: 10.1109/ICB.2015.7139058. [53]N. Otsu, “A Threshold Selection Method from Gray-Level Histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979, doi: 10.1109/TSMC.1979.4310076. [54]R. A. Kirsch, “Computer determination of the constituent structure of biological images,” Computers and Biomedical Research, vol. 4, no. 3, pp. 315–328, 1971, doi: https://doi.org/10.1016/0010-4809(71)90034-6. [55]G. K. Ong Michael, T. Connie, and A. B. Jin Teoh, “Touch-less palm print biometrics: Novel design and implementation,” Image and Vision Computing, vol. 26, no. 12, pp. 1551–1560, 2008, doi: https://doi.org/10.1016/j.imavis.2008.06.010. [56]K. He, X. Zhang, S. Ren, and J. Sun, Deep Residual Learning for Image Recognition. arXiv, 2015. doi: 10.48550/ARXIV.1512.03385. [57]M. Lin, Q. Chen, and S. Yan, Network In Network. arXiv, 2013. doi: 10.48550/ARXIV.1312.4400. [58]H. Cui, L. Zhu, J. Li, Y. Yang, and L. Nie, “Scalable Deep Hashing for Large-Scale Social Image Retrieval,” IEEE Transactions on Image Processing, vol. 29, pp. 1271–1284, 2020, doi: 10.1109/TIP.2019.2940693. [59]Z. Cheng, X. Zhu, and S. Gong, “Face re-identification challenge: Are face recognition models good enough?,” Pattern Recognition, vol. 107, p. 107422, 2020, doi: https://doi.org/10.1016/j.patcog.2020.107422. [60]Y. Liu and A. Kumar, “Contactless Palmprint Identification Using Deeply Learned Residual Features,” IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 2, no. 2, pp. 172–181, 2020, doi: 10.1109/TBIOM.2020.2967073. [61]J. Zhu, D. Zhong, and K. Luo, “Boosting Unconstrained Palmprint Recognition With Adversarial Metric Learning,” IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 2, no. 4, pp. 388–398, 2020, doi: 10.1109/TBIOM.2020.3003406. [62]H. Shao and D. 
Zhong, “Learning With Partners to Improve the Multi-Source Cross-Dataset Palmprint Recognition,” IEEE Transactions on Information Forensics and Security, vol. 16, pp. 5182–5194, 2021, doi: 10.1109/TIFS.2021.3125612. [63]X. Du, D. Zhong, and H. Shao, “Cross-Domain Palmprint Recognition via Regularized Adversarial Domain Adaptive Hashing,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 6, pp. 2372–2385, 2021, doi: 10.1109/TCSVT.2020.3024593. [64]E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell, Adversarial Discriminative Domain Adaptation. arXiv, 2017. doi: 10.48550/ARXIV.1702.05464. [65]H. Shao and D. Zhong, “One-shot cross-dataset palmprint recognition via adversarial domain adaptation,” Neurocomputing, vol. 432, pp. 288–299, 2021, doi: https://doi.org/10.1016/j.neucom.2020.12.072. [66]S. Motiian, Q. Jones, S. M. Iranmanesh, and G. Doretto, Few-Shot Adversarial Domain Adaptation. arXiv, 2017. doi: 10.48550/ARXIV.1711.02536. [67]H. Shao, D. Zhong, and Y. Li, “PalmGAN for Cross-Domain Palmprint Recognition,” in 2019 IEEE International Conference on Multimedia and Expo (ICME), 2019, pp. 1390–1395. doi: 10.1109/ICME.2019.00241. [68]J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks,” in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2242–2251. doi: 10.1109/ICCV.2017.244. [69]H. Shao and D. Zhong, “Towards Cross-Dataset Palmprint Recognition Via Joint Pixel and Feature Alignment,” IEEE Transactions on Image Processing, vol. 30, pp. 3764–3777, 2021, doi: 10.1109/TIP.2021.3065220. [70]Y. Huang et al., CurricularFace: Adaptive Curriculum Learning Loss for Deep Face Recognition. arXiv, 2020. doi: 10.48550/ARXIV.2004.00288. [71]H. Zhang et al., ResNeSt: Split-Attention Networks. arXiv, 2020. doi: 10.48550/ARXIV.2004.08955. [72]S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. 
He, “Aggregated Residual Transformations for Deep Neural Networks,” arXiv preprint arXiv:1611.05431, 2016. [73]X. Li, W. Wang, X. Hu, and J. Yang, Selective Kernel Networks. arXiv, 2019. doi: 10.48550/ARXIV.1903.06586. [74]C. Szegedy et al., Going Deeper with Convolutions. arXiv, 2014. doi: 10.48550/ARXIV.1409.4842. [75]K. He, R. Girshick, and P. Dollár, Rethinking ImageNet Pre-training. arXiv, 2018. doi: 10.48550/ARXIV.1811.08883. [76]D. Hendrycks, K. Lee, and M. Mazeika, Using Pre-Training Can Improve Model Robustness and Uncertainty. arXiv, 2019. doi: 10.48550/ARXIV.1901.09960. [77]A. Chowdhury, M. Jiang, S. Chaudhuri, and C. Jermaine, “Few-shot Image Classification: Just Use a Library of Pre-trained Feature Extractors and a Simple Classifier,” in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 9425–9434. doi: 10.1109/ICCV48922.2021.00931. [78]L. McInnes, J. Healy, and J. Melville, UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv, 2018. doi: 10.48550/ARXIV.1802.03426. [79]C. Shorten and T. M. Khoshgoftaar, “A survey on Image Data Augmentation for Deep Learning,” J. Big Data, vol. 6, p. 60, 2019, doi: 10.1186/s40537-019-0197-0. [80]A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, 2012, pp. 1097–1105. [81]P. J. Huber, “Robust Estimation of a Location Parameter,” The Annals of Mathematical Statistics, vol. 35, no. 1, pp. 73–101, 1964, doi: 10.1214/aoms/1177703732. [82]T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, Focal Loss for Dense Object Detection. arXiv, 2017. doi: 10.48550/ARXIV.1708.02002. [83]T. 
Huang, “Huber Loss and Focal loss.” https://chih-sheng-huang821.medium.com/%E6%A9%9F%E5%99%A8-%E6%B7%B1%E5%BA%A6%E5%AD%B8%E7%BF%92-%E6%90%8D%E5%A4%B1%E5%87%BD%E6%95%B8-loss-function-huber-loss%E5%92%8C-focal-loss-bb757494f85e [84]T. Akiba, S. Sano, T. Yanase, T. Ohta, and M. Koyama, Optuna: A Next-generation Hyperparameter Optimization Framework. arXiv, 2019. doi: 10.48550/ARXIV.1907.10902. [85]M. O. Ahmed and S. Prince, “Tutorial 8: Bayesian optimization.” https://www.borealisai.com/research-blogs/tutorial-8-bayesian-optimization/ [86]J. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl, “Algorithms for Hyper-Parameter Optimization,” in Proceedings of the 24th International Conference on Neural Information Processing Systems, 2011, pp. 2546–2554. [87]D. R. Jones, “A Taxonomy of Global Optimization Methods Based on Response Surfaces,” J. of Global Optimization, vol. 21, no. 4, pp. 345–383, Dec. 2001, doi: 10.1023/A:1012771025575. [88]F. Hutter, H. Hoos, and K. Leyton-Brown, “An Efficient Approach for Assessing Hyperparameter Importance,” in Proceedings of the 31st International Conference on Machine Learning, Jun. 2014, vol. 32, no. 1, pp. 754–762. [Online]. Available: https://proceedings.mlr.press/v32/hutter14.html | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83225 | - |
dc.description.abstract | 基於生物特徵信息的身份認證系統在近年被廣泛的運用,大多數人已經接受它帶來的便利性、安全性和隱私性,可以避免使用者的密碼被盜或忘記。其中掌紋識別是人性化且衛生的生物識別,無需觸摸設備,對手勢也沒有太大的限制。隨著當今影像識別技術的飛速發展,深度學習方法在多樣化的影像資料處理上表現優於傳統方法,然而大多數掌紋識別中的深度學習方法僅在單個資料集上進行訓練和測試,一旦使用不同的成像設備,辨識的效果就會嚴重受到影響。但在實際應用場景中,輸入影像來自不同的相機是很常見的,因此如何解決不同成像設備帶來的影像差距至關重要。因此我們提出了三種影像增強策略和 ResNeSt 的縮減版本來解決跨域的掌紋識別。首先我們在訓練期間使用 Optuna 框架和 TPE 採樣器對影像隨機轉換進行超參數優化搜索。其次,將 ROI 影像旋轉到四個方向作為不同類別的訓練影像來 oversample 訓練資料集。這也可以用於增強我們提出的基於 test-time augmentation 的多轉換特徵比對方法。在有約束的資料集 (PolyU-M) 上訓練並在無約束的資料集 (MPD) 上測試的困難條件下,三種增強方法可以分別提高準確度 12.55%、5.34% 和 4.66%,總共可以實現 22.55% 的準確度提升。此外,使用 MPD 訓練的模型在所有測試資料集中都可以達到 99.66% 以上的準確度。 | zh_TW |
dc.description.abstract | Identity authentication systems based on biometric information have been widely adopted in recent years. Most people accept the convenience, security, and privacy they provide, since they prevent a user's password from being stolen or forgotten. Among biometrics, palmprint recognition is user-friendly and hygienic: it requires no contact with the device and imposes few restrictions on hand pose. With the rapid development of image recognition technology, deep learning methods outperform traditional methods on diverse image data. However, most deep learning methods for palmprint recognition are trained and tested on a single dataset, and their performance degrades severely once a different image acquisition device is used. In practical application scenarios, it is common for input images to come from different cameras, so bridging the gap between different imaging conditions is crucial. We therefore propose three data augmentation strategies and a reduced version of ResNeSt to address the cross-domain palmprint recognition problem. First, we perform a hyperparameter optimization search over random image transformations during training using the Optuna framework with the TPE sampler. Second, the training dataset is oversampled by rotating the ROI images into four orientations and treating each orientation as a separate class. This rotation scheme also enhances our proposed multi-transform matching method, which is based on test-time augmentation. Under the difficult condition of training on a constrained-acquisition dataset (PolyU-M) and testing on an unconstrained dataset (MPD), the three augmentation methods improve accuracy by 12.55%, 5.34%, and 4.66%, respectively, for a total improvement of 22.55%. Furthermore, the model trained on MPD achieves more than 99.66% accuracy on all test datasets. | en |
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-01-10T17:26:59Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2023-01-10T17:26:59Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | ABSTRACT ii LIST OF TABLES vi LIST OF FIGURES vii CHAPTER 1 INTRODUCTION 1 CHAPTER 2 BACKGROUND AND RELATED WORK 4 2.1 Palmprint Recognition System 4 2.1.1 Image acquisition 5 2.1.2 Region of interest (ROI) 8 2.1.3 Feature Extraction 12 2.1.4 Feature Matching 14 2.2 Evaluation of the Palmprint Recognition System 15 2.2.1 Verification and Identification 15 2.2.2 Evaluation Methods 16 2.3 Deep Metric Learning 18 2.4 Related Work 21 2.4.1 Metric Loss 22 2.4.2 Domain Adaptation 23 2.4.3 Generative Adversarial Network 24 CHAPTER 3 IMPLEMENTATION OF THE BASELINE SYSTEM 26 3.1 Region of interest (ROI) 26 3.2 Feature Extraction 28 3.2.1 Data Augmentation 28 3.2.2 Model Structure 31 3.2.3 Centralized Large Margin Cosine Loss (C-LMCL) 33 3.3 Feature Matching 36 3.4 Observation and Discussion 37 3.4.1 Feature Matching by Concatenated Features 37 3.4.2 Cross-subject Observation 39 3.4.3 Cross-dataset Observation 40 3.4.4 Summary and Motivation 41 CHAPTER 4 DATA AUGMENTATION ON TRAINING AND RECOGNITION 44 4.1 Palmprint ROI Augmentation 46 4.1.1 Oversampling with rotation 47 4.1.2 Data Warping Transformations 49 4.1.3 Hyper-Parameter Optimization 51 4.1.4 Loss Function for Extreme Samples 59 4.2 Reduced ResNeSt-50 63 4.2.1 ResNeSt 63 4.2.2 Reduced ResNeSt 65 4.2.3 Pre-trained Model 67 4.3 Multi-Transform Matching 68 4.3.1 Ensembled Matching 68 4.3.2 Transformations for Recognition Time 71 CHAPTER 5 PERFORMANCE EVALUATION 73 5.1 Datasets 73 5.1.1 PolyU Multispectral Palmprint Dataset 73 5.1.2 Tongji Contactless Palmprint Dataset 73 5.1.3 Tongji Mobile Palmprint Dataset 75 5.2 Evaluation of the Reduced ResNeSt-50 76 5.2.1 Pre-trained Model 77 5.2.2 Reduced ResNeSt 78 5.2.3 Model Comparison 79 5.3 Evaluation of the Data Warping Search 80 5.3.1 TPE Searching Result 82 5.3.2 Cross-dataset Evaluation of Different Data Warping 85 5.4 Evaluation of the Multi-Transform Matching and Oversampling 88 5.4.1 Evaluation of the Transformations within Multi-Transform Matching 88 5.4.2 Comparison of Different Matching Methods 90 5.5 Summary 93 CHAPTER 6 CONCLUSION AND FUTURE WORK 96 REFERENCES 98 | - |
dc.language.iso | en | - |
dc.title | 跨域掌紋辨識的資料擴增 | zh_TW |
dc.title | Data Augmentation for Cross-Domain Palmprint Recognition | en |
dc.title.alternative | Data Augmentation for Cross-Domain Palmprint Recognition | - |
dc.type | Thesis | - |
dc.date.schoolyear | 110-2 | - |
dc.description.degree | 碩士 | - |
dc.contributor.oralexamcommittee | 林澤;高榮鴻 | zh_TW |
dc.contributor.oralexamcommittee | Che Lin;Rung-Hung Gau | en |
dc.subject.keyword | 資料擴增, 掌紋辨識, 深度學習 | zh_TW |
dc.subject.keyword | Data Augmentation, Palmprint Recognition, Cross-Domain | en |
dc.relation.page | 104 | - |
dc.identifier.doi | 10.6342/NTU202203861 | - |
dc.rights.note | 同意授權(全球公開) | - |
dc.date.accepted | 2022-09-28 | - |
dc.contributor.author-college | 電機資訊學院 | - |
dc.contributor.author-dept | 電信工程學研究所 | - |
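The second augmentation strategy in the abstract, rotating each ROI image into four orientations and treating each orientation as a distinct class, can be sketched in a few lines. This is an illustrative numpy sketch, not the thesis code: the function name and the `y + k * num_classes` label scheme are assumptions, and it assumes square ROI images so all four rotations share one shape.

```python
import numpy as np

def oversample_with_rotation(images, labels, num_classes):
    """Oversample a training set by rotating each square ROI into
    0/90/180/270-degree orientations, treating each orientation as
    its own class: label y becomes y + k * num_classes for k turns."""
    aug_images, aug_labels = [], []
    for img, y in zip(images, labels):
        for k in range(4):  # k quarter-turns counter-clockwise
            aug_images.append(np.rot90(img, k))
            aug_labels.append(y + k * num_classes)
    return np.stack(aug_images), np.array(aug_labels)
```

Expanding the label space (rather than reusing the original label) keeps the four orientations separable as classes, which is what lets the same rotations later serve as distinct views of a palm at matching time.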
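The multi-transform matching described in the abstract is a test-time augmentation scheme: several transformed versions of the query ROI are each embedded and scored against the gallery, and the per-gallery scores are fused. A minimal sketch under stated assumptions: `extract` is a placeholder for the thesis's reduced ResNeSt-50 embedding, and mean fusion of cosine scores is an illustrative choice of ensemble rule, not necessarily the one used in the thesis.

```python
import numpy as np

def cosine_scores(feat, gallery_feats):
    """Cosine similarity between one query feature and every gallery feature."""
    feat = feat / np.linalg.norm(feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    return g @ feat

def multi_transform_match(query_img, gallery_feats, extract, transforms):
    """Test-time augmentation: score each transformed query against the
    gallery, fuse per-gallery scores by averaging, and return the best
    gallery index together with the fused score vector."""
    per_transform = np.stack([
        cosine_scores(extract(t(query_img)), gallery_feats)
        for t in transforms
    ])
    fused = per_transform.mean(axis=0)
    return int(np.argmax(fused)), fused
```

With the rotation-oversampled model above, the transform list would naturally include the same four rotations used at training time, so each query is compared in every orientation the model has learned to separate.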
Appears in Collections: | 電信工程學研究所
Files in This Item:
File | Size | Format | |
---|---|---|---|
U0001-2209202223002700.pdf | 10.89 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.