DSpace
NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/49198
Full metadata record
dc.contributor.advisor: 傅立成 (Li-Chen Fu)
dc.contributor.author: Chuan Kuo [en]
dc.contributor.author: 郭權 [zh_TW]
dc.date.accessioned: 2021-06-15T11:19:02Z
dc.date.available: 2023-08-11
dc.date.copyright: 2020-09-16
dc.date.issued: 2020
dc.date.submitted: 2020-08-13
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/49198
dc.description.abstract: 近年來,人物重新識別系統在計算機視覺領域引起了大量的關注,因為其具有多面向的應用,包括智慧家庭、健康照護以及監視系統。但實際應用上面臨著許多不同面向的挑戰:穿著相似但身份不同、照相機視角、背景雜亂、行人姿態、遮蔽等因素。這些因素降低了特徵的辨別性與穩定性,使得人物重新識別成為一項艱鉅的任務。
近年的研究已嘗試結合人類語義分割或外部屬性資訊,以更有效地捕捉人的區域,幫助深度學習。另一類方法則藉由注意力機制,使深度學習模型能自行學習著重於人的資訊,而非背景等不重要的區域。然而,僅基於卷積網路提取的特徵以及損失函數的設計,注意力機制很難有效率地學習到重要特徵。最後,許多方法已顯示分部資訊可以進一步改善表現,但如何將全域特徵合理地劃分為帶有豐富資訊的分部特徵,仍是值得進一步研究的領域。
針對上述挑戰,包括穿著相似但身份不同、照相機視角、背景雜亂、行人姿態等,我們結合人類語意分割和注意力機制的優勢,提出了基於語意分割的注意力機制。為了提供不同的人類資訊給注意力機制,語意人類解析模型會生成五種不同的解析遮罩,包括完整、人、非人、上半和下半遮罩。在解析遮罩的引導下,注意力機制可以更好地理解資訊並更有效地增強特徵。最後,基於分部特徵的概念,我們提出了空間相關特徵提取,可以有效利用人類語意分割中的資訊,並生成保留空間資訊且更穩定的特徵。除此之外,透過空間相關特徵提取,也可以有效地解決遮蔽問題。
為了驗證我們方法的有效性,我們在兩個具有挑戰性的資料集上進行了一系列實驗:Market-1501 和 DukeMTMC-reID。在 Rank1/mAP 上,我們的方法在 Market-1501 達到 95.37% / 89.15%,在 DukeMTMC-reID 達到 89.68% / 78.81%,優於大多數現今最先進的方法。
zh_TW
dc.description.abstract: Nowadays, person re-identification has attracted a great deal of attention in the area of computer vision because of its wide range of applications, including smart home, elderly care, and surveillance systems. Due to factors such as different identities with similar human appearance, camera viewpoint changes, background clutter, posture variation, and occlusion, person re-identification is a challenging task. These factors degrade the process of extracting robust and discriminative representations, hence preventing different identities from being successfully distinguished.
Recent studies have attempted to incorporate semantic human parsing results or externally predefined attributes to help capture human parts or important object regions and thus improve representation learning. Other works introduce attention mechanisms to focus on the more important parts of human images instead of the background. However, based only on CNN feature maps and loss-function supervision, it is hard for attention to learn well without any further information. Finally, many approaches have shown that part features can further improve performance, but how to divide the global feature into several reasonable part features that represent rich information is a question that needs further study.
In this thesis, to meet the aforementioned challenges in re-identification, we leverage the strengths of semantic human parsing and the attention mechanism and propose the Parsing-based Attention Block (PAB). Specifically, in order to provide different pieces of human information to the attention mechanism, a semantic human parsing model generates five different parsing masks: Full, Human, Nonhuman, Upper, and Lower. Under the guidance of the parsing masks, attention can better understand the information and enhance the features. Next, based on the idea of part features, we propose Space-related Feature Extraction (SFE), which effectively utilizes the knowledge from semantic human parsing and generates more stable features that preserve spatial information. Besides, with SFE we can also mitigate the problem of occlusion while performing the re-identification task. Finally, with a well-designed structure and loss function, we integrate PAB and SFE into the backbone model to construct a robust Re-ID model.
To verify the effectiveness of our approach, we perform a series of experiments on two challenging benchmarks: Market-1501 and DukeMTMC-reID. The experimental results show that our proposed method achieves 95.37% / 89.15% in Rank-1/mAP on Market-1501 and 89.68% / 78.81% on DukeMTMC-reID, which outperforms most state-of-the-art methods.
en
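The abstract names the method's building blocks but the record contains no code, so the following is a minimal, hypothetical PyTorch sketch of what a parsing-guided attention block could look like. Only the five mask types (Full, Human, Nonhuman, Upper, Lower) come from the abstract; the module name ParsingGuidedAttention, the 1x1 convolution, the residual fusion, and the binary cross-entropy guidance loss are illustrative assumptions, not the thesis's actual PAB or SFE design.

```python
# Hypothetical sketch only: layer choices and the guidance loss are
# assumptions; the thesis's actual PAB design is not in this record.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ParsingGuidedAttention(nn.Module):
    """Spatial attention whose learning is guided by semantic parsing masks."""

    def __init__(self, in_channels: int, num_masks: int = 5):
        super().__init__()
        # One attention map per parsing mask (Full, Human, Nonhuman, Upper, Lower).
        self.to_attn = nn.Conv2d(in_channels, num_masks, kernel_size=1)

    def forward(self, feat: torch.Tensor, masks: torch.Tensor = None):
        # feat:  (B, C, H, W) backbone feature map
        # masks: (B, 5, H, W) parsing masks in [0, 1], resized to (H, W)
        attn = torch.sigmoid(self.to_attn(feat))            # (B, 5, H, W)
        # Residual enhancement using the averaged attention map.
        enhanced = feat * (1.0 + attn.mean(dim=1, keepdim=True))
        # Guidance loss pulls the learned attention toward the parsing masks.
        guide_loss = None
        if masks is not None:
            guide_loss = F.binary_cross_entropy(attn, masks)
        return enhanced, guide_loss


# Toy usage with random tensors.
if __name__ == "__main__":
    pab = ParsingGuidedAttention(in_channels=256)
    x = torch.randn(2, 256, 24, 8)   # a typical Re-ID feature-map shape
    m = torch.rand(2, 5, 24, 8)      # stand-in for real parsing masks
    out, loss = pab(x, m)
    print(out.shape, loss.item())
```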
dc.description.provenance: Made available in DSpace on 2021-06-15T11:19:02Z (GMT). No. of bitstreams: 1
U0001-1208202018265700.pdf: 22452589 bytes, checksum: a88a10273d27a9efd0c93961df1fa567 (MD5)
Previous issue date: 2020
en
dc.description.tableofcontents:
口試委員審定書 i
誌謝 ii
摘要 iii
Abstract v
1 Introduction 1
1.1  Motivation 1
1.2  Literature Review 3
1.2.1 Human Detection 4
1.2.2 Person Re-Identification (Re-ID) 7
1.3  Contribution 10
1.4  Thesis Organization 10
2 Preliminaries 12
2.1  Convolutional Neural Networks 12
2.1.1 Convolutional Layers 14
2.1.2 Residual Network 16
2.2  Semantic Human Parsing 17
2.3  Loss Function 19
2.3.1 Softmax Loss 19
2.3.2 Triplet Loss with PK Batch and Batch Hard 20
2.3.3 Center Loss 22
2.4 Information Retrieval 23
2.5 Re-ranking Algorithm 24
3 Person Re-Identification (Re-ID) 26
3.1 System Overview 26
3.2 Semantic Human Parsing 27
3.2.1 Parsing Masks 27
3.2.2 Parsing Model and Input Image Size 32
3.3 Parsing-based Attention Block (PAB) 32
3.3.1 Attention Mechanism 32
3.3.2 Parsing-based Attention 34
3.4 Space-related Feature Extraction (SFE) 37
3.5 Re-ID Training and Inference Details 40
4 Experiments 44
4.1 Configuration 44
4.2 Training Details 45
4.3 Person Re-identification Dataset 46
4.3.1 Market-1501 Dataset 46
4.3.2 DukeMTMC-reID Dataset 47
4.3.3 Evaluation Metrics 49
4.4 Ablation Study 50
4.4.1 The results of parsing-based attention in different stages 51
4.4.2 The results of different parsing masks for attention block 52
4.4.3 The effectiveness of the proposed method 53
4.5 Person Re-Identification Results 54
4.5.1 The Results of Market-1501 Dataset 54
4.5.2 The Results of DukeMTMC-reID Dataset 55
4.6 The visualization of attention heatmap 56
4.6.1 Attention to human details 56
4.6.2 Attention to background details 57
4.6.3 Invariant to different camera viewpoints and posture 58
5 Conclusion 60
6 Future Works 61
Bibliography 62
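Chapter 4 (Section 4.3.3, Evaluation Metrics) reports results as Rank-1/mAP, the numbers quoted in the abstract. For reference, here is a hedged NumPy sketch of how those two retrieval metrics are conventionally computed for a query/gallery split. It omits the same-camera and junk-image filtering that the official Market-1501 and DukeMTMC-reID protocols apply, and the function name rank1_and_map is an assumption, not code from the thesis.

```python
# Hedged sketch of the usual Rank-1 / mAP retrieval metrics; it skips the
# same-camera and junk-image filtering of the official benchmark protocols,
# so treat it as illustrative, not the thesis's evaluation code.
import numpy as np


def rank1_and_map(q_feats, g_feats, q_ids, g_ids):
    """q_feats: (Nq, D), g_feats: (Ng, D) L2-normalized embeddings;
    q_ids / g_ids: integer identity labels."""
    dist = 1.0 - q_feats @ g_feats.T          # cosine distance, (Nq, Ng)
    rank1_hits, aps = [], []
    for i in range(len(q_ids)):
        order = np.argsort(dist[i])           # gallery sorted nearest-first
        matches = g_ids[order] == q_ids[i]    # boolean relevance per rank
        hit_ranks = np.where(matches)[0]
        if hit_ranks.size == 0:               # identity absent from gallery
            continue
        rank1_hits.append(float(matches[0]))  # is the top-1 result correct?
        # Average precision: precision at each rank holding a true match.
        precisions = [(k + 1) / (r + 1) for k, r in enumerate(hit_ranks)]
        aps.append(float(np.mean(precisions)))
    return float(np.mean(rank1_hits)), float(np.mean(aps))


# Toy usage with random unit-norm features.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q = rng.normal(size=(4, 128)); q /= np.linalg.norm(q, axis=1, keepdims=True)
    g = rng.normal(size=(10, 128)); g /= np.linalg.norm(g, axis=1, keepdims=True)
    r1, mAP = rank1_and_map(q, g, np.array([0, 1, 2, 3]), rng.integers(0, 4, 10))
    print(f"Rank-1: {r1:.3f}  mAP: {mAP:.3f}")
```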
dc.language.iso: en
dc.subject: 注意力機制 [zh_TW]
dc.subject: 深度學習 [zh_TW]
dc.subject: 資訊檢索 [zh_TW]
dc.subject: 行人再辨識 [zh_TW]
dc.subject: 人類語義分割 [zh_TW]
dc.subject: Human semantic parsing [en]
dc.subject: Information retrieval [en]
dc.subject: Attention mechanism [en]
dc.subject: Person re-identification [en]
dc.subject: Deep learning [en]
dc.title: 基於語義的人像解析及注意機制之強化式行人再識別系統 [zh_TW]
dc.title: Enhanced Person Re-identification Based on Semantic Human Parsing with Attention Mechanism [en]
dc.type: Thesis
dc.date.schoolyear: 108-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 陳祝嵩 (Chu-Song Chen), 范欽雄 (Chin-Shyurng Fahn), 黃正民 (Cheng-Ming Huang), 王鈺強 (Yu-Chiang Wang)
dc.subject.keyword: 深度學習, 資訊檢索, 行人再辨識, 人類語義分割, 注意力機制 [zh_TW]
dc.subject.keyword: Deep learning, Information retrieval, Person re-identification, Human semantic parsing, Attention mechanism [en]
dc.relation.page: 69
dc.identifier.doi: 10.6342/NTU202003139
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2020-08-14
dc.contributor.author-college: 電機資訊學院 [zh_TW]
dc.contributor.author-dept: 電機工程學研究所 [zh_TW]
Appears in Collections: 電機工程學系

Files in This Item:
File: U0001-1208202018265700.pdf (restricted, not publicly accessible)
Size: 21.93 MB
Format: Adobe PDF