Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/49198

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 傅立成(Li-Chen Fu) | |
| dc.contributor.author | Chuan Kuo | en |
| dc.contributor.author | 郭權 | zh_TW |
| dc.date.accessioned | 2021-06-15T11:19:02Z | - |
| dc.date.available | 2023-08-11 | |
| dc.date.copyright | 2020-09-16 | |
| dc.date.issued | 2020 | |
| dc.date.submitted | 2020-08-13 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/49198 | - |
| dc.description.abstract | 近年來,人物重新識別系統已經在計算機視覺領域引起了大量的關注,因為其具有多面向的應用,包括智慧家庭、健康照護以及監視系統。但是實際應用上面臨著許多不同面向的挑戰:穿著相似但身份不同、照相機視角、背景雜亂、行人姿態、遮蔽等因素。這些因素降低了特徵的辨別性以及穩定性,使得人物重新識別成為一項艱鉅的任務。近年的研究已經嘗試結合人類語義分割或外部屬性的資訊,以更有效地捕捉人的區域來幫助深度學習。另一部分研究則藉由注意力機制,使深度學習模型可以自行學習著重於人的資訊,而不是背景等不重要的區域。然而,僅基於卷積網路提取特徵以及損失函數的設計,是很難有效率地學習到重要特徵的。最後,許多方法已經顯示分部資訊可以進一步改善表現,但是如何將特徵劃分成代表豐富信息的合理分部特徵,是值得進一步研究的領域。 針對上述挑戰,包括穿著相似但身份不同、照相機視角、背景雜亂、行人姿態等,我們結合了人類語意分割和注意力機制的優勢,提出了基於語意分割的注意力機制。為了提供不同的人類資訊給注意力機制,語意人類解析模型將生成五個不同的解析遮罩,包括完整、人、非人、上半和下半遮罩。在解析遮罩的引導下,注意力機制可以更好地理解信息並更有效地增強特徵。最後,基於分部特徵的概念,我們提出了空間相關特徵提取,可以有效利用人類語意分割中的信息,並生成保留空間資訊且更穩定的特徵。除此之外,透過空間相關特徵提取,也可以有效地解決遮蔽問題。 為了驗證我們方法的有效性,我們在兩個具有挑戰性的資料集上進行了一系列實驗:Market-1501 和 DukeMTMC-reID。我們在 Rank-1/mAP 的結果,在 Market-1501 達到了 95.37% / 89.15%,在 DukeMTMC-reID 達到了 89.68% / 78.81%,這樣的表現高過大多數現今最新的方法。 | zh_TW |
| dc.description.abstract | Nowadays, person re-identification has attracted considerable attention in the area of computer vision because of its wide range of applications, including smart homes, elderly care, and surveillance systems. Due to factors such as different identities with similar human appearance, camera viewpoint changes, background clutter, posture variation, and occlusion, person re-identification is a challenging task. These factors degrade the process of extracting robust and discriminative representations, hence preventing different identities from being successfully distinguished. Recent studies have attempted to incorporate semantic human parsing results or externally predefined attributes to help capture human parts or important object regions and thereby improve representation learning. Other works introduce attention mechanisms to focus on the more important parts of human images instead of the background. However, based only on CNN feature maps and loss-function supervision, it is hard for attention to learn well without any further information. Last, many approaches have shown that part features can further improve performance, but how to divide a global feature into several reasonable part features that represent rich information is a field that needs further study. In this thesis, to meet the aforementioned challenges for re-identification, we leverage the strengths of semantic human parsing and the attention mechanism and propose the Parsing-based Attention Block (PAB). Specifically, in order to provide different pieces of human information to the attention mechanism, a semantic human parsing model generates five different parsing masks: Full, Human, Non-human, Upper, and Lower. Under the guidance of these parsing masks, attention can better understand the information and enhance the features. Next, based on the idea of part features, we propose Space-related Feature Extraction (SFE), which effectively utilizes the knowledge from semantic human parsing and generates more stable features that preserve spatial information. Besides, with SFE we can also alleviate the problem of occlusion while performing the re-identification task. Last, with a well-designed structure and loss function, we integrate PAB and SFE into the backbone model to construct a robust Re-ID model. To verify the effectiveness of our approach, we perform a series of experiments on two challenging benchmarks: Market-1501 and DukeMTMC-reID. The experimental results show that our proposed method achieves 95.37% / 89.15% in Rank-1/mAP on Market-1501 and 89.68% / 78.81% on DukeMTMC-reID, which outperforms most state-of-the-art methods. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-15T11:19:02Z (GMT). No. of bitstreams: 1 U0001-1208202018265700.pdf: 22452589 bytes, checksum: a88a10273d27a9efd0c93961df1fa567 (MD5) Previous issue date: 2020 | en |
| dc.description.tableofcontents | 口試委員審定書 i 誌謝 ii 摘要 iii Abstract v 1 Introduction 1 1.1 Motivation 1 1.2 Literature Review 3 1.2.1 Human Detection 4 1.2.2 Person Re-Identification (Re-ID) 7 1.3 Contribution 10 1.4 Thesis Organization 10 2 Preliminaries 12 2.1 Convolutional Neural Networks 12 2.1.1 Convolutional Layers 14 2.1.2 Residual Network 16 2.2 Semantic Human Parsing 17 2.3 Loss Function 19 2.3.1 Softmax Loss 19 2.3.2 Triplet Loss with PK Batch and Batch Hard 20 2.3.3 Center Loss 22 2.4 Information Retrieval 23 2.5 Re-ranking Algorithm 24 3 Person Re-Identification (Re-ID) 26 3.1 System Overview 26 3.2 Semantic Human Parsing 27 3.2.1 Parsing Masks 27 3.2.2 Parsing Model and Input Image Size 32 3.3 Parsing-based Attention Block (PAB) 32 3.3.1 Attention Mechanism 32 3.3.2 Parsing-based Attention 34 3.4 Space-related Feature Extraction (SFE) 37 3.5 Re-ID Training and Inference Details 40 4 Experiments 44 4.1 Configuration 44 4.2 Training Details 45 4.3 Person Re-identification Dataset 46 4.3.1 Market-1501 Dataset 46 4.3.2 DukeMTMC-reID Dataset 47 4.3.3 Evaluation Metrics 49 4.4 Ablation Study 50 4.4.1 The results of parsing-based attention in different stages 51 4.4.2 The results of different parsing masks for attention block 52 4.4.3 The effectiveness of the proposed method 53 4.5 Person Re-Identification Results 54 4.5.1 The Results of Market-1501 Dataset 54 4.5.2 The Results of Duke-MTMC-reID Dataset 55 4.6 The visualization of attention heatmap 56 4.6.1 Attention to human details 56 4.6.2 Attention to background details 57 4.6.3 Invariant to different camera viewpoints and posture 58 5 Conclusion 60 6 Future Works 61 Bibliography 62 | |
| dc.language.iso | en | |
| dc.subject | 注意力機制 | zh_TW |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | 資訊檢索 | zh_TW |
| dc.subject | 行人再辨識 | zh_TW |
| dc.subject | 人類語義分割 | zh_TW |
| dc.subject | Human semantic parsing | en |
| dc.subject | Information retrieval | en |
| dc.subject | Attention mechanism | en |
| dc.subject | Person re-identification | en |
| dc.subject | Deep learning | en |
| dc.title | 基於語義的人像解析及注意機制之強化式行人再識別系統 | zh_TW |
| dc.title | Enhanced Person Re-identification Based on Semantic Human Parsing with Attention Mechanism | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 108-2 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 陳祝嵩(Chu-Song Chen),范欽雄(Chin-Shyurng Fahn),黃正民(Cheng-Ming Huang),王鈺強(Yu-Chiang Wang) | |
| dc.subject.keyword | 深度學習,資訊檢索,行人再辨識,人類語義分割,注意力機制 | zh_TW |
| dc.subject.keyword | Deep learning, Information retrieval, Person re-identification, Human semantic parsing, Attention mechanism | en |
| dc.relation.page | 69 | |
| dc.identifier.doi | 10.6342/NTU202003139 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2020-08-14 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
| Appears in Collections: | 電機工程學系 | |
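The abstracts report results as Rank-1 accuracy and mAP (e.g. 95.37% / 89.15% on Market-1501). As a quick reference, here is a minimal, self-contained sketch of how these two retrieval metrics are conventionally computed; the query/gallery rankings below are hypothetical toy data, and a real Re-ID evaluation additionally filters out same-camera gallery matches before scoring.

```python
# Hedged sketch: Rank-1 and mAP for a retrieval benchmark (standard
# definitions, not the thesis's actual evaluation code).

def rank1_and_map(ranked_ids_per_query, query_ids):
    """ranked_ids_per_query: for each query, the gallery identity labels
    sorted by descending similarity. Returns (Rank-1 accuracy, mAP)."""
    rank1_hits, ap_sum = 0, 0.0
    for ranked, qid in zip(ranked_ids_per_query, query_ids):
        matches = [gid == qid for gid in ranked]
        if matches[0]:                       # correct identity at rank 1
            rank1_hits += 1
        # Average precision: precision at each rank where a match occurs
        hits, precisions = 0, []
        for i, m in enumerate(matches):
            if m:
                hits += 1
                precisions.append(hits / (i + 1))
        ap_sum += sum(precisions) / max(hits, 1)
    n = len(query_ids)
    return rank1_hits / n, ap_sum / n

# Toy example: two queries, each ranked against a 4-item gallery.
r1, m = rank1_and_map([[1, 2, 1, 3], [2, 3, 3, 1]], [1, 3])
print(round(r1, 2), round(m, 3))  # → 0.5 0.708
```

On the toy data, only the first query has a correct top-ranked match (Rank-1 = 0.5), and the mean of the two average precisions is 17/24 ≈ 0.708.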
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| U0001-1208202018265700.pdf (Restricted access) | 21.93 MB | Adobe PDF | |
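The table of contents lists "Triplet Loss with PK Batch and Batch Hard" (Sec. 2.3.2). As an illustration of that standard mining scheme, the sketch below selects, for each anchor in a PK batch, its hardest (farthest) positive and hardest (closest) negative before applying the margin hinge; all embeddings, labels, and the margin are hypothetical toy values, not taken from the thesis.

```python
# Hedged sketch of batch-hard triplet loss over a PK batch
# (P identities, K samples each); pure-Python toy implementation.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    total = 0.0
    for i, (e, l) in enumerate(zip(embeddings, labels)):
        # Hardest positive: farthest same-identity sample in the batch
        d_pos = max(euclidean(e, embeddings[j])
                    for j in range(len(labels)) if labels[j] == l and j != i)
        # Hardest negative: closest different-identity sample in the batch
        d_neg = min(euclidean(e, embeddings[j])
                    for j in range(len(labels)) if labels[j] != l)
        total += max(0.0, d_pos - d_neg + margin)
    return total / len(labels)

# Toy PK batch: P=2 identities, K=2 samples each, 1-D-like embeddings.
embs = [[0.0, 0.0], [0.5, 0.0], [0.6, 0.0], [1.0, 0.0]]
loss = batch_hard_triplet_loss(embs, [0, 0, 1, 1])
print(round(loss, 4))  # → 0.425
```

With this toy batch, the per-anchor hinge terms are 0.2, 0.7, 0.6, and 0.2, giving a mean loss of 0.425; pulling each identity's samples together and pushing the identities apart would drive the loss to zero.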
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
