Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/63642
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 丁建均(Jian-Jiun Ding) | |
dc.contributor.author | Jun-Zuo Liu | en |
dc.contributor.author | 劉俊佐 | zh_TW |
dc.date.accessioned | 2021-06-16T17:15:30Z | - |
dc.date.available | 2015-08-21 | |
dc.date.copyright | 2012-08-21 | |
dc.date.issued | 2012 | |
dc.date.submitted | 2012-08-17 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/63642 | - |
dc.description.abstract | 基於可接觸資料的增加與計算技術的快速發展,過去十年,機器學習已因為人類生活中大量的自動化需求而吸引了許多注意力。現今在物件識別、機器人學、人工智慧、電腦視覺、甚至是經濟學等科學領域當中,機器學習已成為從資料當中抽取與探索重要資訊不可或缺的角色。
另一方面,在過去幾十年,人臉偵測與辨識等人臉相關的主題已漸漸成為物件識別與電腦視覺中的重要研究領域。其原因來自於自動化識別與監視系統的需求、對於人類視覺系統在人臉感知上的興趣、與人機互動介面的設計開發等。 在本篇論文當中,我們專注於使用機器學習技術來測定人類的表情。我們提出了一個人類表情辨識的架構,其包含了特徵抽取、抗雜訊機制、降低維度、與表情測定等四個步驟。我們改善了傳統的區域二值化樣本特徵抽取方法 (local binary pattern, LBP) 並結合另一種近年發展出的區域相位量化特徵抽取方法 (local phase quantization, LPQ) 作為人臉表情的特徵表示式。為了讓抽取出的人臉表情特徵更具代表性並消除對於表情辨識不重要的特徵,我們特別提出了一種抗雜訊機制。這個機制可讓我們更妥善地利用擷取出的表情特徵,使得之後的降維及辨識工作更具效果。不同於以往的降維方法,我們特別根據人臉表情的特性設計了針對表情辨識而做的降維方法。 最後根據降維完後的表情特徵,我們使用常見的支持向量機和 K-最近鄰居分類器 (support vector machine, SVM 與 K-nearest neighbor, KNN) 來判斷可能的表情。 實驗結果顯示,在普遍使用的 JAFFE 資料庫中,我們提出的架構和演算法跟現有的其他方法比較能達到較好的辨識率。 | zh_TW |
dc.description.abstract | With the increasing amount of accessible data and the rapid development of computational technology, machine learning has attracted a great deal of attention over the last ten years because of the strong demand for automation in human life. In disciplines such as pattern recognition, robotics, artificial intelligence, computer vision, and even economics, machine learning has become an indispensable tool for extracting and discovering valuable information from data.
On the other hand, human-face-related topics such as face detection and recognition have become important research fields in pattern recognition and computer vision during the last few decades. This is due to the need for automatic recognition and surveillance systems, the interest in how the human visual system perceives faces, and the design of human-computer interfaces. In this thesis, we focus on using machine learning techniques for facial expression recognition. A facial expression recognition framework is proposed, consisting of four steps: feature extraction, a denoising mechanism, dimensionality reduction, and facial expression determination. The widely used local binary pattern (LBP) feature is modified and combined with a newer feature extraction method, local phase quantization (LPQ), to represent the facial expression. Since the extracted features are noisy and contain information unrelated to the expression recognition task, a denoising mechanism is proposed; the denoised features are more representative of facial expression. Unlike existing dimensionality reduction algorithms, an expression-specific dimensionality reduction algorithm is proposed based on the special properties of facial expressions. Finally, the reduced features are fed into the widely used support vector machine (SVM) and K-nearest-neighbor (KNN) classifiers. Experimental results show that the proposed framework and algorithms achieve a higher recognition rate than existing methods on the commonly used JAFFE database. | en |
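The abstract above builds on the standard local binary pattern operator; the thesis's modified "es-LBP" variant is defined in Chapter 5 of the full text and is not reproduced here. As a hedged illustration only, a minimal numpy sketch of the basic 8-neighbour, radius-1 LBP and its histogram descriptor (assuming the common `>=` comparison convention; function names are ours):

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbour LBP: each interior pixel is replaced by an
    8-bit code whose bits record whether each neighbour is >= the
    centre pixel."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]                      # interior (centre) pixels
    # neighbour offsets listed clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        # shifted view of the image aligned with the centre pixels
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.int32) << bit
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes, the usual texture descriptor."""
    code = lbp_8_1(img)
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

In expression-recognition pipelines such as the one described above, histograms like this are typically computed per image block and concatenated into one feature vector before dimensionality reduction.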
dc.description.provenance | Made available in DSpace on 2021-06-16T17:15:30Z (GMT). No. of bitstreams: 1 ntu-101-R99942103-1.pdf: 3462511 bytes, checksum: c1fe8ec5a0c0d6de360142c0f3fdaeca (MD5) Previous issue date: 2012 | en |
dc.description.tableofcontents | CONTENTS
口試委員會審定書
誌謝 i
中文摘要 ii
ABSTRACT iii
CONTENTS v
LIST OF FIGURES viii
LIST OF TABLES xiii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Main Contribution 1
1.3 Organization 2
1.4 Notation 3
Chapter 2 Automatic Facial Expression Analysis 5
2.1 Facial Feature Extraction 6
2.1.1 The Comparison between Different Types of Extraction Methods 7
2.2 Appearance-based Features 8
2.2.1 Gabor Wavelet Filter 8
2.2.2 Local Binary Patterns (LBP) 9
2.3 Model-based Features 10
2.3.1 Active Appearance Model (AAM) Derived Representations 10
2.4 Geometric-based Features 13
2.4.1 Feature Tracking 13
2.5 Dynamic Features 18
2.5.1 Volume Local Binary Patterns 20
2.5.2 Local Binary Patterns from Three Orthogonal Planes 25
Chapter 3 Dimensionality Reduction 31
3.1 Linear Reduction Techniques 31
3.1.1 Principal Component Analysis 31
3.1.2 Linear Discriminant Analysis 33
3.1.3 Multidimensional Scaling (MDS) 37
3.2 Non-linear Approaches for Dimensionality Reduction 41
3.2.1 Isomap 41
3.2.2 Locally Linear Embedding (LLE) 47
3.2.3 Laplacian Eigenmap 51
3.2.4 Locality Preserving Projections (LPP) 54
3.2.5 Orthogonal Neighborhood Preserving Projection 59
Chapter 4 Classification 63
4.1 Machine Learning 63
4.2 Linear Regression 64
4.3 Support Vector Machine 69
Chapter 5 Proposed Methods and Modification 77
5.1 The Proposed Framework and Important Factors 77
5.1.1 The Proposed Framework 78
5.1.2 Important Factors 79
5.1.3 The Adopted Database 80
5.1.4 The Adopted Face Detection Method 82
5.2 The Modified Feature Extraction Algorithm 82
5.2.1 The Proposed es-LBP Feature 83
5.3 The Employed Local Phase Quantization Feature 86
5.4 The Proposed Denoising Mechanism 89
5.5 The Proposed Dimensionality Reduction Algorithm 91
5.5.1 The Concept of Opposite Relationship 91
5.5.2 The Algorithm of Prior-LPP 92
5.6 Simulation Results 97
5.6.1 Comparison between the Proposed es-LBP Feature and the Original LBP Feature 97
5.6.2 The Investigation on the New LPQ Feature for the JAFFE Database 100
5.6.3 More Discussions about the Proposed Framework 104
5.6.4 Comparisons with Existing Algorithms 108
Chapter 6 Conclusion 111
Reference 112 | |
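Chapters 3 and 4 of the contents above cover dimensionality reduction and classification. As a hedged sketch only (the thesis's own Prior-LPP reducer is defined in Section 5.5 of the full text and is not reproduced here), a minimal numpy illustration of a reduce-then-classify back end, using PCA as a stand-in reducer and a plain K-nearest-neighbour vote; all function names are ours:

```python
import numpy as np

def pca_reduce(X, d):
    """Project the rows of X onto the top-d principal components.
    Returns the reduced data plus (mean, components) for projecting
    new samples. A stand-in for the thesis's Prior-LPP reducer."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # principal directions are the right singular vectors of the
    # centred data matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    comps = Vt[:d]
    return Xc @ comps.T, (mean, comps)

def knn_predict(train_Z, train_y, z, k=3):
    """Plain K-nearest-neighbour majority vote in the reduced space."""
    dist = np.linalg.norm(train_Z - z, axis=1)
    nearest = np.argsort(dist)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

A new sample `x` is classified by projecting it with the stored mean and components, `z = (x - mean) @ comps.T`, and voting among its neighbours; an SVM, as used in the thesis, would replace the voting step.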
dc.language.iso | en | |
dc.title | 依據對稱特徵及新式區域保留投影技術的改良式表情辨識系統 | zh_TW |
dc.title | Improved Facial Expression Recognition System Based on Symmetric Features and New Locality Preserving Projection | en |
dc.type | Thesis | |
dc.date.schoolyear | 100-2 | |
dc.description.degree | 碩士 | |
dc.contributor.oralexamcommittee | 郭景明(Jing-Ming Guo),曾易聰(Yi-Chong Zeng) | |
dc.subject.keyword | 機器學習,特徵抽取,降維,流形學習,人臉表情辨識 | zh_TW |
dc.subject.keyword | Machine learning,feature extraction,dimensionality reduction,manifold learning,facial expression recognition | en |
dc.relation.page | 121 | |
dc.rights.note | 有償授權 | |
dc.date.accepted | 2012-08-19 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電信工程學研究所 | zh_TW |
Appears in Collections: | 電信工程學研究所
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-101-1.pdf (currently not authorized for public access) | 3.38 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.