Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51002
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 李琳山 | |
dc.contributor.author | Bo-Hsiang Tseng | en |
dc.contributor.author | 曾柏翔 | zh_TW |
dc.date.accessioned | 2021-06-15T13:23:44Z | - |
dc.date.available | 2019-07-04 | |
dc.date.copyright | 2016-07-04 | |
dc.date.issued | 2015 | |
dc.date.submitted | 2016-06-24 | |
dc.identifier.citation | [1] Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al., “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” Signal Processing Magazine, IEEE, vol. 29, no. 6, pp. 82–97, 2012.
[2] Ronan Collobert and Jason Weston, “A unified architecture for natural language processing: Deep neural networks with multitask learning,” in Proceedings of the 25th International Conference on Machine Learning. ACM, 2008, pp. 160–167.
[3] George E Dahl, Dong Yu, Li Deng, and Alex Acero, “Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition,” Audio, Speech, and Language Processing, IEEE Transactions on, vol. 20, no. 1, pp. 30–42, 2012.
[4] Jerome R Bellegarda, “Statistical language model adaptation: review and perspectives,” Speech Communication, vol. 42, no. 1, pp. 93–108, 2004.
[5] Aaron Heidel and Lin-shan Lee, “Robust topic inference for latent semantic language model adaptation,” in Automatic Speech Recognition & Understanding, 2007. ASRU. IEEE Workshop on. IEEE, 2007, pp. 177–182.
[6] Tsung-Hsien Wen, Hung-Yi Lee, Tai-Yuan Chen, and Lin-Shan Lee, “Personalized language modeling by crowd sourcing with social network data for voice access of cloud applications,” in Spoken Language Technology Workshop (SLT), 2012 IEEE. IEEE, 2012, pp. 188–193.
[7] John Paolillo, “The virtual speech community: Social network and language variation on IRC,” Journal of Computer-Mediated Communication, vol. 4, no. 4, 1999.
[8] Devan Rosen and Margaret Corbit, “Social network analysis in virtual environments,” in Proceedings of the 20th ACM Conference on Hypertext and Hypermedia. ACM, 2009, pp. 317–322.
[9] Christopher J Leggetter and Philip C Woodland, “Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models,” Computer Speech & Language, vol. 9, no. 2, pp. 171–185, 1995.
[10] Jean-Luc Gauvain and Chin-Hui Lee, “Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains,” Speech and Audio Processing, IEEE Transactions on, vol. 2, no. 2, pp. 291–298, 1994.
[11] Phil C Woodland, “Speaker adaptation for continuous density HMMs: A review,” in ISCA Tutorial and Research Workshop (ITRW) on Adaptation Methods for Speech Recognition, 2001.
[12] Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai, “Class-based n-gram models of natural language,” Computational Linguistics, vol. 18, no. 4, pp. 467–479, 1992.
[13] Joshua T Goodman, “A bit of progress in language modeling,” Computer Speech & Language, vol. 15, no. 4, pp. 403–434, 2001.
[14] William A Gale and Geoffrey Sampson, “Good–Turing frequency estimation without tears,” Journal of Quantitative Linguistics, vol. 2, no. 3, pp. 217–237, 1995.
[15] Frankie James, “Modified Kneser-Ney smoothing of n-gram models,” Research Institute for Advanced Computer Science, Tech. Rep. 00.07, 2000.
[16] Geoffrey Hinton, “A practical guide to training restricted Boltzmann machines,” Momentum, vol. 9, no. 1, p. 926, 2010.
[17] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin, “A neural probabilistic language model,” The Journal of Machine Learning Research, vol. 3, pp. 1137–1155, 2003.
[18] Junho Park, Xunying Liu, Mark JF Gales, and Philip C Woodland, “Improved neural network based language modelling and adaptation,” in INTERSPEECH, 2010, pp. 1041–1044.
[19] Hai-Son Le, Ilya Oparin, Alexandre Allauzen, Jean-Luc Gauvain, and François Yvon, “Structured output layer neural network language model,” in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on. IEEE, 2011, pp. 5524–5527.
[20] Holger Schwenk and Jean-Luc Gauvain, “Connectionist language modeling for large vocabulary continuous speech recognition,” in Acoustics, Speech, and Signal Processing (ICASSP), 2002 IEEE International Conference on. IEEE, 2002, vol. 1, pp. I–765.
[21] Holger Schwenk and Jean-Luc Gauvain, “Neural network language models for conversational speech recognition,” in INTERSPEECH, 2004.
[22] Stefan Kombrink, Tomas Mikolov, Martin Karafiát, and Lukáš Burget, “Recurrent neural network based language modeling in meeting recognition,” in INTERSPEECH, 2011, pp. 2877–2880.
[23] Tomáš Mikolov, Stefan Kombrink, Lukáš Burget, Jan Honza Černocký, and Sanjeev Khudanpur, “Extensions of recurrent neural network language model,” in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on. IEEE, 2011, pp. 5528–5531.
[24] Tomas Mikolov and Geoffrey Zweig, “Context dependent recurrent neural network language model,” in SLT, 2012, pp. 234–239.
[25] Tomas Mikolov, Anoop Deoras, Stefan Kombrink, Lukas Burget, and Jan Černocký, “Empirical evaluation and combination of advanced language modeling techniques,” in INTERSPEECH, 2011, pp. 605–608.
[26] Tomáš Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and J Černocký, “Subword language modeling with neural networks,” preprint (http://www.fit.vutbr.cz/imikolov/rnnlm/char.pdf), 2012.
[27] Rukmini M Iyer and Mari Ostendorf, “Modeling long distance dependence in language: Topic mixtures versus dynamic cache models,” Speech and Audio Processing, IEEE Transactions on, vol. 7, no. 1, pp. 30–39, 1999.
[28] Aaron Heidel, Hung-an Chang, and Lin-shan Lee, “Language model adaptation using latent Dirichlet allocation and an efficient topic inference algorithm,” in INTERSPEECH, 2007, pp. 2361–2364.
[29] Marcello Federico, “Efficient language model adaptation through MDI estimation,” in Eurospeech, 1999.
[30] Ciprian Chelba and Frederick Jelinek, “Structured language modeling,” Computer Speech & Language, vol. 14, no. 4, pp. 283–332, 2000.
[31] Anhai Doan, Raghu Ramakrishnan, and Alon Y Halevy, “Crowdsourcing systems on the world-wide web,” Communications of the ACM, vol. 54, no. 4, pp. 86–96, 2011.
[32] Robert Munro, Steven Bethard, Victor Kuperman, Vicky Tzuyin Lai, Robin Melnick, Christopher Potts, Tyler Schnoebelen, and Harry Tily, “Crowdsourcing and language studies: the new generation of linguistic data,” in Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Association for Computational Linguistics, 2010, pp. 122–130.
[33] Jingjing Liu, Scott Cyphers, Panupong Pasupat, Ian McGraw, and Jim Glass, “A conversational movie search system based on conditional random fields,” in INTERSPEECH, 2012.
[34] Ian McGraw, Scott Cyphers, Panupong Pasupat, Jingjing Liu, and Jim Glass, “Automating crowd-supervised learning for spoken language systems,” in INTERSPEECH, 2012.
[35] Tsung-Hsien Wen, Aaron Heidel, Hung-yi Lee, Yu Tsao, and Lin-Shan Lee, “Recurrent neural network based language model personalization by social network crowdsourcing,” in INTERSPEECH, 2013, pp. 2703–2707.
[36] Abdul Manan Ahmad, Saliza Ismail, and Den Fairol Samaon, “Recurrent neural network with backpropagation through time for speech recognition,” in Communications and Information Technology, 2004. ISCIT 2004. IEEE International Symposium on. IEEE, 2004, vol. 1, pp. 98–102.
[37] Joy Mazumdar and Ronald G Harley, “Recurrent neural networks trained with backpropagation through time algorithm to estimate nonlinear load harmonic currents,” Industrial Electronics, IEEE Transactions on, vol. 55, no. 9, pp. 3484–3491, 2008.
[38] Yangyang Shi, Pascal Wiggers, and Catholijn M Jonker, “Towards recurrent neural networks language models with linguistic and contextual features,” in INTERSPEECH, 2012.
[39] Yoshua Bengio, Patrice Simard, and Paolo Frasconi, “Learning long-term dependencies with gradient descent is difficult,” Neural Networks, IEEE Transactions on, vol. 5, no. 2, pp. 157–166, 1994.
[40] Xunying Liu, Mark JF Gales, Philip C Woodland, et al., “Improving LVCSR system combination using neural network language model cross adaptation,” in INTERSPEECH, 2011, pp. 2857–2860.
[41] Yehuda Koren, Robert Bell, and Chris Volinsky, “Matrix factorization techniques for recommender systems,” Computer, no. 8, pp. 30–37, 2009.
[42] Thomas K Landauer, Peter W Foltz, and Darrell Laham, “An introduction to latent semantic analysis,” Discourse Processes, vol. 25, no. 2-3, pp. 259–284, 1998.
[43] Keh-Jiann Chen and Shing-Huan Liu, “Word identification for Mandarin Chinese sentences,” in Proceedings of the 14th Conference on Computational Linguistics - Volume 1. Association for Computational Linguistics, 1992, pp. 101–107.
[44] Wei-Yun Ma and Keh-Jiann Chen, “A bottom-up merging algorithm for Chinese unknown word extraction,” in Proceedings of the Second SIGHAN Workshop on Chinese Language Processing - Volume 17. Association for Computational Linguistics, 2003, pp. 31–38.
[45] Andreas Stolcke et al., “SRILM - an extensible language modeling toolkit,” in INTERSPEECH, 2002.
[46] Tomas Mikolov, Stefan Kombrink, Anoop Deoras, Lukáš Burget, and Jan Černocký, “RNNLM - recurrent neural network language modeling toolkit,” in Proc. of the 2011 ASRU Workshop, 2011, pp. 196–201.
[47] George A Miller, “WordNet: a lexical database for English,” Communications of the ACM, vol. 38, no. 11, pp. 39–41, 1995.
[48] Steve Young, Gunnar Evermann, Mark Gales, Thomas Hain, Dan Kershaw, Xunying Liu, Gareth Moore, Julian Odell, Dave Ollason, Dan Povey, et al., The HTK Book, vol. 2, Entropic Cambridge Research Laboratory, Cambridge, 1997.
[49] Thomas L Griffiths and Mark Steyvers, “Finding scientific topics,” Proceedings of the National Academy of Sciences, vol. 101, no. suppl 1, pp. 5228–5235, 2004.
[50] David M Blei, Andrew Y Ng, and Michael I Jordan, “Latent Dirichlet allocation,” The Journal of Machine Learning Research, vol. 3, pp. 993–1022, 2003.
[51] Gregor Heinrich, “Parameter estimation for text analysis,” Technical report, 2005.
[52] Ian Porteous, David Newman, Alexander Ihler, Arthur Asuncion, Padhraic Smyth, and Max Welling, “Fast collapsed Gibbs sampling for latent Dirichlet allocation,” in Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2008, pp. 569–577.
[53] Yee W Teh, David Newman, and Max Welling, “A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation,” in Advances in Neural Information Processing Systems, 2006, pp. 1353–1360.
[54] George Saon, Hagen Soltau, David Nahamoo, and Michael Picheny, “Speaker adaptation of neural network acoustic models using i-vectors,” in Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on. IEEE, 2013, pp. 55–59.
[55] Vishwa Gupta, Patrick Kenny, Pierre Ouellet, and Themos Stafylakis, “I-vector-based speaker adaptation of deep neural networks for French broadcast audio transcription,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 6334–6338.
[56] K Krishna and M Narasimha Murty, “Genetic k-means algorithm,” Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, vol. 29, no. 3, pp. 433–439, 1999.
[57] Zhexue Huang, “Extensions to the k-means algorithm for clustering large data sets with categorical values,” Data Mining and Knowledge Discovery, vol. 2, no. 3, pp. 283–304, 1998.
[58] Andrew Kachites McCallum, “MALLET: A machine learning for language toolkit,” http://mallet.cs.umass.edu, 2002.
[59] Daniel D Lee and H Sebastian Seung, “Algorithms for non-negative matrix factorization,” in Advances in Neural Information Processing Systems, 2001, pp. 556–562.
[60] Daniel D Lee and H Sebastian Seung, “Learning the parts of objects by non-negative matrix factorization,” Nature, vol. 401, no. 6755, pp. 788–791, 1999.
[61] Patrik O Hoyer, “Non-negative matrix factorization with sparseness constraints,” The Journal of Machine Learning Research, vol. 5, pp. 1457–1469, 2004.
[62] Ian Jolliffe, Principal Component Analysis, Wiley Online Library, 2002.
[63] Anoop Deoras, Tomáš Mikolov, Stefan Kombrink, Martin Karafiát, and Sanjeev Khudanpur, “Variational approximation of long-span language models for LVCSR,” in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on. IEEE, 2011, pp. 5532–5535.
[64] Ebru Arisoy, Stanley F Chen, Bhuvana Ramabhadran, and Abhinav Sethy, “Converting neural network language models into back-off language models for efficient decoding in automatic speech recognition,” Audio, Speech, and Language Processing, IEEE/ACM Transactions on, vol. 22, no. 1, pp. 184–192, 2014.
[65] Andreas Stolcke, “Entropy-based pruning of backoff language models,” arXiv preprint cs/0006025, 2000.
[66] Ciprian Chelba, Thorsten Brants, Will Neveitt, and Peng Xu, “Study on interaction between entropy pruning and Kneser-Ney smoothing,” in INTERSPEECH, 2010, pp. 2422–2425. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51002 | - |
dc.description.abstract | This thesis investigates language model personalization for speech recognition, examining two different architectures and analyzing their respective strengths and weaknesses. In addition, to obtain a more complete personalized language model, we also propose methods and analyses for initializing the model and for applying it effectively within the recognition system.
With the spread of the Internet, mobile phones, and wearable devices, people generate large amounts of data on all kinds of platforms. As users demand more from their applications, a personalized speech recognition system is no longer an impractical idea. The rise of social networks such as Facebook has made collecting personal corpora much easier; by gathering the posts a user writes on social networks, we aim to better capture the user's wording habits and topics of interest, and to personalize the language model so as to improve that user's recognition accuracy. This thesis studies two approaches to language model personalization: personalization based on model adaptation, and personalization of a universal language model based on user features. The former requires training a language model for each individual user, which may overfit the user's corpus because of data sparsity. The latter not only alleviates the data sparsity problem but also lets all users share a single model through feature extraction, reducing training time and resources.
Furthermore, recurrent neural network language models (RNNLMs) still lack a good initialization method. We use non-negative matrix factorization (NMF) and latent topic models, hoping to give the neural network a good initialization from knowledge learned beforehand (a brief illustrative sketch of the NMF initialization step is given after the metadata fields below). Moreover, current speech recognition systems decode with only N-gram language models, because using a neural network would greatly increase recognition time. We explore a method that converts an RNNLM into an N-gram language model, so that the recognizer can benefit from the probability distributions produced by the neural network while, thanks to the N-gram form, recognition time does not increase. | zh_TW |
dc.description.provenance | Made available in DSpace on 2021-06-15T13:23:44Z (GMT). No. of bitstreams: 1 ntu-104-R02942037-1.pdf: 4973684 bytes, checksum: 3255759985977bd1d184c9dff191a17e (MD5) Previous issue date: 2015 | en |
dc.description.tableofcontents | Acknowledgements
Abstract (in Chinese)
1 Introduction
1.1 Background
1.2 Motivation
1.3 Main Approaches and Results of This Thesis
1.4 Thesis Organization
2 Background
2.1 Personalized Speech Recognition Systems
2.2 Acoustic Model Adaptation
2.3 N-gram Language Models
2.4 Neural Network Language Models
2.5 Recurrent Neural Network Language Models
2.6 Language Model Evaluation
2.7 Language Model Adaptation
3 Crowdsourcing over Social Networks
3.1 The Concept of Crowdsourcing
3.2 A Social Network Browser with a Speech Interface
3.2.1 System Features
3.2.2 Facebook Authentication
3.2.3 System Ecosystem
4 Personalized Language Models Based on Model Adaptation
4.1 Personalized Language Models Based on Model Adaptation
4.1.1 Model Structure
4.1.2 Optimization Algorithm: Backpropagation Through Time (BPTT)
4.1.3 Context-Dependent Recurrent Neural Networks with Auxiliary Word Features
4.1.4 Three-Step Adaptation Scheme
4.1.5 User-Oriented Word Features
4.2 Evaluation of Personalized Recurrent Neural Network Language Models
4.2.1 Experimental Setup
4.2.2 Perplexity Results and Analysis
4.2.3 N-Best Rescoring Results
4.3 Influence of Different Weight Matrices on Personalized Language Models
4.4 Evaluation of the Influence of Weight Matrices on Personalized Language Models
4.4.1 Experimental Setup
5 Personalizing a Universal Language Model Based on User Features
5.1 Personalizing a Universal Language Model Based on User Features
5.1.1 Model Structure and Training Algorithm
5.1.2 Feature Extraction with Latent Topic Models
5.1.3 Training the Universal Personalized Language Model
5.1.4 Personalization Effects Based on User Features
5.2 Evaluation of Personalizing a Universal Language Model Based on User Features
5.2.1 Experimental Setup
5.2.2 Results on the Choice of User Feature Extraction
5.2.3 Perplexity and Rescoring Recognition Results
5.2.4 Results on Personal Corpus Sparsity
6 Related Studies on Recurrent Neural Network Language Models
6.1 Initialization of Recurrent Neural Network Language Models
6.1.1 Initialization with Non-negative Matrix Factorization
6.1.2 Initialization with Latent Topic Models
6.2 Evaluation of Recurrent Neural Network Language Model Initialization
6.3 Converting Recurrent Neural Network Language Models to N-gram Language Models
6.4 Evaluation of Converting Recurrent Neural Network Language Models to N-gram Language Models
6.4.1 Effect of the Number of Sampled Histories
6.4.2 Recurrent Neural Network Model Selection and Different Linear Interpolation Weights
6.4.3 Recognition Accuracy after Model Conversion under Different Interpolation Weights
7 Conclusion and Future Work
7.1 Language Model Personalization
7.2 Related Studies on Recurrent Neural Network Language Models
7.2.1 Initialization of Recurrent Neural Network Language Models
7.2.2 Applying Recurrent Neural Network Language Models Directly in the Speech Decoder
References | |
dc.language.iso | zh-TW | |
dc.title | 基於使用者特徵將通用語言模型個人化及相關研究 | zh_TW |
dc.title | Using User Feature to Personalize A Universal Language Model and Related Studies | en
dc.type | Thesis | |
dc.date.schoolyear | 104-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 李宏毅,陳信宏,王小川,鄭秋豫,簡仁宗 | |
dc.subject.keyword | 語音辨識,語言模型,個人化, | zh_TW |
dc.subject.keyword | Speech Recognition,ASR,language model,personalization, | en |
dc.relation.page | 88 | |
dc.identifier.doi | 10.6342/NTU201600456 | |
dc.rights.note | Paid authorization | |
dc.date.accepted | 2016-06-24 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電信工程學研究所 | zh_TW |
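The abstract above mentions initializing the recurrent neural network language model (RNNLM) with non-negative matrix factorization (NMF) and latent topic models. The following is a rough, non-authoritative sketch of the NMF part of that idea only: the word-by-document matrix, the scikit-learn API choice, and the helper name `nmf_word_vectors` are assumptions for illustration, not the implementation used in the thesis.

```python
# Illustrative sketch only (not the thesis code): derive word vectors from an
# NMF factorization of a word-by-document count matrix, to be used as an
# initialization for an RNNLM's input (word-to-hidden) weights.
import numpy as np
from sklearn.decomposition import NMF

def nmf_word_vectors(count_matrix: np.ndarray, dim: int = 64, seed: int = 0) -> np.ndarray:
    """Factorize V (|vocab| x |docs|) as V ~ W @ H; the rows of W are
    non-negative word representations of dimension `dim`."""
    model = NMF(n_components=dim, init="nndsvd", max_iter=300, random_state=seed)
    W = model.fit_transform(count_matrix)  # shape: (|vocab|, dim)
    return W

if __name__ == "__main__":
    # Random Poisson counts stand in for real word-document statistics.
    rng = np.random.default_rng(0)
    V = rng.poisson(1.0, size=(5000, 200)).astype(float)  # 5000 words, 200 documents
    init_embeddings = nmf_word_vectors(V)
    # These rows could replace the usual small random initialization of the
    # RNNLM input weight matrix, one row per vocabulary word.
    print(init_embeddings.shape)  # (5000, 64)
```

Chapter 6 of the thesis describes which matrix is actually factorized and how the resulting factors enter the network; the snippet above only illustrates the general preprocessing step.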
Appears in Collections: | 電信工程學研究所
Files in This Item:
File | Size | Format |
---|---|---|---|
ntu-104-1.pdf (currently not authorized for public access) | 4.86 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.