Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71499

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 陳信希(Hsin-Hsi Chen) | |
| dc.contributor.author | Yang-Yin Lee | en |
| dc.contributor.author | 李昂穎 | zh_TW |
| dc.date.accessioned | 2021-06-17T06:01:55Z | - |
| dc.date.available | 2021-02-12 | |
| dc.date.copyright | 2019-02-12 | |
| dc.date.issued | 2019 | |
| dc.date.submitted | 2019-01-31 | |
| dc.identifier.citation | Agirre, E., Alfonseca, E., Hall, K., Kravalova, J., Paşca, M., & Soroa, A. (2009). A study on similarity and relatedness using distributional and wordnet-based approaches. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics (pp. 19–27). Association for Computational Linguistics.
Artetxe, M., Labaka, G., & Agirre, E. (2016). Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (pp. 2289–2294). Azzini, A., da Costa Pereira, C., Dragoni, M., & Tettamanzi, A. G. (2012). A neuro-evolutionary corpus-based method for word sense disambiguation. IEEE Intelligent Systems, 27(6), 26–35. Banjade, R., Maharjan, N., Niraula, N. B., Rus, V., & Gautam, D. (2015). Lemon and tea are not similar: Measuring word-to-word similarity by combining different methods. In International Conference on Intelligent Text Processing and Computational Linguistics (pp. 335–346). Springer. Bengio, Y., Delalleau, O., & Le Roux, N. (2006). Label Propagation and Quadratic Criterion. In O. Chapelle, B. Schölkopf, & A. Zien (Eds.), Semi-Supervised Learning (pp. 193–216). MIT Press. Bian, J., Gao, B., & Liu, T.-Y. (2014). Knowledge-powered deep learning for word embedding. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 132–148). Springer. Bojanowski, P., Grave, E., Joulin, A., & Mikolov, T. (2017). Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5, 135–146. Bollacker, K., Evans, C., Paritosh, P., Sturge, T., & Taylor, J. (2008). Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data (pp. 1247–1250). ACM. Bollegala, D., Mohammed, A., Maehara, T., & Kawarabayashi, K. (2016). Joint Word Representation Learning Using a Corpus and a Semantic Lexicon. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (pp. 2690–2696). Phoenix, Arizona: AAAI Press. Bruni, E., Tran, N.-K., & Baroni, M. (2014). Multimodal distributional semantics. The Journal of Artificial Intelligence Research, 49, 1–47. 
Budanitsky, A., & Hirst, G. (2006). Evaluating wordnet-based measures of lexical semantic relatedness. Computational Linguistics, 32(1), 13–47. Bullinaria, J. A., & Levy, J. P. (2007). Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior Research Methods, 39(3), 510–526. Camacho-Collados, J., Pilehvar, M. T., & Navigli, R. (2015). NASARI: A novel approach to a semantically-aware representation of items. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 567–577). Denver, Colorado: Association for Computational Linguistics. Chang, K.-W., Yih, W., & Meek, C. (2013). Multi-relational latent semantic analysis. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (pp. 1602–1612). Association for Computational Linguistics. Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. ArXiv Preprint ArXiv:1406.1078. Cohen, P. R., & Kjeldsen, R. (1987). Information retrieval by constrained spreading activation in semantic networks. Information Processing & Management, 23(4), 255–268. https://doi.org/10.1016/0306-4573(87)90017-3 Deerwester, S. C., Dumais, S. T., & Harshman, R. A. (1990). Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6), 391. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv Preprint ArXiv:1810.04805. Dragoni, M., & Petrucci, G. (2017). A neural word embeddings approach for multi-domain sentiment analysis. IEEE Transactions on Affective Computing, 8(4), 457–470. Ettinger, A., Resnik, P., & Carpuat, M. (2016). Retrofitting sense-specific word vectors using parallel text. 
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 1378–1383). Faruqui, M., Dodge, J., Jauhar, S. K., Dyer, C., Hovy, E., & Smith, N. A. (2015). Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 1606–1615). Fellbaum, C. (1998). WordNet. Wiley Online Library. Finkelstein, L., Gabrilovich, E., Matias, Y., Rivlin, E., Solan, Z., Wolfman, G., & Ruppin, E. (2001). Placing search in context: The concept revisited. In Proceedings of the 10th international conference on World Wide Web (pp. 406–414). ACM. Ganitkevitch, J., Van Durme, B., & Callison-Burch, C. (2013). PPDB: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 758–764). Goikoetxea, J., Agirre, E., & Soroa, A. (2016). Single or multiple? combining word representations independently learned from text and WordNet. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (pp. 2608–2614). AAAI Press. Hill, F., Reichart, R., & Korhonen, A. (2015). Simlex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4). Huang, E. H., Socher, R., Manning, C. D., & Ng, A. Y. (2012). Improving word representations via global context and multiple word prototypes. In Annual Meeting of the Association for Computational Linguistics (ACL). Iacobacci, I., Pilehvar, M. T., & Navigli, R. (2015). SensEmbed: learning sense embeddings for word and relational similarity. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) (pp. 95–105). 
Beijing, China: Association for Computational Linguistics. Jarmasz, M., & Szpakowicz, S. (2004). Roget’s thesaurus and semantic similarity. Recent Advances in Natural Language Processing III: Selected Papers from RANLP, 2003, 111. Jauhar, S. K., Dyer, C., & Hovy, E. (2015). Ontologically grounded multi-sense representation learning for semantic vector space models. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 683–693). Jin, P., & Wu, Y. (2012). Semeval-2012 task 4: evaluating chinese word similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (Vol. 1, pp. 374–377). Association for Computational Linguistics. Joulin, A., Bojanowski, P., Mikolov, T., Jégou, H., & Grave, E. (2018). Loss in Translation: Learning Bilingual Word Mapping with a Retrieval Criterion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (pp. 2979–2984). Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2017). Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers (Vol. 2, pp. 427–431). Kim, Y. (2014). Convolutional neural networks for sentence classification. ArXiv Preprint ArXiv:1408.5882. Kipfer, B. A., & Princeton Language Institute. (1993). Roget’s 21st century thesaurus in dictionary form: The essential reference for home, school, or office. Dell Pub. Krebs, A., & Paperno, D. (2016). Capturing discriminative attributes in a distributional space: Task proposal. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP (pp. 51–54). Landauer, T. K., & Dumais, S. T. (1997). 
A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2), 211. Leacock, C., & Chodorow, M. (1998). Combining local context and WordNet similarity for word sense identification. WordNet: An Electronic Lexical Database, 49(2), 265–283. Lebret, R., & Collobert, R. (2014). Word embeddings through Hellinger PCA. In 14th Conference of the European Chapter of the Association for Computational Linguistics. Lee, G.-H., & Chen, Y.-N. (2017). MUSE: Modularizing unsupervised sense embeddings. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 327–337). Copenhagen, Denmark: Association for Computational Linguistics. Lee, Y.-Y., Ke, H., Huang, H.-H., & Chen, H.-H. (2016a). Combining word embedding and lexical database for semantic relatedness measurement. In Proceedings of the 25th International Conference Companion on World Wide Web (pp. 73–74). International World Wide Web Conferences Steering Committee. Lee, Y.-Y., Ke, H., Huang, H.-H., & Chen, H.-H. (2016b). Less is more: filtering abnormal dimensions in GloVe. In Proceedings of the 25th International Conference Companion on World Wide Web (pp. 71–72). International World Wide Web Conferences Steering Committee. Lee, Y.-Y., Yen, T.-Y., Huang, H.-H., & Chen, H.-H. (2017). Structural-fitting Word Vectors to Linguistic Ontology for Semantic Relatedness Measurement. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (pp. 2151–2154). ACM. Lee, Y.-Y., Yen, T.-Y., Huang, H.-H., Shiue, Y.-T., & Chen, H.-H. (2018). GenSense: A Generalized Sense Retrofitting Model. In Proceedings of the 27th International Conference on Computational Linguistics (pp. 1662–1671). Li, J., & Jurafsky, D. (2015). Do multi-sense embeddings improve natural language understanding? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (pp. 1722–1732). 
Lisbon, Portugal: Association for Computational Linguistics. Lin, D. (1998). An information-theoretic definition of similarity. In Proceedings of the Fifteenth International Conference on Machine Learning (Vol. 98, pp. 296–304). San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. Lin, J. (1983). Tongyici cilin. Shanghai cishu. Liu, X., Nie, J.-Y., & Sordoni, A. (2016). Constraining word embeddings by prior knowledge–application to medical information retrieval. In Asia Information Retrieval Symposium (pp. 155–167). Springer. Luong, T., Socher, R., & Manning, C. D. (2013). Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning (pp. 104–113). Maneewongvatana, S., & Mount, D. M. (1999). It’s okay to be skinny, if your friends are fat. In Center for Geometric Computing 4th Annual Workshop on Computational Geometry (Vol. 2, pp. 1–8). Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. ArXiv Preprint ArXiv:1301.3781. Mikolov, T., Le, Q. V., & Sutskever, I. (2013). Exploiting similarities among languages for machine translation. ArXiv Preprint ArXiv:1309.4168. Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (pp. 3111–3119). Miller, G. A. (1995). WordNet: a lexical database for English. Communications of the ACM, 38(11), 39–41. https://doi.org/10.1145/219717.219748 Mrkšić, N., Séaghdha, D. Ó., Thomson, B., Gašić, M., Rojas-Barahona, L., Su, P.-H., … Young, S. (2016). Counter-fitting word vectors to linguistic constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 142–148). Association for Computational Linguistics. 
Pavlick, E., Rastogi, P., Ganitkevitch, J., Van Durme, B., & Callison-Burch, C. (2015). PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL 2015). Beijing, China: Association for Computational Linguistics. Pennington, J., Socher, R., & Manning, C. D. (2014). GloVe: Global vectors for word representation. Proceedings of the Empirical Methods in Natural Language Processing (EMNLP 2014), 12, 1532–1543. Pilehvar, M. T., Jurgens, D., & Navigli, R. (2013). Align, disambiguate and walk: A unified approach for measuring semantic similarity. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (Vol. 1, pp. 1341–1351). Pirró, G., & Euzenat, J. (2010). A feature and information theoretic framework for semantic similarity and relatedness. In Proceedings of the 9th international semantic web conference on The semantic web-Volume Part I (pp. 615–630). Springer. Pucher, M. (2007). WordNet-based semantic relatedness measures in automatic speech recognition for meetings. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions (pp. 129–132). Association for Computational Linguistics. Qiu, Likun, Zhang, Y., & Lu, Y. (2015). Syntactic dependencies and distributed word representations for analogy detection and mining. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (pp. 2441–2450). Qiu, Lin, Tu, K., & Yu, Y. (2016). Context-dependent sense embedding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (pp. 183–191). Rada, R., & Bicknell, E. (1989). Ranking documents with a thesaurus. Journal of the American Society for Information Science, 40(5), 304. 
Radinsky, K., Agichtein, E., Gabrilovich, E., & Markovitch, S. (2011). A word at a time: computing word relatedness using temporal semantic analysis. In Proceedings of the 20th international conference on World wide web (pp. 337–346). ACM. Razran, G. (1949). Semantic and phonetographic generalizations of salivary conditioning to verbal stimuli. Journal of Experimental Psychology, 39(5), 642. Reisinger, J., & Mooney, R. J. (2010). Multi-prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (pp. 109–117). Association for Computational Linguistics. Resnik, P. (1995). Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of the 14th International Joint Conference on Artificial Intelligence - Volume 1 (pp. 448–453). Morgan Kaufmann Publishers Inc. Rubenstein, H., & Goodenough, J. B. (1965). Contextual correlates of synonymy. Communications of the ACM, 8(10), 627–633. Smith, S. L., Turban, D. H., Hamblin, S., & Hammerla, N. Y. (2017). Offline bilingual word vectors, orthogonal transformations and the inverted softmax. ArXiv Preprint ArXiv:1702.03859. Sun, F., Guo, J., Lan, Y., Xu, J., & Cheng, X. (2016). Inside out: Two jointly predictive models for word representations and phrase representations. In Thirtieth AAAI Conference on Artificial Intelligence. Sussna, M. (1993). Word sense disambiguation for free-text indexing using a massive semantic network. In Proceedings of the second international conference on Information and knowledge management (pp. 67–74). ACM. Thompson-Schill, S. L., Kurtz, K. J., & Gabrieli, J. D. (1998). Effects of semantic and associative relatedness on automatic priming. Journal of Memory and Language, 38(4), 440–458. Turney, P. D. (2001). Mining the web for synonyms: PMI-IR versus LSA on TOEFL. In European Conference on Machine Learning (pp. 491–502). Springer. 
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 5998–6008). Wieting, J., Bansal, M., Gimpel, K., Livescu, K., & Roth, D. (2015). From paraphrase database to compositional paraphrase model and back. Transactions of the Association for Computational Linguistics, 3, 345–358. Wu, Z., & Palmer, M. (1994). Verbs semantics and lexical selection. In Proceedings of the 32nd annual meeting on Association for Computational Linguistics (pp. 133–138). Association for Computational Linguistics. Xing, C., Wang, D., Liu, C., & Lin, Y. (2015). Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 1006–1011). Yang, D., & Powers, D. M. (2006). Verb similarity on the taxonomy of WordNet. In Proceedings of the Third International WordNet Conference. Masaryk University. Yen, T.-Y., Lee, Y.-Y., Huang, H.-H., & Chen, H.-H. (2018). That Makes Sense: Joint Sense Retrofitting from Contextual and Ontological Information. In Companion of the The Web Conference 2018 on The Web Conference 2018 (pp. 15–16). International World Wide Web Conferences Steering Committee. Yih, W., & Qazvinian, V. (2012). Measuring word relatedness using heterogeneous vector space models. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 616–620). Association for Computational Linguistics. Yu, M., & Dredze, M. (2014). Improving lexical embeddings with semantic knowledge. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (Vol. 2, pp. 545–550). | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71499 | - |
| dc.description.abstract | 隨著自然語言處理工作的需求增加,近年來對於較好的詞分散表示法(詞向量)及詞義分散表示法(詞義向量)的需求在增加當中。在本篇研究當中,我們先探討在詞向量中的不正常維度,然後提出結合詞向量與本體論之模型。結合的方法分為三個部分來討論:直接結合方法,支持向量迴歸方法及利用後適配方法。在詞義向量方面,我們首先提出了能夠利用文本及本體論資訊學習更好詞義向量的聯合詞義後適配模型,並且一般化提出來的模型。 | zh_TW |
| dc.description.abstract | With the increasing number of natural language processing tasks, the need for better representations of words (word embeddings) and senses (sense embeddings) has grown in recent years. In this study, we first discuss the problem of abnormal dimensions in word embeddings, and then propose models that combine word embeddings with an ontology. The combination is discussed in three ways: a direct combination approach, a support vector regression approach, and a retrofitting approach. For sense embeddings, we first propose a joint sense retrofitting model that learns better sense embeddings from contextual and ontological information, and then generalize the proposed model. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-17T06:01:55Z (GMT). No. of bitstreams: 1 ntu-108-D02922015-1.pdf: 2834796 bytes, checksum: 40494c21d90b8d84121b68e2cf093b0a (MD5) Previous issue date: 2019 | en |
| dc.description.tableofcontents | Oral Defense Committee Certification # Acknowledgements i Chinese Abstract ii ABSTRACT iii CONTENTS iv LIST OF FIGURES ix LIST OF TABLES xi Chapter 1 Introduction 1 1.1 Research Motivation and Objectives 1 1.2 Organization of this Dissertation 8 Chapter 2 Related Works 10 2.1 Linguistic Resource Based Approaches 11 2.2 Corpus Based Approaches 11 2.3 Hybrid Approaches 13 2.3.1 In-processing 13 2.3.2 Post-processing 13 Chapter 3 Resources 15 3.1 WordNet 15 3.2 The Chinese Synonym Ontology Tongyici Cilin 16 3.3 The Paraphrase Database 17 3.4 Semantic Relatedness Datasets 17 3.4.1 RG65 17 3.4.2 WordSim-353 (WS353) 18 3.4.3 Chinese WordSim (CWS) 18 3.4.4 YP130 18 3.4.5 MEN 18 3.4.6 SimLex-999 (SL999) 19 3.4.7 Rare Words (RW) 19 3.4.8 MTurk 20 3.4.9 Stanford's Contextual Word Similarities (SCWS) Dataset 20 3.4.10 Summary of the Datasets 20 3.5 Semantic Difference Dataset 25 3.6 Synonym Selection Datasets 25 3.6.1 ESL-50 25 3.6.2 RD-300 25 3.6.3 TOEFL-80 25 Chapter 4 The Abnormal Dimensions in Word Embedding 27 4.1 The Problem 27 4.2 Transforming Abnormal Dimensions 30 4.2.1 Dimension Removal 30 4.2.2 Offset Transformation 31 4.2.3 Uniform Transformation 31 4.2.4 Harmonic Series Transformation 32 4.3 Experiments 33 4.3.1 Measures 33 4.3.2 Experimental Setup 34 4.3.3 Experimental Results 34 4.4 Other Word Embedding with Abnormal Dimensions 37 4.5 The Impact of Dimension Removal and Retrofitting 38 4.6 Abnormal Dimensions in Chinese Word Embedding 40 4.7 Summary 42 Chapter 5 Combination of Word Embedding and WordNet 44 5.1 The Model 44 5.2 Experiment 44 5.3 Summary 47 Chapter 6 Support Vector Regression Approach 48 6.1 The Model 48 6.2 Experiment 49 6.2.1 Analysis of the WordNet Features 50 6.3 Summary 52 Chapter 7 Structural-fitting of Word Embedding 53 7.1 Structural-fitting of Word Vectors 55 7.1.1 Fine2coarse Approach 55 7.1.2 Coarse2fine Approach 56 7.2 Experiment 57 7.2.1 Experimental Results 59 7.3 Summary 62 Chapter 8 Joint Sense Retrofitting from Contextual and Ontological Information 63 8.1 Introduction 63 8.2 Joint Sense Retrofitting 64 8.3 Datasets and Experimental Setup 65 8.4 Results and Discussions 68 8.5 Summary 69 Chapter 9 GenSense: A Generalized Sense Retrofitting Model 71 9.1 Introduction 71 9.2 Generalized Sense Retrofitting Model 73 9.2.1 Standardization on the Dimensions 77 9.2.2 Neighbor Expansion from the Nearest Neighbors 80 9.2.3 Combination of Standardization and Neighbor Expansion 82 9.3 Procrustes Analysis 82 9.3.1 Infer from Procrustes Analysis 84 9.4 Experiments 85 9.4.1 Experimental Setup 85 9.4.2 Semantic Relatedness 86 9.4.3 Contextual Word Similarity 86 9.4.4 Semantic Difference 87 9.4.5 Synonym Selection 87 9.5 Results and Discussion 88 9.5.1 Semantic Relatedness 88 9.5.2 Contextual Word Similarity 98 9.5.3 Semantic Difference 99 9.5.4 Synonym Selection 99 9.6 Summary 100 Chapter 10 Conclusion 102 10.1 Word Level 102 10.2 Sense Level 103 10.3 Future Works 104 REFERENCE 106 | |
| dc.language.iso | en | |
| dc.subject | 詞向量 | zh_TW |
| dc.subject | 本體論 | zh_TW |
| dc.subject | 語意關聯度 | zh_TW |
| dc.subject | 詞義向量 | zh_TW |
| dc.subject | word embedding | en |
| dc.subject | sense embedding | en |
| dc.subject | ontology | en |
| dc.subject | semantic relatedness | en |
| dc.title | 利用本體論及後適配技術於產生較佳之詞及詞義分散表示法 | zh_TW |
| dc.title | On Utilization of Ontology and Retrofitting Techniques for Better Distributed Representations of Words and Senses | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 107-1 | |
| dc.description.degree | Ph.D. (博士) | |
| dc.contributor.oralexamcommittee | 鄭卜壬(Pu-Jen Cheng),李宏毅(Hung-yi Lee),吳宗憲(Chung-Hsien Wu),蔡宗翰(Tzong-Han Tsai),陳柏琳(Berlin Chen) | |
| dc.subject.keyword | 詞向量, 詞義向量, 本體論, 語意關聯度 | zh_TW |
| dc.subject.keyword | word embedding, sense embedding, ontology, semantic relatedness | en |
| dc.relation.page | 116 | |
| dc.identifier.doi | 10.6342/NTU201900341 | |
| dc.rights.note | 有償授權 (paid authorization) | |
| dc.date.accepted | 2019-01-31 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
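The retrofitting technique named in the title and abstract (introduced by Faruqui et al., 2015, which is cited in the bibliography above) can be sketched briefly: each word vector is pulled toward the average of its neighbors in a semantic lexicon while staying close to its original distributional position. The sketch below is a minimal illustrative implementation of that general idea under stated assumptions, not code from the dissertation; the function name and the toy lexicon are hypothetical.

```python
import numpy as np

def retrofit(word_vecs, lexicon, n_iters=10, alpha=1.0, beta=1.0):
    """Iteratively nudge each vector toward the mean of its lexicon
    neighbors while staying close to its original position."""
    new_vecs = {w: v.copy() for w, v in word_vecs.items()}
    for _ in range(n_iters):
        for word, neighbors in lexicon.items():
            nbs = [n for n in neighbors if n in new_vecs]
            if word not in word_vecs or not nbs:
                continue
            # Closed-form update of the retrofitting objective:
            # a weighted mean of the original (distributional) vector
            # and the current vectors of the lexicon neighbors.
            total = alpha * word_vecs[word] + beta * sum(new_vecs[n] for n in nbs)
            new_vecs[word] = total / (alpha + beta * len(nbs))
    return new_vecs

# Toy example: a two-word "ontology" linking car and automobile.
vecs = {"car": np.array([1.0, 0.0]), "automobile": np.array([0.0, 1.0])}
lexicon = {"car": ["automobile"], "automobile": ["car"]}
fitted = retrofit(vecs, lexicon)
```

After the update, the two synonyms are closer in cosine terms than in the input embedding, which is the basic effect that the structural-fitting and sense-retrofitting models listed in the table of contents build on in more elaborate forms.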
| Appears in Collections: | Department of Computer Science and Information Engineering | |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-108-1.pdf (restricted access) | 2.77 MB | Adobe PDF |
Except where otherwise noted, all items in the system are protected by copyright, with all rights reserved.
