Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/63049
Full metadata record
dc.contributor.advisor: 李宏毅 (Hung-Yi Lee)
dc.contributor.author (en): Tsung-Yuan Hsu
dc.contributor.author (zh_TW): 許宗嫄
dc.date.accessioned: 2021-06-16T16:20:18Z
dc.date.available: 2021-07-15
dc.date.copyright: 2020-07-15
dc.date.issued: 2020
dc.date.submitted: 2020-05-19
dc.identifier.citation:
[1] Jiatao Gu, Yong Wang, Yun Chen, Victor O. K. Li, and Kyunghyun Cho, "Meta-learning for low-resource neural machine translation," in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, Oct.-Nov. 2018, pp. 3622–3631, Association for Computational Linguistics.
[2] Chia-Hsuan Lee and Hung-yi Lee, "Cross-lingual transfer learning for question answering," CoRR, vol. abs/1907.06042, 2019.
[3] Jeffrey Pennington, Richard Socher, and Christopher D. Manning, "GloVe: Global vectors for word representation," in Empirical Methods in Natural Language Processing (EMNLP), 2014, pp. 1532–1543.
[4] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean, "Distributed representations of words and phrases and their compositionality," in Advances in Neural Information Processing Systems 26, C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, Eds., pp. 3111–3119. Curran Associates, Inc., 2013.
[5] Chunting Zhou, Xuezhe Ma, Di Wang, and Graham Neubig, "Density matching for bilingual word embedding," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, June 2019, pp. 1588–1598, Association for Computational Linguistics.
[6] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds., pp. 5998–6008. Curran Associates, Inc., 2017.
[7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, June 2019, pp. 4171–4186, Association for Computational Linguistics.
[8] Adams Wei Yu, David Dohan, Quoc Le, Thang Luong, Rui Zhao, and Kai Chen, "Fast and accurate reading comprehension by combining self-attention and convolution," in International Conference on Learning Representations, 2018.
[9] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang, "SQuAD: 100,000+ questions for machine comprehension of text," in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, Nov. 2016, pp. 2383–2392, Association for Computational Linguistics.
[10] Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi, "SWAG: A large-scale adversarial dataset for grounded commonsense inference," in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, Oct.-Nov. 2018, pp. 93–104, Association for Computational Linguistics.
[11] Siva Reddy, Danqi Chen, and Christopher D. Manning, "CoQA: A conversational question answering challenge," Transactions of the Association for Computational Linguistics, vol. 7, pp. 249–266, Mar. 2019.
[12] Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng, "MS MARCO: A human generated machine reading comprehension dataset," CoRR, vol. abs/1611.09268, 2016.
[13] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, "Backpropagation applied to handwritten zip code recognition," Neural Computation, vol. 1, pp. 541–551, 1989.
[14] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, "Neural machine translation by jointly learning to align and translate," arXiv:1409.0473, 2014. Accepted at ICLR 2015 as oral presentation.
[15] Chelsea Finn, Pieter Abbeel, and Sergey Levine, "Model-agnostic meta-learning for fast adaptation of deep networks," in Proceedings of the 34th International Conference on Machine Learning, Doina Precup and Yee Whye Teh, Eds., International Convention Centre, Sydney, Australia, 06–11 Aug 2017, vol. 70 of Proceedings of Machine Learning Research, pp. 1126–1135, PMLR.
[16] Alex Nichol, Joshua Achiam, and John Schulman, "On first-order meta-learning algorithms," CoRR, vol. abs/1803.02999, 2018.
[17] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, Eds., pp. 2672–2680. Curran Associates, Inc., 2014.
[18] Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou, "Word translation without parallel data," in International Conference on Learning Representations, 2018.
[19] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever, "Language models are unsupervised multitask learners," 2019.
[20] Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng, "MS MARCO: A human generated machine reading comprehension dataset," CoRR, vol. abs/1611.09268, 2016.
[21] Chih-Chieh Shao, Trois Liu, Yuting Lai, Yiying Tseng, and Sam Tsai, "DRCD: A Chinese machine reading comprehension dataset," CoRR, vol. abs/1806.00920, 2018.
[22] Dirk Weissenborn, Georg Wiese, and Laura Seiffe, "FastQA: A simple and efficient neural architecture for question answering," CoRR, vol. abs/1703.04816, 2017.
[23] Shauli Ravfogel, Yoav Goldberg, and Tal Linzen, "Studying the inductive biases of RNNs with synthetic variations of natural languages," CoRR, vol. abs/1903.06400, 2019.
[24] Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov, "XNLI: Evaluating cross-lingual sentence representations," in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018, Association for Computational Linguistics.
[25] Mikel Artetxe, Sebastian Ruder, and Dani Yogatama, "On the cross-lingual transferability of monolingual representations," CoRR, vol. abs/1910.11856, 2019.
[26] Aniruddh Raghu, Maithra Raghu, Samy Bengio, and Oriol Vinyals, "Rapid learning or feature reuse? Towards understanding the effectiveness of MAML," in International Conference on Learning Representations, 2020.
[27] Anders Søgaard, Sebastian Ruder, and Ivan Vulić, "On the limitations of unsupervised bilingual dictionary induction," in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, July 2018, pp. 778–788, Association for Computational Linguistics.
[28] Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, and Graham Neubig, "Bilingual lexicon induction with semi-supervision in non-isometric embedding spaces," in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July 2019, pp. 184–193, Association for Computational Linguistics.
[29] Mozhi Zhang, Keyulu Xu, Ken-ichi Kawarabayashi, Stefanie Jegelka, and Jordan Boyd-Graber, "Are girls neko or shōjo? Cross-lingual alignment of non-isomorphic embeddings with iterative normalization," in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July 2019, pp. 3180–3189, Association for Computational Linguistics.
[30] David Mareček and Rudolf Rosa, "Extracting syntactic trees from transformer encoder self-attentions," in Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Brussels, Belgium, Nov. 2018, pp. 347–349, Association for Computational Linguistics.
[31] Philipp Dufter and Hinrich Schütze, "Identifying necessary elements for BERT's multilinguality," 2020.
[32] Ivan Vulić, Sebastian Ruder, and Anders Søgaard, "Are all good word vector spaces isomorphic?," 2020.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/63049
dc.description.abstract (zh_TW): Building question answering (QA) systems with deep learning models is the mainstream approach today. However, across QA tasks that differ in language, format, and topic, the lack of sufficient annotated data in the target domain is a problem commonly faced when developing a QA system in practice, and transfer learning addresses this shortage by exploiting annotated data from other domains. This thesis focuses on cross-lingual transfer learning and investigates how a large QA dataset in a single language can be used to improve QA performance in other languages.
For cross-lingual transfer learning on QA, previously proposed methods rely heavily on a mature machine translation system to translate the QA data, yet training such a translation system itself requires large amounts of annotated data. This thesis therefore takes a different path: a multi-lingual language representation model obtained through self-supervised learning gives the QA model the ability to understand different languages, and a cross-lingual QA model that can answer questions in Chinese is successfully trained using an English QA dataset alone.
This thesis further designs artificial QA datasets with different properties to investigate why this approach succeeds in transferring QA skills across languages. The experiments show that the multi-lingual language representation model represents words with similar meanings in different languages with similar vectors, a phenomenon referred to here as "language-agnostic" representation. This phenomenon may be caused by the small amount of code-switching data in the representation model's pre-training corpus, and its relationship with cross-lingual transfer performance on downstream tasks is examined.
dc.description.abstract (en): Deep-learning-based question answering systems are very powerful but also highly data-dependent. Broader application of an existing system is often hindered by mismatches of data domains, such as languages and topics, while collecting and annotating sufficient data is impractical and costly. Many studies are therefore dedicated to transfer learning, which aims to improve models with limited target-domain resources by utilizing annotated data from other domains.
This thesis focuses on the task of cross-lingual transfer learning and explores how a large, well-curated monolingual question answering dataset can be used to improve performance in other languages. For cross-lingual transfer learning on question answering, most existing methods depend on an additional machine translation system to bridge the gap between languages, which makes the translation quality of that system decisive for the performance of the question answering model. In other words, a well-trained machine translation system is required, a prerequisite that is hard to satisfy for low-resource languages.
This study therefore takes a different approach. By incorporating a language representation model pretrained on non-parallel multilingual data in a self-supervised way, we equip the question answering model with the ability to understand different languages, and then use English question answering data alone to train question answering skills that apply across languages. The result is a cross-lingual question answering model that can perform Chinese question answering.
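The recipe can be summarized by the following minimal sketch. It assumes the Hugging Face transformers library, with bert-base-multilingual-cased standing in for the multilingual representation model, and toy question/context strings in place of the actual English training set and Chinese evaluation set; it illustrates the general approach rather than the exact configuration used in the thesis.

```python
# Minimal zero-shot cross-lingual QA sketch: fine-tune a multilingual encoder on
# English extractive QA, then query it in Chinese without any Chinese training data.
# Assumes: pip install torch transformers; the model and data below are illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

MODEL = "bert-base-multilingual-cased"  # stand-in for the multilingual representation model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL)  # encoder + span-prediction head

def train_step(question, context, answer_start_char, answer_text, optimizer):
    """One fine-tuning step on an English (question, context, answer-span) triple."""
    enc = tokenizer(question, context, return_offsets_mapping=True,
                    truncation=True, max_length=384, return_tensors="pt")
    offsets = enc.pop("offset_mapping")[0].tolist()
    seq_ids = enc.sequence_ids(0)  # None = special token, 0 = question, 1 = context
    end_char = answer_start_char + len(answer_text)
    # Map the character-level answer span onto context token positions.
    start_tok = next(i for i, (s, e) in enumerate(offsets)
                     if seq_ids[i] == 1 and s <= answer_start_char < e)
    end_tok = next(i for i, (s, e) in enumerate(offsets)
                   if seq_ids[i] == 1 and s < end_char <= e)
    out = model(**enc,
                start_positions=torch.tensor([start_tok]),
                end_positions=torch.tensor([end_tok]))
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

def answer(question, context):
    """Inference: the same model can be queried in Chinese although it was fine-tuned on English."""
    enc = tokenizer(question, context, truncation=True, max_length=384, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    start = out.start_logits.argmax(-1).item()
    end = out.end_logits.argmax(-1).item()
    return tokenizer.decode(enc["input_ids"][0][start:end + 1])

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
# In practice this step is repeated over a full English QA dataset (e.g. SQuAD-style triples).
train_step("Where is NTU located?",
           "National Taiwan University is located in Taipei.", 41, "Taipei", optimizer)
# Zero-shot Chinese query after English-only fine-tuning.
print(answer("台大位於哪個城市?", "國立臺灣大學位於台北市大安區。"))
```

After many such steps over an English dataset, the same unmodified model is evaluated directly on Chinese questions, which is the zero-shot behaviour examined in the experiments described above.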
The study further evaluates this method on several artificial datasets with designed linguistic features, with the intention of uncovering the mechanisms behind the model's cross-lingual ability. The experimental results show that the pretrained multilingual language representation model represents words with similar meanings in different languages with similar vectors, which can be interpreted as cross-lingual alignment. This phenomenon may be caused by the code-switching data present in the pre-training corpus of the language representation model. The study also quantifies cross-lingual alignment and examines its correlation with the performance of cross-lingual transfer on downstream tasks.
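One way such cross-lingual alignment could be quantified is sketched below. It assumes the same Hugging Face transformers setup and a tiny illustrative English-Chinese word list; the cosine-similarity and mean-reciprocal-rank computations indicate the kind of measurement described, not the thesis's exact metric or vocabulary.

```python
# Sketch: probe the "language-agnostic" behaviour of a multilingual encoder by checking
# whether an English word's vector is close to, and retrieves, its Chinese translation.
# Assumes: pip install torch transformers; the word list below is purely illustrative.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL = "bert-base-multilingual-cased"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL).eval()

def embed(word: str) -> torch.Tensor:
    """Mean-pool the last hidden states of the word's subword tokens (no special tokens)."""
    enc = tokenizer(word, return_tensors="pt", add_special_tokens=False)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (num_subwords, hidden_dim)
    return hidden.mean(dim=0)

# Tiny illustrative bilingual lexicon (English -> Chinese translation pairs).
pairs = [("dog", "狗"), ("water", "水"), ("school", "學校"), ("music", "音樂")]
en_vecs = torch.stack([embed(en) for en, _ in pairs])
zh_vecs = torch.stack([embed(zh) for _, zh in pairs])

# Cosine similarity of every English word against every Chinese candidate.
sims = torch.nn.functional.cosine_similarity(
    en_vecs.unsqueeze(1), zh_vecs.unsqueeze(0), dim=-1)  # (num_en, num_zh)

# Mean reciprocal rank: where does the correct translation rank among all Chinese candidates?
order = sims.argsort(dim=1, descending=True)          # candidate indices, best first
gold = torch.arange(len(pairs)).unsqueeze(1)          # correct candidate index for each row
ranks = (order == gold).nonzero()[:, 1] + 1           # 1-based rank of the gold translation
mrr = (1.0 / ranks.float()).mean().item()
print(f"mean cosine of aligned pairs: {sims.diag().mean().item():.3f}   MRR: {mrr:.3f}")
```

Averaging such scores over a large bilingual vocabulary yields the kind of alignment measure whose correlation with downstream cross-lingual transfer performance is examined in the study.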
dc.description.provenance (en): Made available in DSpace on 2021-06-16T16:20:18Z (GMT). No. of bitstreams: 1. ntu-109-R06942075-1.pdf: 5192546 bytes, checksum: 154cd5e1391fc053f722d9513643ea4e (MD5). Previous issue date: 2020.
dc.description.tableofcontents (zh_TW):
Acknowledgements
Chinese Abstract
Chapter 1: Introduction
1.1 Motivation
1.2 Research Direction
1.3 Thesis Organization
Chapter 2: Machine Learning Background
2.1 Fundamental Concepts of Machine Learning
2.1.1 What Is Machine Learning
2.1.2 Formulation of a Machine Learning Problem
2.2 Deep Learning Fundamentals
2.2.1 Multi-layer Perceptrons
2.2.2 Convolutional Neural Networks
2.2.3 Self-attention Networks
2.3 Meta-Learning
2.3.1 Initialization-based Meta-Learning
2.3.2 Meta-Learning for Cross-lingual Transfer Learning
2.4 Generative Adversarial Networks
2.4.1 Generative Adversarial Networks for Cross-lingual Transfer Learning
2.5 Chapter Summary
Chapter 3: Natural Language Processing Background
3.1 Pre-trained Language Representation Models
3.1.1 Language Models
3.1.2 Neural Language Models and Self-supervised Learning
3.1.3 Word Embeddings
3.1.4 Large Pre-trained Language Models
3.1.5 Transformer-Encoder-based Language Models
3.2 Question Answering Systems
3.2.1 Background: Text-based Question Answering
3.2.2 End-to-end Deep Learning Models for Question Answering
3.2.3 Cross-lingual Question Answering Systems
3.3 Chapter Summary
Chapter 4: Cross-lingual Question Answering
4.1 Background
4.1.1 Cross-lingual and Multi-lingual Question Answering Systems
4.1.2 Cross-lingual Transfer Learning for Question Answering
4.1.3 Cross-lingual Question Answering Tasks
4.2 Large Pre-trained Language Representation Models for Cross-lingual Transfer Learning
4.2.1 Datasets
4.2.2 Experimental Setup
4.2.3 Baseline Models
4.2.4 Model Architecture
4.2.5 Training Procedure
4.2.6 Experimental Results
4.3 Machine-Translation-assisted Large Pre-trained Language Models for Cross-lingual Transfer Learning
4.3.1 Datasets
4.3.2 Model Architecture and Training Procedure
4.3.3 Experimental Results
4.4 Probing Cross-lingual Transfer Ability with Unseen-Language Datasets
4.4.1 Datasets
4.4.2 Model Architecture and Training Procedure
4.4.3 Experimental Results
4.5 Probing Cross-lingual Transfer Ability with Code-switching Datasets
4.5.1 Datasets
4.5.2 Model Architecture and Training Procedure
4.5.3 Experimental Results
4.6 Probing Cross-lingual Transfer Ability with Typology-manipulated Datasets
4.6.1 Datasets
4.6.2 Model Architecture and Training Procedure
4.6.3 Experimental Results
4.7 Chapter Summary
Chapter 5: Cross-lingual Transfer Learning
5.1 Introduction
5.2 Metrics of Language Agnosticism
5.2.1 Cosine Similarity
5.2.2 Mean Reciprocal Rank
5.3 Relationship between Code-switching Data and Language Agnosticism
5.3.1 Experimental Design
5.3.2 Experimental Results
5.4 Relationship between Language Agnosticism and Cross-lingual Transfer Learning
5.4.1 Experimental Design
5.4.2 Experimental Results
5.5 Chapter Summary
Chapter 6: Meta-Learning and Generative Adversarial Networks for Cross-lingual Transfer Learning
6.1 Introduction
6.2 Meta-Learning-assisted Large Pre-trained Language Representation Models for Cross-lingual Transfer Learning
6.2.1 Experimental Design
6.2.2 Datasets
6.2.3 Model Design and Parameters
6.2.4 Training Procedure
6.2.5 Experimental Results
6.3 GAN-assisted Large Pre-trained Language Representation Models for Cross-lingual Transfer Learning
6.3.1 Datasets
6.3.2 Model Design and Parameters
6.3.3 Experimental Results
6.4 Chapter Summary
Chapter 7: Conclusion and Future Work
7.1 Contributions and Discussion
7.1.1 Large Pre-trained Multi-lingual Language Representation Models for Cross-lingual Transfer Learning of Question Answering Models
7.1.2 Investigating the Cross-lingual Transfer Ability of Large Pre-trained Multi-lingual Language Representation Models
7.1.3 Investigating the Benefit of Meta-Learning for Large Pre-trained Language Models
7.1.4 Investigating the Benefit of Generative Adversarial Networks for Large Pre-trained Language Models
7.2 Future Work
7.2.1 Other Metrics of Cross-lingual Clustering of Synonyms
7.2.2 Deeper Understanding of the Language-agnostic Phenomenon
7.2.3 The Trade-off between Fitting and Generalization in Transfer Learning
7.2.4 The Effect of Data Size and Model Expressiveness
References
dc.language.iso: zh-TW
dc.subject (zh_TW): 問答系統 (question answering systems)
dc.subject (zh_TW): 自然語言理解 (natural language understanding)
dc.subject (zh_TW): 自然語言處理 (natural language processing)
dc.subject (en): QA
dc.subject (en): NLP
dc.subject (en): NLU
dc.title (zh_TW): 使用多語言語言表示模型進行跨語言遷移學習之問答系統
dc.title (en): Cross-lingual Transfer Learning with Multi-lingual Language Representation Model on Question Answering
dc.type: Thesis
dc.date.schoolyear: 108-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 李琳山 (Lin-Shan Lee), 陳縕儂 (Yun-Nung Chen), 蔡宗翰 (Tsung-Han Tsai), 曹昱 (Yu Tsao)
dc.subject.keyword (zh_TW): 自然語言處理, 自然語言理解, 問答系統 (natural language processing, natural language understanding, question answering systems)
dc.subject.keyword (en): NLP, NLU, QA
dc.relation.page: 105
dc.identifier.doi: 10.6342/NTU202000827
dc.rights.note: 有償授權 (authorized with compensation)
dc.date.accepted: 2020-05-20
dc.contributor.author-college (zh_TW): 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept (zh_TW): 電信工程學研究所 (Graduate Institute of Communication Engineering)
Appears in collections: 電信工程學研究所 (Graduate Institute of Communication Engineering)

Files in this item:
File: ntu-109-1.pdf — 5.07 MB, Adobe PDF — access restricted (not authorized for public access)

