Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73077
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 鄭卜壬(Pu-Jen Cheng) | |
dc.contributor.author | Wei-Te Chien | en |
dc.contributor.author | 簡瑋德 | zh_TW |
dc.date.accessioned | 2021-06-17T07:16:31Z | - |
dc.date.available | 2019-07-17 | |
dc.date.copyright | 2019-07-17 | |
dc.date.issued | 2019 | |
dc.date.submitted | 2019-07-12 | |
dc.identifier.citation | 1. Lai, Y. Y., Neville, J., & Goldwasser, D. (2019). TransConv: Relationship embedding in social networks.
2. Chen, Y. Y., Hsu, W. H., & Liao, H. Y. M. (2012, October). Discovering informative social subgraphs and predicting pairwise relationships from group photos. In Proceedings of the 20th ACM International Conference on Multimedia (pp. 669-678). ACM.
3. Goyal, P., & Ferrara, E. (2018). Graph embedding techniques, applications, and performance: A survey. Knowledge-Based Systems, 151, 78-94.
4. Perozzi, B., Al-Rfou, R., & Skiena, S. (2014, August). DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 701-710). ACM.
5. Grover, A., & Leskovec, J. (2016, August). node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 855-864). ACM.
6. Chen, H., Perozzi, B., Hu, Y., & Skiena, S. (2018, April). HARP: Hierarchical representation learning for networks. In Thirty-Second AAAI Conference on Artificial Intelligence.
7. Perozzi, B., Kulkarni, V., Chen, H., & Skiena, S. (2017, July). Don't Walk, Skip!: Online learning of multi-scale network embeddings. In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (pp. 258-265). ACM.
8. Wang, D., Cui, P., & Zhu, W. (2016, August). Structural deep network embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1225-1234). ACM.
9. Kipf, T. N., & Welling, M. (2016). Variational graph auto-encoders. arXiv preprint arXiv:1611.07308.
10. Tang, J., Qu, M., Wang, M., Zhang, M., Yan, J., & Mei, Q. (2015, May). LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web (pp. 1067-1077). International World Wide Web Conferences Steering Committee.
11. Mnih, V., Heess, N., & Graves, A. (2014). Recurrent models of visual attention. In Advances in Neural Information Processing Systems (pp. 2204-2212).
12. Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
13. Luong, M. T., Pham, H., & Manning, C. D. (2015). Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025.
14. Rush, A. M., Chopra, S., & Weston, J. (2015). A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685.
15. Allamanis, M., Peng, H., & Sutton, C. (2016, June). A convolutional attention network for extreme summarization of source code. In International Conference on Machine Learning (pp. 2091-2100).
16. Hermann, K. M., Kocisky, T., Grefenstette, E., Espeholt, L., Kay, W., Suleyman, M., & Blunsom, P. (2015). Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (pp. 1693-1701).
17. Yin, W., Ebert, S., & Schütze, H. (2016). Attention-based convolutional neural network for machine comprehension. arXiv preprint arXiv:1602.04341.
18. Kadlec, R., Schmid, M., Bajgar, O., & Kleindienst, J. (2016). Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547.
19. Dhingra, B., Liu, H., Yang, Z., Cohen, W. W., & Salakhutdinov, R. (2016). Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549.
20. Vinyals, O., Kaiser, Ł., Koo, T., Petrov, S., Sutskever, I., & Hinton, G. (2015). Grammar as a foreign language. In Advances in Neural Information Processing Systems (pp. 2773-2781).
21. Yang, Z., Yang, D., Dyer, C., He, X., Smola, A., & Hovy, E. (2016). Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 1480-1489).
22. Yin, W., Schütze, H., Xiang, B., & Zhou, B. (2016). ABCNN: Attention-based convolutional neural network for modeling sentence pairs. Transactions of the Association for Computational Linguistics, 4, 259-272.
23. Wang, L., Cao, Z., De Melo, G., & Liu, Z. (2016). Relation classification via multi-level attention CNNs.
24. Zhou, P., Shi, W., Tian, J., Qi, Z., Li, B., Hao, H., & Xu, B. (2016). Attention-based bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (pp. 207-212).
25. Xiao, J., Ye, H., He, X., Zhang, H., Wu, F., & Chua, T. S. (2017). Attentional factorization machines: Learning the weight of feature interactions via attention networks. arXiv preprint arXiv:1708.04617.
26. Choi, E., Bahadori, M. T., Song, L., Stewart, W. F., & Sun, J. (2017, August). GRAM: Graph-based attention model for healthcare representation learning. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 787-795). ACM.
27. Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems (pp. 3111-3119).
28. Zhang, T., Ramakrishnan, R., & Livny, M. (1996, June). BIRCH: An efficient data clustering method for very large databases. In ACM SIGMOD Record (Vol. 25, No. 2, pp. 103-114). ACM.
29. Celeux, G., & Govaert, G. (1995). Gaussian parsimonious clustering models. Pattern Recognition, 28(5), 781-793.
30. Wang, K., Zhang, J., Li, D., Zhang, X., & Guo, T. (2008). Adaptive affinity propagation clustering. arXiv preprint arXiv:0805.1096.
31. Tran, T. N., Drab, K., & Daszykowski, M. (2013). Revised DBSCAN algorithm to cluster data with dense adjacent clusters. Chemometrics and Intelligent Laboratory Systems, 120, 92-96.
32. Jain, A. K. (2010). Data clustering: 50 years beyond K-means. Pattern Recognition Letters, 31(8), 651-666. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73077 | - |
dc.description.abstract | 社群網路在現今生活當中扮演了相當重要的環節,而「網絡表示學習」或稱「圖嵌入」這類的演算法與模型,更在近年的研究中被廣泛地用來學習並抽取網絡中使用者的「特徵」,並成就了許多令人驚艷的結果。在該問題中,「使用者」被視為「圖論」中的「節點」,而他們之間的「互動」則被視為「邊」。一般來說,社群網絡中的互動可以有相當多種的形式,包含了文字訊息、交友邀請甚至是參與共同社團。在本文中,我們嘗試著手面對的目標相當特別,它是所謂的「電話通訊網絡」。更精確一點來說明的話,這個資料集包含了許多的電話號碼(也就是使用者),這些號碼之間彼此會有一些電話的通聯紀錄,除此之外,對於每一通電話,我們亦可擷取出一些對應的特徵,例如通話時長、來電方向以及來電的時間點。
除了資料集的獨特性之外,本文也致力於闡述「一人多態」的實用性以及必要性。簡言之,我們希望模型能夠根據不一樣的「情境」,給予「同一個」使用者不一樣的「潛在向量」。這個概念與文本中的「一詞多義」很相似,不過在「通訊網絡」中我們能夠更具體、直白地點出「情境」的差異。透過「分群」與「注意力機制」結合「深度學習」的架構,我們準確預測了兩個通話中使用者的身分關聯,並藉此展示該模型的潛力以及實際的應用。 | zh_TW |
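The abstract above notes that each call carries features such as its duration, direction, and the time it was placed. As a minimal sketch of how one such call-log record might be encoded as a numeric feature vector (the real dataset's schema is not described here, so every field name and encoding choice below is an illustrative assumption, not the thesis's actual pipeline):

```python
import math
from dataclasses import dataclass

# Hypothetical layout for one row of a call log; fields are assumptions.
@dataclass
class CallRecord:
    caller: str       # number that placed the call
    callee: str       # number that received the call
    outgoing: bool    # call direction relative to `caller`
    duration: float   # seconds
    hour: int         # hour of day the call started, 0-23

def call_features(rec: CallRecord) -> list[float]:
    """Encode one call as a small numeric feature vector."""
    return [
        math.log1p(rec.duration),               # compress the long-call tail
        1.0 if rec.outgoing else 0.0,           # direction flag
        math.sin(2 * math.pi * rec.hour / 24),  # cyclic hour encoding, so
        math.cos(2 * math.pi * rec.hour / 24),  # 23:00 and 00:00 stay close
    ]

print(call_features(CallRecord("0912000111", "0922333444", True, 95.0, 22)))
```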
dc.description.abstract | Social networks play an important role in our daily lives. Graph embedding mechanisms have been adopted to model the structure of such networks and have achieved remarkable success. Users are treated as nodes, while the interactions between them form the edges of the graph. In general, interactions may take many forms, including text messages and common preferences between users. In this paper, we deal with a special kind of network: a log of phone calls. More precisely, the dataset consists of many users (phone numbers) calling each other, and each call carries some useful features: call time, call duration, etc.
In addition to the distinctness of the data, we want to show that it is realistic to represent a single user differently under different scenarios. The concept is similar to polysemy in word embedding, except that call behaviors carry far richer features than the textual context of a document. By showing how clustering and an attention network help predict the pairwise relationship between the two parties of a phone call, we demonstrate the power of attentive representations and their practical applications. | en
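To make the "one user, multiple latent vectors" idea above concrete, here is a minimal sketch, assuming a clustering step has already produced K cluster centers ("scenarios") and that attention is a simple dot-product softmax; all names, shapes, and the combination rule are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

# Minimal sketch: a user's base embedding is combined with K cluster
# centers; softmax attention driven by a call's feature vector decides
# which scenario view dominates for that particular call.

rng = np.random.default_rng(0)
DIM, K = 16, 4                        # embedding size, number of clusters

user_emb = rng.normal(size=DIM)       # base embedding of one phone number
centers = rng.normal(size=(K, DIM))   # centers from a prior clustering step

def attentive_representation(user, centers, call_feat):
    """Return a scenario-specific vector for `user`, re-weighted by the call."""
    scores = centers @ call_feat              # (K,) center/call compatibility
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over the K clusters
    context = weights @ centers               # (DIM,) attention-weighted center
    return user + context, weights

call_feat = rng.normal(size=DIM)              # stand-in for real call features
rep, w = attentive_representation(user_emb, centers, call_feat)
print("attention over scenarios:", np.round(w, 3))
```

Under a scheme like this, the same phone number receives a different representation for, say, a late-night personal call than for a weekday business call, which is the polysemy analogy the abstract draws.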
dc.description.provenance | Made available in DSpace on 2021-06-17T07:16:31Z (GMT). No. of bitstreams: 1 ntu-108-R07922012-1.pdf: 1324203 bytes, checksum: 08417461795782cd2003edb05a775063 (MD5) Previous issue date: 2019 | en |
dc.description.tableofcontents | Acknowledgements i
Abstract (Chinese) ii
Abstract (English) iii
1. Introduction 1
2. Related Work 4
2.1 Predicting Pairwise Identity Relationships 4
2.2 Graph Embedding and Network Representation Learning 5
2.3 Attention-Based Neural Networks 6
3. Model Architecture 7
3.1 Phone Number (User) Feature Vectors 7
3.2 User Clustering and Cluster Centers 10
3.3 Attention-Based Neural Network 14
3.4 Identity Recognition 16
4. Experimental Results 17
4.1 Dataset Overview and Prediction Results 17
4.2 Case Visualization 20
4.3 Cold Start 22
5. Conclusions 24
6. References 25 | |
dc.language.iso | zh-TW | |
dc.title | 透過注意力機制優化電話號碼的潛在向量用於預測通話中的人物身分關聯 | zh_TW |
dc.title | Attentive Representations for Phone Numbers Predicting Pairwise Relationships in Phone Calls | en |
dc.type | Thesis | |
dc.date.schoolyear | 107-2 | |
dc.description.degree | Master's | |
dc.contributor.oralexamcommittee | 林守德(Shou-De Lin),陳信希(Hsin-Hsi Chen),陳柏琳(Berlin Chen) | |
dc.subject.keyword | 社群網路, 網絡表示學習, 圖嵌入, 分群問題, 注意力機制, 電話號碼特徵向量, 身分關聯預測 | zh_TW
dc.subject.keyword | Social Network, Graph Embedding, Representation Learning, Clustering, Attention Network, Phone Embedding, Relationship Prediction | en
dc.relation.page | 27 | |
dc.identifier.doi | 10.6342/NTU201901267 | |
dc.rights.note | Paid authorization | |
dc.date.accepted | 2019-07-13 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
Appears in Collections: | 資訊工程學系
Files in This Item:
File | Size | Format |
---|---|---|
ntu-108-1.pdf (currently not authorized for public access) | 1.29 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.