Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73123
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 王勝德(Sheng-De Wang) | |
dc.contributor.author | Hsi-Wen Chen | en |
dc.contributor.author | 陳璽文 | zh_TW |
dc.date.accessioned | 2021-06-17T07:18:32Z | - |
dc.date.available | 2020-07-17 | |
dc.date.copyright | 2019-07-17 | |
dc.date.issued | 2019 | |
dc.date.submitted | 2019-07-10 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73123 | - |
dc.description.abstract | 網路嵌入 (network embedding) 是一種已被廣泛應用的表示學習方法,其目的是將稀疏的圖 (graph) 資訊降維至低維的隱空間 (latent space)。然而,傳統方法多專注於靜態網路 (static network) 的嵌入學習,而在實際應用中,網路結構是持續變動的;當有新的資訊進入網路時,重新執行靜態網路嵌入算法的時間成本非常高。因此,本論文提出一個針對串流網路嵌入 (streaming network embedding) 的算法,以解決下列問題:1) 在多型態更新的條件下,有效率地選擇應該更新的點;2) 決定被選更新之點的更新幅度,並基於拓樸 (topology) 調整相鄰的點。為填補此領域之不足,我們提出名為 Graph Memory Refreshing (GMR) 的算法,以保留圖上的結構資訊並維持嵌入向量 (embedding vector) 的一致性。在理論分析上,我們證明 GMR 具有更好的一致性與時間複雜度;實驗結果也顯示,GMR 無論在準確率或執行時間上皆明顯優於先前的嵌入算法。 | zh_TW |
dc.description.abstract | Static network embedding has been widely studied to convert sparse structural information into a dense latent space for various applications. However, real networks evolve continuously, and deriving the whole embedding for every snapshot is computationally intensive. In this thesis, we therefore explore streaming network embedding to 1) efficiently identify the nodes whose embeddings need to be updated under multi-type network changes, and 2) carefully revise those embeddings to maintain transduction over different parts of the network. Specifically, we propose a new representation learning framework, named Graph Memory Refreshing (GMR), that preserves both structural information and embedding consistency for streaming network embedding. We prove that GMR is more consistent than other state-of-the-art methods, and experimental results show that GMR outperforms the baselines in both accuracy and running time. | en |
dc.description.provenance | Made available in DSpace on 2021-06-17T07:18:32Z (GMT). No. of bitstreams: 1 ntu-108-R06921045-1.pdf: 1773082 bytes, checksum: 965465977570b4e9a2a7d48522db4253 (MD5) Previous issue date: 2019 | en |
dc.description.tableofcontents | 口試委員會審定書 (Thesis Committee Certification) iii
誌謝 (Acknowledgements) v
摘要 (Chinese Abstract) vii
Abstract ix
1 Introduction 1
2 Related Work 5
3 Problem Formulation 9
4 Graph Memory Refreshing 11
4.1 Multi-type Embedding Updating 11
4.2 Hierarchical Addressing 12
4.3 Refresh & Percolate 17
5 Theoretical Analysis 21
5.1 Consistency 21
5.2 Influence Distribution 29
5.3 Convergence Analysis and Time Complexity 30
6 Experiments 33
6.1 Experiment Setup 33
6.2 Quantitative Results 34
6.2.1 Link Prediction 34
6.2.2 Node Classification 35
6.2.3 Network Visualization 37
6.3 Sensitivity Test 37
6.3.1 Deletion Weight α 37
6.3.2 Filtering Threshold h 38
6.3.3 Number of Search Beams k 39
6.3.4 Embedding Dimension d 40
6.3.5 Comparisons of Incorporating with Different Static Models 41
7 Conclusion 43
Bibliography 45 | |
dc.language.iso | en | |
dc.title | 基於記憶更新機制的串流網路嵌入學習 | zh_TW |
dc.title | Streaming Network Embedding with Memory Refreshing | en |
dc.type | Thesis | |
dc.date.schoolyear | 107-2 | |
dc.description.degree | 碩士 (Master) | |
dc.contributor.coadvisor | 楊得年(De-Nian Yang) | |
dc.contributor.oralexamcommittee | 帥宏翰(Hong-Han Shuai),李宏毅(Hung-yi Lee) | |
dc.subject.keyword | 網路嵌入,串流網路,線上學習 | zh_TW |
dc.subject.keyword | Network Embedding, Streaming Network, Online Learning | en |
dc.relation.page | 50 | |
dc.identifier.doi | 10.6342/NTU201901324 | |
dc.rights.note | 有償授權 (paid-access authorization) | |
dc.date.accepted | 2019-07-10 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
Appears in Collections: | 電機工程學系 (Department of Electrical Engineering)
Files in This Item:
File | Size | Format |
---|---|---|
ntu-108-1.pdf (currently not authorized for public access) | 1.73 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their respective copyright terms.