Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51400
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 林守德(Shou-de Lin) | |
dc.contributor.author | Yen-Hua Huang | en |
dc.contributor.author | 黃彥樺 | zh_TW |
dc.date.accessioned | 2021-06-15T13:32:56Z | - |
dc.date.available | 2016-03-08 | |
dc.date.copyright | 2016-03-08 | |
dc.date.issued | 2015 | |
dc.date.submitted | 2016-02-02 | |
dc.identifier.citation | [1] D. Kempe, J. Kleinberg, and E. Tardos, "Maximizing the spread of influence through a social network," in ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2003. [2] P. Domingos and M. Richardson, "Mining the network value of customers," in ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2001. [3] M. Richardson and P. Domingos, "Mining knowledge-sharing sites for viral marketing," in ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2002. [4] W. Chen, Y. Wang, and S. Yang, "Efficient influence maximization in social networks," in ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2009. [5] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen, and N. Glance, "Cost-effective outbreak detection in networks," in ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2007. [6] A. Goyal, W. Lu, and L. V. S. Lakshmanan, "SimPath: An efficient algorithm for influence maximization under the linear threshold model," in IEEE International Conference on Data Mining, 2011. [7] C. Watkins and P. Dayan, "Q-learning," Machine Learning, vol. 8, pp. 279-292, 1992. [8] R. Bellman, "A Markovian decision process," Journal of Mathematics and Mechanics, vol. 6, 1957. [9] S.-C. Lin, S.-D. Lin, and M.-S. Chen, "A learning-based framework to handle multi-round multi-party influence maximization on social networks," in ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2015. [10] [Online] http://scikit-learn.org/ [11] [Online] http://snap.stanford.edu/data/ [12] [Online] http://konect.uni-koblenz.de/networks/ | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51400 | - |
dc.description.abstract | 在影響力最大化的研究上近幾十年間已有大量以策略導向進行選擇的方式之研究。在其中已證明貪婪演算法(Greedy Algorithm)能夠達到至少涵蓋63%以上的最大擴散範圍,因此是個非常強大且有競爭力的演算法。在此我們提出以學習為主框架的方式來解決影響力最大化問題,目標是超越貪婪演算法的影響範圍和效率。我們提出的增強式學習架構與分類器相結合的模型,不僅減輕了資料上所需標記的訓練數據,而且還允許逐步發展的影響最大化的策略,能夠在每個狀況下找出其適合的策略,最後的表現不論在執行時間和影響範圍都打敗貪婪演算法。 | zh_TW |
dc.description.abstract | Strategies for choosing nodes on a social network to maximize the total influence have been studied for decades. Studies have shown that the greedy algorithm is a competitive strategy: it has been proved to cover at least 63% of the optimal spread. Here we propose a learning-based framework for influence maximization that aims to outperform the greedy algorithm in terms of both coverage and efficiency. The proposed reinforcement learning framework, combined with a classification model, not only alleviates the requirement for labelled training data but also allows the influence maximization strategy to be developed gradually, eventually outperforming a basic greedy approach. | en |
dc.description.provenance | Made available in DSpace on 2021-06-15T13:32:56Z (GMT). No. of bitstreams: 1 ntu-104-R02944055-1.pdf: 1140890 bytes, checksum: 2b7f6298c40d57fcafb722b5953964bb (MD5) Previous issue date: 2015 | en |
dc.description.tableofcontents | 口試委員審定書 i
誌謝 ii
中文摘要 iii
ABSTRACT iv
CONTENTS v
LIST OF FIGURES vi
LIST OF TABLES vii
Chapter 1 Introduction 1
Chapter 2 Preliminary 6
2.1 Linear Threshold Model (LT model) 6
2.2 Q-learning 7
Chapter 3 Problem Definition and Methodology 9
3.1 Problem Definition 9
3.2 Methodology 10
3.2.1 Strategy 11
3.2.2 Q-learning 13
3.2.3 Classifier 19
Chapter 4 Experiment 25
4.1 Hypothesis 25
4.2 Experiment Setting 25
4.3 Result 27
Chapter 5 Conclusion 37
REFERENCE 38 | |
dc.language.iso | en | |
dc.title | 在未標記資料上利用增強式學習解決影響力最大化之問題 | zh_TW |
dc.title | Exploiting Reinforcement-Learning for Influence Maximization without Human-Annotated Data | en |
dc.type | Thesis | |
dc.date.schoolyear | 104-1 | |
dc.description.degree | 碩士 (Master) | |
dc.contributor.oralexamcommittee | 林軒田(Hsuan-Tien Lin),楊得年(De-Nian Yang),葉彌妍(Mi-Yen Yeh),李政德(Cheng-Te Li) | |
dc.subject.keyword | 增強式學習,社群網路,訊息傳播最大化,機器學習,貪婪演算法 | zh_TW |
dc.subject.keyword | Reinforcement-Learning, Social network, Influence Maximization, Machine learning, Greedy Algorithm | en |
dc.relation.page | 39 | |
dc.rights.note | 有償授權 (paid authorization) | |
dc.date.accepted | 2016-02-02 | |
dc.contributor.author-college | 電機資訊學院 (College of Electrical Engineering and Computer Science) | zh_TW |
dc.contributor.author-dept | 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia) | zh_TW |
Appears in Collections: | 資訊網路與多媒體研究所 |
Files in This Item:
File | Size | Format |
---|---|---|---|
ntu-104-1.pdf (Restricted Access; currently not authorized for public access) | 1.11 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
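The abstract above describes combining Q-learning (reference [7]) with a classifier to learn an influence-maximization strategy. As a minimal, illustrative sketch of the standard tabular Q-learning update from that reference — the state/action names and reward below are hypothetical assumptions for illustration, not details taken from the thesis, whose actual states summarize the network's diffusion status:

```python
def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    # Best estimated value achievable from the successor state.
    best_next = max((Q.get((next_state, a), 0.0) for a in actions), default=0.0)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(state, action)]

# Toy episode (hypothetical): the reward is the number of newly influenced
# nodes after picking a seed node in the current round.
Q = {}
q_update(Q, state="round0", action="seed_nodeA", reward=5.0,
         next_state="round1", actions=["seed_nodeA", "seed_nodeB"])
```

In a setup like the thesis describes, the learned Q-values would then guide seed selection in later rounds instead of the greedy algorithm's repeated spread simulations.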