NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51400
Full metadata record (DC field: value [language]):
dc.contributor.advisor: 林守德 (Shou-de Lin)
dc.contributor.author: Yen-Hua Huang [en]
dc.contributor.author: 黃彥樺 [zh_TW]
dc.date.accessioned: 2021-06-15T13:32:56Z
dc.date.available: 2016-03-08
dc.date.copyright: 2016-03-08
dc.date.issued: 2015
dc.date.submitted: 2016-02-02
dc.identifier.citation:
[1] D. Kempe, J. Kleinberg, and E. Tardos, "Maximizing the spread of influence through a social network," in ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2003.
[2] P. Domingos and M. Richardson, "Mining the network value of customers," in ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2001.
[3] M. Richardson and P. Domingos, "Mining knowledge-sharing sites for viral marketing," in ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2002.
[4] W. Chen, Y. Wang, and S. Yang, "Efficient influence maximization in social networks," in ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2009.
[5] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen, and N. Glance, "Cost-effective outbreak detection in networks," in ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2007.
[6] A. Goyal, W. Lu, and L. V. S. Lakshmanan, "SimPath: An efficient algorithm for influence maximization under the linear threshold model," in IEEE International Conference on Data Mining, 2011.
[7] C. Watkins and P. Dayan, "Q-learning," Machine Learning, vol. 8, pp. 279-292, 1992.
[8] R. Bellman, "A Markovian decision process," Journal of Mathematics and Mechanics, vol. 6, 1957.
[9] S.-C. Lin, S.-D. Lin, and M.-S. Chen, "A learning-based framework to handle multi-round multi-party influence maximization on social networks," in ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2015.
[10] [Online] http://scikit-learn.org/
[11] [Online] http://snap.stanford.edu/data/
[12] [Online] http://konect.uni-koblenz.de/networks/
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51400
dc.description.abstract: Research on influence maximization has, over the past few decades, produced a large body of strategy-driven selection methods. Among these, the greedy algorithm has been proven to cover at least 63% of the optimal spread, making it a very strong and competitive algorithm. Here we propose a learning-based framework for influence maximization, with the goal of surpassing the greedy algorithm in both spread and efficiency. Our model, which combines a reinforcement-learning architecture with a classifier, not only reduces the amount of labelled training data required, but also allows the influence-maximization strategy to evolve incrementally and to find a suitable strategy for each situation; the final result beats the greedy algorithm in both running time and spread. [zh_TW]
dc.description.abstract: Strategies for choosing nodes on a social network so as to maximize the total influence have been studied for decades. The greedy algorithm is a competitive strategy: it has been proven to cover at least 63% of the optimal spread (the (1 - 1/e) approximation guarantee for monotone submodular spread functions [1]). Here we propose a learning-based framework for influence maximization, aiming to outperform the greedy algorithm in terms of both coverage and efficiency. The proposed reinforcement-learning framework, combined with a classification model, not only alleviates the requirement for labelled training data, but also allows the influence-maximization strategy to be developed gradually, eventually outperforming a basic greedy approach. [en]
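The abstract names the two ingredients the framework builds on: spread under the Linear Threshold model (thesis Section 2.1) and Q-learning [7] (Section 2.2). As a rough, self-contained illustration only (not the thesis's implementation: the toy graph, reward definition, hyperparameters, and all identifiers below are hypothetical, and the classifier component the abstract describes is omitted), a tabular Q-learner can treat "which node to add to the seed set next" as the action and the marginal spread as the reward:

import random
from collections import defaultdict

# Hypothetical toy directed graph: node -> list of (neighbor, influence weight).
GRAPH = {
    0: [(1, 0.5), (2, 0.5)],
    1: [(3, 0.6)],
    2: [(3, 0.4), (4, 0.7)],
    3: [(4, 0.3)],
    4: [],
}

def lt_spread(seeds, thresholds):
    # Linear Threshold spread: a node activates once the total weight of its
    # active in-neighbors reaches its (randomly drawn) activation threshold.
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        incoming = defaultdict(float)
        for u in active:
            for v, w in GRAPH[u]:
                incoming[v] += w
        for v, total in incoming.items():
            if v not in active and total >= thresholds[v]:
                active.add(v)
                changed = True
    return len(active)

# Tabular Q-learning: state = current seed set, action = next node to add,
# reward = marginal spread gained by adding that node.
Q = defaultdict(float)
ALPHA, GAMMA, EPSILON, BUDGET = 0.1, 0.9, 0.2, 2

for episode in range(2000):
    thresholds = {v: random.random() for v in GRAPH}  # LT model randomness
    seeds, prev_spread = frozenset(), 0
    for _ in range(BUDGET):
        candidates = [v for v in GRAPH if v not in seeds]
        if random.random() < EPSILON:
            action = random.choice(candidates)                     # explore
        else:
            action = max(candidates, key=lambda v: Q[(seeds, v)])  # exploit
        next_seeds = seeds | {action}
        spread = lt_spread(next_seeds, thresholds)
        reward = spread - prev_spread
        future = max((Q[(next_seeds, v)] for v in GRAPH if v not in next_seeds),
                     default=0.0)
        Q[(seeds, action)] += ALPHA * (reward + GAMMA * future - Q[(seeds, action)])
        seeds, prev_spread = next_seeds, spread

print("learned first seed:", max(GRAPH, key=lambda v: Q[(frozenset(), v)]))

On a real network, enumerating (seed set, node) pairs in a table is infeasible; the classifier the abstract mentions presumably generalizes the learned values across states, but that component is beyond this sketch.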
dc.description.provenance: Made available in DSpace on 2021-06-15T13:32:56Z (GMT). No. of bitstreams: 1. ntu-104-R02944055-1.pdf: 1140890 bytes, checksum: 2b7f6298c40d57fcafb722b5953964bb (MD5). Previous issue date: 2015. [en]
dc.description.tableofcontents:
Thesis Committee Certification i
Acknowledgements ii
Chinese Abstract iii
ABSTRACT iv
CONTENTS v
LIST OF FIGURES vi
LIST OF TABLES vii
Chapter 1 Introduction 1
Chapter 2 Preliminary 6
2.1 Linear Threshold Model (LT model) 6
2.2 Q-learning 7
Chapter 3 Problem Definition and Methodology 9
3.1 Problem Definition 9
3.2 Methodology 10
3.2.1 Strategy 11
3.2.2 Q-learning 13
3.2.3 Classifier 19
Chapter 4 Experiment 25
4.1 Hypothesis 25
4.2 Experiment Setting 25
4.3 Result 27
Chapter 5 Conclusion 37
REFERENCE 38
dc.language.iso: en
dc.subject: 訊息傳播最大化 (Influence Maximization) [zh_TW]
dc.subject: 增強式學習 (Reinforcement Learning) [zh_TW]
dc.subject: 社群網路 (Social Network) [zh_TW]
dc.subject: 機器學習 (Machine Learning) [zh_TW]
dc.subject: 貪婪演算法 (Greedy Algorithm) [zh_TW]
dc.subject: Reinforcement-Learning [en]
dc.subject: Social network [en]
dc.subject: Influence Maximization [en]
dc.subject: Machine learning [en]
dc.subject: Greedy Algorithm [en]
dc.title: 在未標記資料上利用增強式學習解決影響力最大化之問題 [zh_TW]
dc.title: Exploiting Reinforcement-Learning for Influence Maximization without Human-Annotated Data [en]
dc.type: Thesis
dc.date.schoolyear: 104-1
dc.description.degree: Master (碩士)
dc.contributor.oralexamcommittee: 林軒田 (Hsuan-Tien Lin), 楊得年 (De-Nian Yang), 葉彌妍 (Mi-Yen Yeh), 李政德 (Cheng-Te Li)
dc.subject.keyword: 增強式學習, 社群網路, 訊息傳播最大化, 機器學習, 貪婪演算法 [zh_TW]
dc.subject.keyword: Reinforcement-Learning, Social network, Influence Maximization, Machine learning, Greedy Algorithm [en]
dc.relation.page: 39
dc.rights.note: 有償授權 (paid authorization; restricted access)
dc.date.accepted: 2016-02-02
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia) [zh_TW]
Appears in collections: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)

Files in this item:
File: ntu-104-1.pdf (not authorized for public access)
Size: 1.11 MB
Format: Adobe PDF

