Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/67054
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 鄭卜壬 | |
dc.contributor.author | Hao-Cheng Wang | en |
dc.contributor.author | 王浩丞 | zh_TW |
dc.date.accessioned | 2021-06-17T01:18:35Z | - |
dc.date.available | 2022-08-20 | |
dc.date.copyright | 2017-08-20 | |
dc.date.issued | 2016 | |
dc.date.submitted | 2017-08-11 | |
dc.identifier.citation | [1] A. Anderson, R. Kumar, A. Tomkins, and S. Vassilvitskii. The dynamics of repeat consumption. In Proceedings of the 23rd International Conference on World Wide Web, pages 419–430. ACM, 2014.
[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[3] B. E. Kahn, M. U. Kalwani, and D. G. Morrison. Measuring variety-seeking and reinforcement behaviors using panel data. Journal of Marketing Research, 23(2):89–100, 1986.
[4] A. R. Benson, R. Kumar, and A. Tomkins. Modeling user consumption sequences. In Proceedings of the 25th International Conference on World Wide Web, pages 519–529. International World Wide Web Conferences Steering Committee, 2016.
[5] W. Chan, N. Jaitly, Q. V. Le, and O. Vinyals. Listen, attend and spell. arXiv preprint arXiv:1508.01211, 2015.
[6] J. Chen, C. Wang, and J. Wang. Will you "reconsume" the near past? Fast prediction on short-term reconsumption behaviors. In AAAI, pages 23–29, 2015.
[7] J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio. Attention-based models for speech recognition. In Advances in Neural Information Processing Systems, pages 577–585, 2015.
[8] T. Di Noia, V. C. Ostuni, J. Rosati, P. Tomeo, and E. Di Sciascio. An analysis of users' propensity toward diversity in recommendations. In Proceedings of the 8th ACM Conference on Recommender Systems, pages 285–288. ACM, 2014.
[9] M. Glanzer. Curiosity, exploratory drive, and stimulus satiation. Psychological Bulletin, 55(5):302, 1958.
[10] K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693–1701, 2015.
[11] Y. Hijikata, T. Shimizu, and S. Nishida. Discovery-oriented collaborative filtering for improving user satisfaction. In Proceedings of the 14th International Conference on Intelligent User Interfaces, pages 67–76. ACM, 2009.
[12] G. Hinton, N. Srivastava, and K. Swersky. Lecture 6a: Overview of mini-batch gradient descent. Coursera lecture slides, https://class.coursera.org/neuralnets-2012-001/lecture. [Online].
[13] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[14] K. Kapoor, K. Subbian, J. Srivastava, and P. Schrater. Just in time recommendations: Modeling the dynamics of boredom in activity streams. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 233–242. ACM, 2015.
[15] K. Kapoor, M. Sun, J. Srivastava, and T. Ye. A hazard based approach to user return time prediction. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1719–1728. ACM, 2014.
[16] M.-T. Luong, H. Pham, and C. D. Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
[17] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[18] S. Sukhbaatar, J. Weston, R. Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2440–2448, 2015.
[19] L. White, R. Togneri, W. Liu, and M. Bennamoun. How well sentence embeddings capture meaning. In Proceedings of the 20th Australasian Document Computing Symposium, page 9. ACM, 2015.
[20] K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. CoRR, abs/1502.03044, 2015.
[21] M. Zhang and N. Hurley. Avoiding monotony: Improving the diversity of recommendation lists. In Proceedings of the 2008 ACM Conference on Recommender Systems, RecSys '08, pages 123–130, New York, NY, USA, 2008. ACM.
[22] C.-N. Ziegler, S. M. McNee, J. A. Konstan, and G. Lausen. Improving recommendation lists through topic diversification. In Proceedings of the 14th International Conference on World Wide Web, pages 22–32. ACM, 2005. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/67054 | - |
dc.description.abstract | 推薦系統的設計總是致力於推薦符合使用者興趣的物品,然而使用者的興趣轉移卻很少被納入考量。我們觀察到音樂推薦平台如YouTube,使用的推薦策略也是以同歌手或同歌名歌曲為主,它背後假設使用者總是會想聽類似的歌曲,卻沒有考慮使用者厭倦的情況。為了提供一個更貼近使用者的體驗,這幾年有越來越多研究致力於將新鮮感加入推薦清單中;然而,卻沒有任何研究提及「使用者什麼時候會興趣轉移」這個問題,而這個問題將會影響我們要採取的推薦策略。因此在這篇論文中,我們提出一個新模型來預測使用者的興趣轉移。透過近年在各領域獲得成功的深度學習,我們試著建立使用者心理狀態的隱表示法,並透過專注機制模型以找出興趣轉移的關鍵。實驗結果顯示我們提出的模型在準確率和解讀性上均有良好的表現。 | zh_TW |
dc.description.abstract | Recommendation systems have mainly dealt with the problem of recommending items that fit user preferences, while the dynamicity of user interest is not fully considered. We observe that music streaming platforms like YouTube tend to recommend songs either from the same artist or with the same title, assuming that users have a static interest in similar items, while ignoring the fact that we get satiated easily by repeated consumption. To provide a more appealing user experience, recent work on recommendation systems has focused on introducing novelty into the recommendation list; however, none of these works addresses the question "when will users shift their interest?", the key problem that determines whether we should recommend new items or similar items. In this work, we present a novel model for interest shift prediction. Using state-of-the-art deep learning techniques that excel at extracting high-level knowledge, we construct latent representations of the user's mental states, and apply an attention mechanism to automatically detect the shifting patterns in listening records. Experiments and case studies show that our model achieves good accuracy as well as interpretability. | en |
dc.description.provenance | Made available in DSpace on 2021-06-17T01:18:35Z (GMT). No. of bitstreams: 1 ntu-105-R03922152-1.pdf: 1644000 bytes, checksum: 58ac20c964acc6fdd5e9b0db241c7ad6 (MD5) Previous issue date: 2016 | en |
dc.description.tableofcontents | Thesis Committee Certification iii
Acknowledgements v
Chinese Abstract vii
Abstract ix
1 Introduction 1
2 Related Works 5
2.1 Novelty 5
2.1.1 Re-consumption prediction 6
2.1.2 Mental states transition modeling 6
2.1.3 Diversity in recommendation 6
2.2 Background Techniques 7
2.2.1 Recurrent Neural Network 7
2.2.2 Attention-based Model 8
3 Preliminaries 9
3.1 Interest Shift 9
3.2 Notations 10
3.3 Interest Encodes 10
3.4 Problem Definition 11
4 Proposed Model 13
4.1 Model Architecture 13
4.2 Deep LSTM Encoder 13
4.2.1 LSTM 14
4.2.2 Dropout 15
4.3 Session Attention 16
4.3.1 Weight Calculation 16
4.3.2 Softmax 17
4.3.3 Weighted Sum 17
4.4 Document Attention 17
4.4.1 Document Formulation 18
4.4.2 Attention Mechanism 18
4.5 Prediction 19
4.6 Cost Function & Optimization 19
5 Experiments 21
5.1 Dataset 21
5.2 Experiment Settings 22
5.3 Baselines 22
5.4 Results 23
5.4.1 Performances 23
5.4.2 Performances of Attention Layers 23
5.5 Attention Mechanisms 24
5.5.1 Session Attention 24
5.5.2 Document Attention 24
6 Case Study 27
6.1 Case Study I 27
6.2 Case Study II 29
7 Conclusions and Future Work 31
Bibliography 33 | |
dc.language.iso | en | |
dc.title | 利用專注機制神經網路之使用者興趣轉移預測 | zh_TW |
dc.title | An Attention-based Neural Network Model for Interest Shift Prediction | en |
dc.type | Thesis | |
dc.date.schoolyear | 105-2 | |
dc.description.degree | Master's | |
dc.contributor.oralexamcommittee | 蒙以亨,盧文祥,徐典裕 | |
dc.subject.keyword | 深度學習,類神經網路,專注機制模型,興趣轉移預測,音樂推薦, | zh_TW |
dc.subject.keyword | Deep Learning, Neural Network, Attention-based Model, Interest Shift Prediction, Music Recommendation | en |
dc.relation.page | 35 | |
dc.identifier.doi | 10.6342/NTU201702975 | |
dc.rights.note | Authorized (with fee) | |
dc.date.accepted | 2017-08-14 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
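The abstract above describes scoring session representations with an attention layer and combining them by a softmax-weighted sum to detect interest shifts. As an illustration only (the thesis's actual weight calculation and encoder are not reproduced here; the toy vectors and function names below are hypothetical), a minimal dot-product attention sketch in plain Python:

```python
import math

def softmax(scores):
    # Numerically stable softmax: shift by the max score, exponentiate, normalize.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys):
    # Dot-product attention: score each session vector against the query,
    # normalize the scores with softmax, and return the attention weights
    # together with the weighted sum of the session vectors (the context).
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(keys[0])
    context = [sum(w * key[i] for w, key in zip(weights, keys))
               for i in range(dim)]
    return weights, context

# Toy example: three 2-d session encodings and a query (current mental state).
sessions = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
query = [1.0, 0.0]
weights, context = attention(query, sessions)
```

Sessions whose encodings align with the query receive larger weights; inspecting these weights is what gives attention-based models the kind of interpretability the abstract claims.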
Appears in Collections: | Department of Computer Science and Information Engineering |
Files in This Item:
File | Size | Format |
---|---|---|
ntu-105-1.pdf (restricted; not publicly available) | 1.61 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.