Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97992

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 林澤 | zh_TW |
| dc.contributor.advisor | Che Lin | en |
| dc.contributor.author | 洪明邑 | zh_TW |
| dc.contributor.author | Ming-Yi Hong | en |
| dc.date.accessioned | 2025-07-23T16:22:51Z | - |
| dc.date.available | 2025-07-24 | - |
| dc.date.copyright | 2025-07-23 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-07-02 | - |
| dc.identifier.citation | Bibliography
D. Burgstahler, U. Lampe, N. Richerzhagen, and R. Steinmetz, “Push vs. pull: An energy perspective (short paper),” 12 2013, pp. 190–193. M. Richardson, E. Dominowska, and R. Ragno, “Predicting clicks: Estimating the click-through rate for new ads,” in Proceedings of the 16th International Conference on World Wide Web, ser. WWW ’07, 2007, p. 521–530. G. Zhou, N. Mou, Y. Fan, Q. Pi, W. Bian, C. Zhou, X. Zhu, and K. Gai, “Deep interest evolution network for click-through rate prediction,” in Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, 2019. X. Li, C. Wang, B. Tong, J. Tan, X. Zeng, and T. Zhuang, “Deep time-aware item evolution network for click-through rate prediction,” in Proceedings of the 29th ACM International Conference on Information & Knowledge Management, ser. CIKM ’20. New York, NY, USA: Association for Computing Machinery, 2020, p. 785–794. J. Wei, J. He, K. Chen, Y. Zhou, and Z. Tang, “Collaborative filtering and deep learning based recommendation system for cold start items,” Expert Systems with Applications, pp. 29–39, 2017. C.-C. Yu, M.-Y. Hong, C.-Y. Ho, and C. Lin, “Push4rec: Temporal and contextual trend-aware transformer push notification recommender,” in ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024, pp. 6625–6629. H.-T. Cheng, L. Koc, J. Harmsen, T. Shaked, T. Chandra, H. Aradhye, G. Anderson, G. Corrado, W. Chai, M. Ispir, R. Anil, Z. Haque, L. Hong, V. Jain, X. Liu, and H. Shah, “Wide and deep learning for recommender systems,” in Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, ser. DLRS 2016, 2016, p. 7–10. H. Guo, R. Tang, Y. Ye, Z. Li, and X. 
He, “Deepfm: A factorization-machine based neural network for ctr prediction,” in Proceedings of the 26th International Joint Conference on Artificial Intelligence, ser. IJCAI’17, 2017, p. 1725–1731. R. Wang, B. Fu, G. Fu, and M. Wang, “Deep & cross network for ad click predictions,” in Proceedings of the ADKDD’17, ser. ADKDD’17, 2017. G. Zhou, X. Zhu, C. Song, Y. Fan, H. Zhu, X. Ma, Y. Yan, J. Jin, H. Li, and K. Gai, “Deep interest network for click through rate prediction,” in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD 18, 2018, p. 1059–1068. D. Chakrabarti, D. Agarwal, and V. Josifovski, “Contextual advertising by combining relevance with click feedback,” in Proceedings of the 17th International Conference on World Wide Web, ser. WWW ’08, 2008. J. Pan, J. Xu, A. L. Ruiz, W. Zhao, S. Pan, Y. Sun, and Q. Lu, “Field-weighted factorization machines for click-through rate prediction in display advertising,” in Proceedings of the 2018 World Wide Web Conference, ser. WWW ’18, 2018, p. 1349–1357. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, ser. NIPS’17. Curran Associates Inc., 2017, p. 6000–6010. Q. Chen, H. Zhao, W. Li, P. Huang, and W. Ou, “Behavior sequence transformer for e-commerce recommendation in alibaba,” in Proceedings of the 1st International Workshop on Deep Learning Practice for High-Dimensional Sparse Data, ser. DLPKDD ’19, 2019. P. G. Campos, F. Díez, and I. Cantador, “Time-aware recommender systems: a comprehensive survey and analysis of existing evaluation protocols,” User Modeling and User-Adapted Interaction, vol. 24, no. 1-2, pp. 67–119, 2013. J. Li, Y. Wang, and J. 
McAuley, “Time interval aware self-attention for sequential recommendation,” in Proceedings of the 13th International Conference on Web Search and Data Mining, ser. WSDM ’20, 2020, p. 322–330. J. Jin, X. Chen, W. Zhang, J. Huang, Z. Feng, and Y. Yu, “Learn over past, evolve for future: Search-based time-aware recommendation with sequential behavior data,” in Proceedings of the ACM Web Conference 2022, ser. WWW ’22, 2022, p. 2451– 2461. Y. Gu, Z. Ding, S. Wang, and D. Yin, “Hierarchical user profiling for e-commerce recommender systems,” in Proceedings of the 13th International Conference on Web Search and Data Mining, ser. WSDM ’20, 2020, p. 223–231. S. Yu, C. Yang, Z. Jie, and X. Shi, “Time-aware attentive click sequence network for click-through rate prediction,” in Proceedings of the 4th International Conference on Big Data Engineering, ser. BDE ’22, 2022, p. 134–139. T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation of word representations in vector space,” 2013. J. Pennington, R. Socher, and C. Manning, “GloVe: Global vectors forword representation,” in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014, pp. 1532–1543. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” in North American Chapter of the Association for Computational Linguistics, 2019. A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, “Language models are unsupervised multitask learners,” 2019. C. Wu, F. Wu, S. Ge, T. Qi, Y. Huang, and X. Xie, “Neural news recommendation with multi-head self-attention,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 6389– 6394. C. Jeong, S. Jang, E. Park, and S. 
Choi, “Acontext-aware citation recommendation model with BERT and graph convolutional networks,” Scientometrics, vol. 124, no. 3, pp. 1907–1922, 2020. G. de Souza Pereira Moreira, F. Ferreira, and A. M. da Cunha,“Newssession-based recommendations using deep neural networks,” in Proceedings of the 3rd Workshop on Deep Learning for Recommender Systems, ser. DLRS 2018, 2018, p. 15–23. C. Wu, F. Wu, T. Qi, and Y. Huang,“Empowering news recommendation with pretrained language models,” ser. SIGIR ’21, 2021, p. 1652–1656. Y. Gu, Z. Ding, S. Wang, L. Zou, Y. Liu, and D. Yin, “Deep multifaceted transformers for multi-objective ranking in large-scale e-commerce recommender systems,” in Proceedings of the 29th ACM International Conference on Information & Knowledge Management, ser. CIKM ’20, 2020, p. 2493–2500. Y. Ho and S. Wookey, “The real-world-weight cross-entropy loss function: Modeling the costs of mislabeling,” IEEE Access, vol. 8, pp. 4806–4813, 2020. C. Xiong, S. Merity, and R. Socher, “Dynamic memory networks for visual and textual question answering,” in Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ser. ICML’16, 2016, p. 2397–2406. K. Hajian-Tilaki, “Receiver operating characteristic (roc) curve analysis for medical diagnostic test evaluation,” Caspian journal of internal medicine, vol. 4, pp. 627–635, 09 2013. J. Davis and M. Goadrich, “The relationship between precision-recall and roc curves,” in Proceedings of the 23rd International Conference on Machine Learning, ser. ICML ’06, 2006, p. 233–240. K. Boyd, K. H. Eng, and C. D. Page, “Area under the precision-recall curve: Point estimates and confidence intervals,” in Proceedings of the 2013th European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part III, ser. ECMLPKDD’13, 2013, p. 451–466. J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. 
Anadkat et al., “GPT-4 technical report,” arXiv preprint arXiv:2303.08774, 2023. A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Yang, A. Fan et al., “The llama 3 herd of models,” arXiv preprint arXiv:2407.21783, 2024. W.-C. Kang and J. McAuley, “Self-attentive sequential recommendation,” in IEEE international conference on data mining (ICDM). IEEE, 2018, pp. 197–206. F. Sun, J. Liu, J. Wu, C. Pei, X. Lin, W. Ou, and P. Jiang, “BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer,” in Proceedings of the 28th ACM international conference on information and knowledge management, 2019, pp. 1441–1450. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778. K. O’Shea, “An introduction to convolutional neural networks,” arXiv preprint arXiv:1511.08458, 2015. A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark et al., “Learning transferable visual models from natural language supervision,” in International conference on machine learning. PMLR, 2021, pp. 8748–8763. S. Geng, S. Liu, Z. Fu, Y. Ge, and Y. Zhang, “Recommendation as language processing (RLP): A unified pretrain, personalized prompt & predict paradigm (P5),” in Proceedings of the 16th ACM Conference on Recommender Systems, 2022, pp. 299–315. H. Lyu, S. Jiang, H. Zeng, Y. Xia, Q. Wang, S. Zhang, R. Chen, C. Leung, J. Tang, and J. Luo, “LLM-Rec: Personalized recommendation via prompting large language models,” in Findings of the Association for Computational Linguistics: NAACL, K. Duh, H. Gomez, and S. Bethard, Eds. Association for Computational Linguistics, 2024, pp. 583–612. B. Ugurlu, “Style4rec: Enhancing transformer-based e-commerce recommendation systems with style and shopping cart information,” NTU Theses and Dissertations, pp. 
1–59, 2023. R. He and J. McAuley, “VBPR: visual bayesian personalized ranking from implicit feedback,” in Proceedings of the AAAI conference on artificial intelligence, vol. 30, 2016. F. Liu, Z. Cheng, C. Sun, Y. Wang, L. Nie, and M. Kankanhalli, “User diverse preference modeling by multimodal attentive metric learning,” in Proceedings of the 27th ACM international conference on multimedia, 2019, pp. 1526–1534. J. Liang, X. Zhao, M. Li, Z. Zhang, W. Wang, H. Liu, and Z.Liu, “MMMLP: Multimodal multilayer perceptron for sequential recommendations,” in Proceedings of the ACM Web Conference, 2023, pp. 1109–1117. G. Shani, D. Heckerman, R. I. Brafman, and C. Boutilier, “An mdp-based recommender system.” Journal of machine Learning research, vol. 6, no. 9, 2005. B. Hidasi, “Session-based recommendations with recurrent neural networks,” arXiv preprint arXiv:1511.06939, 2015. G. d. S. P. Moreira, S. Rabhi, J. M. Lee, R. Ak, and E. Oldridge, “Transformers4Rec: Bridging the gap between NLP and sequential / session-based recommendation,” in Proceedings of the 15th ACM Conference on Recommender Systems, ser. RecSys ’21. New York, NY, USA: Association for Computing Machinery, 2021, p. 143–153. [Online]. Available: https://doi.org/10.1145/3460231.3474255 C. Li, L. Xia, X. Ren, Y. Ye, Y. Xu, and C. Huang, “Graph transformer for recommendation,” in Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, ser. SIGIR’23. New York, NY, USA: Association for Computing Machinery, 2023, p. 1680– 1689. [Online]. Available: https://doi.org/10.1145/3539618.3591723 Q. Liu, J. Hu, Y. Xiao, X. Zhao, J. Gao, W. Wang, Q. Li, and J. Tang, “Multimodal recommender systems: A survey,” ACM Comput. Surv., 2024. A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. 
Houlsby, “An image is worth 16x16 words: Transformers for image recognition at scale,” in International Conference on Learning Representations, 2021. Z. Zhao, W. Fan, J. Li, Y. Liu, X. Mei, Y. Wang, Z. Wen, F. Wang, X. Zhao, J. Tang et al., “Recommender systems in the era of large language models (LLMs),” arXiv preprint arXiv:2307.02046, 2023. R. Li, W. Deng, Y. Cheng, Z. Yuan, J. Zhang, and F. Yuan, “Exploring the upper limits of text-based collaborative filtering using large language models: Discoveries and insights,” arXiv preprint arXiv:2305.11700, 2023. L. Wu, Z. Zheng, Z. Qiu, H. Wang, H. Gu, T. Shen, C. Qin, C. Zhu, H. Zhu, Q. Liu et al., “A survey on large language models for recommendation,” World Wide Web, vol. 27, no. 5, p. 60, 2024. H. Zhou, X. Zhou, Z. Zeng, L. Zhang, and Z. Shen, “A comprehensive survey on multimodal recommender systems: Taxonomy, evaluation, and future directions,” arXiv preprint arXiv:2302.04473, 2023. C. Wu, F. Wu, T. Qi, and Y. Huang, “MM-Rec:multimodalnewsrecommendation,” arXiv preprint arXiv:2104.07407, 2021. X. Zhou and Z. Shen, “A tale of two graphs: Freezing and denoising graph structures for multimodal recommendation,” in Proceedings of the 31st ACM International Conference on Multimedia, 2023, pp. 935–943. S. Zhong, Z. Huang, D. Li, W. Wen, J. Qin, and L. Lin, “Mirror gradient: Towards robust multimodal recommender systems via exploring flat local minima,” in Proceedings of the ACM on Web Conference 2024, 2024, pp. 3700–3711. C. Wei, J. Liang, D. Liu, and F. Wang, “Contrastive graph structure learning via information bottleneck for recommendation,” in Advances in Neural Information Processing Systems, S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, Eds., vol. 35. Curran Associates, Inc., 2022, pp. 20407–20420. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2022/file/ 803b9c4a8e4784072fdd791c54d614e2-Paper-Conference.pdf W. Zhao, S. Zhong, Y. Liu, W. Wen, J. Qin, M. Liang, and Z. 
Huang, “DVIB: Towards robust multimodal recommender systems via variational information bottleneck distillation,” in Proceedings of the ACM on Web Conference 2025, ser. WWW ’25. New York, NY, USA: Association for Computing Machinery, 2025, p. 2549–2561. [Online]. Available: https://doi.org/10.1145/3696410.3714840 P. G. Campos, F. Díez, and I. Cantador, “Time-aware recommender systems: a comprehensive survey and analysis of existing evaluation protocols,” User Modeling and User-Adapted Interaction, vol. 24, pp. 67–119, 2014. Q. Zhang, L. Cao, C. Shi, and Z. Niu, “Neural time-aware sequential recommendation by jointly modeling preference dynamics and explicit feature couplings,” IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 10, pp. 5125–5137, 2021. H. Jiang, W. Wang, Y. Wei, Z. Gao, Y. Wang, and L. Nie, “What aspect do you like: Multi-scale time-aware user interest modeling for micro-video recommendation,” in Proceedings of the 28th ACM International conference on Multimedia, 2020, pp. 3487–3495. L. Wang, C. Ma, X. Wu, Z. Qiu, Y. Zheng, and X. Chen, “Causally debiased time-aware recommendation,” in Proceedings of the ACM Web Conference 2024, ser. WWW ’24. New York, NY, USA: Association for Computing Machinery, 2024, p. 3331–3342. [Online]. Available: https://doi.org/10.1145/3589334.3645400 L. A. Gatys, “A neural algorithm of artistic style,” arXiv preprint arXiv:1508.06576, 2015. L. A. Gatys, A. S. Ecker, and M. Bethge, “Image style transfer using convolutional neural networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2414–2423. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 3rd International Conference on Learning Representations (ICLR), pp. 1–14, 2015. OpenAI, “Openai GPT-4o API [gpt-4o-mini],” 2024. [Online]. Available: https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/ C.-K. Huang, Y.-H. Hsieh, T.-J. 
Chien, L.-C. Chien, S.-H. Sun, T.-H. Su, J.-H. Kao, and C. Lin, “Scalable numerical embeddings for multivariate time series: Enhancing healthcare data representation learning,” arXiv preprint arXiv:2405.16557, 2024. A. Nagrani, S. Yang, A. Arnab, A. Jansen, C. Schmid, and C. Sun, “Attention bottlenecks for multimodal fusion,” Advances in neural information processing systems, vol. 34, pp. 14 200–14 213, 2021. Z. Meng, R. McCreadie, C. Macdonald, and I. Ounis, “Exploring data splitting strategies for the evaluation of recommendation models,” in Proceedings of the 14th ACM conference on recommender systems, 2020, pp. 681–686. J. M. Stokes, K. Yang, K. Swanson, W. Jin, A. Cubillos-Ruiz, N. M. Donghia, C. R. MacNair, S. French, L. A. Carfrae, Z. Bloom-Ackermann et al., “A deep learning approach to antibiotic discovery,” Cell, vol. 180, no. 4, pp. 688–702, 2020. W. Jin, K. Yang, R. Barzilay, and T. Jaakkola, “Learning multimodal graph-to-graph translation for molecule optimization,” in International Conference on Learning Representations, 2019. [Online]. Available: https://openreview.net/ forum?id=B1xJAsA5F7 S. Wu, Y. Tang, Y. Zhu, L. Wang, X. Xie, and T. Tan, “Session-based recommendation with graph neural networks,” in Proceedings of the AAAI conference on artificial intelligence, vol. 33, 2019, pp. 346–353. S. Wu, F. Sun, W. Zhang, X. Xie, and B. Cui, “Graph neural networks in recommender systems: A survey,” ACM Comput. Surv., vol. 55, no. 5, dec 2022. [Online]. Available: https://doi.org/10.1145/3535101 W. Shi and R. Rajkumar, “Point-gnn: Graph neural network for 3d object detection in a point cloud,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 1711–1719. Y. Li, C. Gu, T. Dullien, O. Vinyals, and P. Kohli, “Graph matching networks for learning the similarity of graph structured objects,” 2019. [Online]. Available: https://openreview.net/forum?id=S1xiOjC9F7 Y. He, Q. Gan, D. Wipf, G. Reinert, J. Yan, and M. 
Cucuringu, “Gnnrank: Learning global rankings from pairwise comparisons via directed graph neural networks,” in ICML 2022, 2022. X. Wang, H. Ji, C. Shi, B. Wang, Y. Ye, P. Cui, and P. S. Yu, “Heterogeneous graph attention network,” in The World Wide Web Conference, ser. WWW ’19. New York, NY, USA: Association for Computing Machinery, 2019, p. 2022–2032. [Online]. Available: https://doi.org/10.1145/3308558.3313562 X. Fu, J. Zhang, Z. Meng, and I. King, “Magnn: Metapath aggregated graph neural network for heterogeneous graph embedding,” in Proceedings of The Web Conference 2020, ser. WWW ’20. New York, NY, USA: Association for Computing Machinery, 2020, p. 2331–2341. [Online]. Available: https://doi.org/10.1145/3366423.3380297 S. Yun, M. Jeong, R. Kim, J. Kang, and H. J. Kim, “Graph transformer networks,” in Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, Eds., vol. 32. Curran Associates, Inc., 2019. [Online]. Available: https://proceedings.neurips.cc/paper/ 2019/file/9d63484abb477c97640154d40595a3bb-Paper.pdf Z. Hu, Y. Dong, K. Wang, and Y. Sun, “Heterogeneous graph transformer,” in Proceedings of The Web Conference 2020, ser. WWW ’20. New York, NY, USA: Association for Computing Machinery, 2020, p. 2704–2710. [Online]. Available: https://doi.org/10.1145/3366423.3380027 Q. Lv, M. Ding, Q. Liu, Y. Chen, W. Feng, S. He, C. Zhou, J. Jiang, Y. Dong, and J. Tang, “Are we really making much progress? revisiting, benchmarking and refining heterogeneous graph neural networks,” in Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, ser. KDD ’21. New York, NY, USA: Association for Computing Machinery, 2021, p. 1150–1160. [Online]. Available: https://doi.org/10.1145/3447548.3467350 Z. Zhou, J. Shi, R. Yang, Y. Zou, and Q. 
Li, “Slotgat: slot-based message passing for heterogeneous graphs,” in Proceedings of the 40th International Conference on Machine Learning, ser. ICML’23. JMLR.org, 2023. M.-Y. Hong, S.-Y. Chang, H.-W. Hsu, Y.-H. Huang, C.-Y. Wang, and C. Lin, “Treexgnn: can gradient-boosted decision trees help boost heterogeneous graph neural networks?” in ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023, pp. 1–5. D. P. Kingma and M. Welling, “Auto-encodingvariationalbayes,”CoRR,vol.abs/ 1312.6114, 2013. D. J. Rezende, S. Mohamed, and D. Wierstra, “Stochastic back-propagation and variational inference in deep latent gaussian models,” ArXiv, vol. abs/1401.4082, 2014. T. Kipf and M. Welling, “Variational graph auto-encoders,” ArXiv, vol. abs/ 1611.07308, 2016. A. Dalvi, A. Acharya, J. Gao, and V. G. Honavar, “Variational graphauto-encoders for heterogeneous information network,” in NeurIPS 2022 Workshop: New Frontiers in Graph Learning, 2022. [Online]. Available: https://openreview.net/ forum?id=-l2yynwJWtX Y. Yang, Z. Guan, Z. Wang, W. Zhao, C. Xu, W. Lu, and J. Huang, “Self-supervised heterogeneous graph pre-training based on structural clustering,” in Proceedings of the 36th International Conference on Neural Information Processing Systems, ser. NIPS ’22. Red Hook, NY, USA: Curran Associates Inc., 2022. X. Wang, N. Liu, H. Han, and C. Shi, “Self-supervised heterogeneous graph neural network with co-contrastive learning,” in Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, ser. KDD ’21. New York, NY, USA: Association for Computing Machinery, 2021, p. 1726–1736. [Online]. Available: https://doi.org/10.1145/3447548.3467415 Y. Ma, N. Yan, J. Li, M. Mortazavi, and N. V. Chawla, “Hetgpt: Harnessing the power of prompt tuning in pre-trained heterogeneous graph neural networks,” in Proceedings of the ACM Web Conference 2024, ser. WWW ’24. 
New York, NY, USA: Association for Computing Machinery, 2024, p. 1015–1023. [Online]. Available: https://doi.org/10.1145/3589334.3645685 X. Yu, Y. Fang, Z. Liu, and X. Zhang, “Hgprompt: Bridging homogeneous and heterogeneous graphs for few-shot prompt learning,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 15, pp. 16578–16586, Mar. 2024. [Online]. Available: https://ojs.aaai.org/index.php/AAAI/article/view/29596 Y. Yang, Z. Guan, J. Li, W. Zhao, J. Cui, and Q. Wang, “Interpretable and efficient heterogeneous graph convolutional network,” IEEE Transactions on Knowledge and Data Engineering, vol. 35, no. 2, pp. 1637–1650, 2023. W. Wang, X. Suo, X. Wei, B. Wang, H. Wang, H.-N. Dai, and X. Zhang, “Hgate: Heterogeneous graph attention auto-encoders,” IEEE Transactions on Knowledge and Data Engineering, vol. 35, no. 4, pp. 3938–3951, 2023. Y. Tian, K. Dong, C. Zhang, C. Zhang, and N. V. Chawla, “Heterogeneous graph masked autoencoders,” in Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence, ser. AAAI’23/IAAI’23/EAAI’23. AAAI Press, 2023. [Online]. Available: https://doi.org/10.1609/aaai.v37i8.26192 W. U. Ahmad, N. Peng, and K.-W. Chang, “Gate: Graph attention transformer encoder for cross-lingual relation and event extraction,” ArXiv, vol. abs/2010.03009, 2020. X. Zhang, J. J. Zhao, and Y. LeCun, “Character-level convolutional networks for text classification,” in NIPS, 2015. K. Kafle, M. Yousefhussien, and C. Kanan, “Data augmentation for visual question answering,” in Proceedings of the 10th International Conference on Natural Language Generation. Santiago de Compostela, Spain: Association for Computational Linguistics, Sep. 2017, pp. 198–202. [Online]. Available: https://aclanthology.org/W17-3529 R. Sennrich, B. Haddow, and A. 
Birch, “Improving neural machine translation models with monolingual data,” in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany: Association for Computational Linguistics, Aug. 2016, pp. 86–96. [Online]. Available: https://aclanthology.org/P16-1009 Q. Xie, Z. Dai, E. H. Hovy, M.-T. Luong, and Q. V. Le, “Unsupervised data augmentation for consistency training,” arXiv: Learning, 2019. S. Edunov, M. Ott, M. Auli, and D. Grangier, “Understanding back-translation at scale,” in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Brussels, Belgium: Association for Com- putational Linguistics, Oct.-Nov. 2018, pp. 489–500. [Online]. Available: https://aclanthology.org/D18-1045 C. Shorten and T. M. Khoshgoftaar, “A survey on image data augmentation for deep learning,” Journal of Big Data, vol. 6, pp. 1–48, 2019. Y. Rong, W. Huang, T. Xu, and J. Huang, “Dropedge: Towards deep graph convolutional networks on node classification,” in International Conference on Learning Representations, 2020. [Online]. Available: https://openreview.net/ forum?id=Hkx1qkrKPr D. Chen, Y. Lin, W. Li, P. Li, J. Zhou, and X. Sun, “Measuring and relieving the over-smoothing problem for graph neural networks from the topological view,” in AAAI Conference on Artificial Intelligence, 2019. T. Zhao, Y. Liu, L. Neves, O. J. Woodford, M. Jiang, and N. Shah, “Data augmentation for graph neural networks,” in AAAI Conference on Artificial Intelligence, 2020. X. Han, Z. Jiang, N. Liu, and X. Hu, “G-mixup: Graph data augmentation for graph classification,” in Proceedings of the 39th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato, Eds., vol. 162. PMLR, 17–23 Jul 2022, pp. 8230–8248. [Online]. Available: https: //proceedings.mlr.press/v162/han22c.html S. Liu, R. Ying, H. Dong, L. 
Li, T. Xu, Y. Rong, P. Zhao, J. Huang, and D. Wu, “Local augmentation for graph neural networks,” in Proceedings of the 39th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato, Eds., vol. 162. PMLR, 17–23 Jul 2022, pp. 14 054–14 072. [Online]. Available: https://proceedings.mlr.press/v162/liu22s.html Y. Zhang, H. Zhu, Z. Song, P. Koniusz, and I. King, “Spectral feature augmentation for graph contrastive learning and beyond,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 9, pp. 11 289–11 297, Jun. 2023. [Online]. Available: https://ojs.aaai.org/index.php/AAAI/article/view/26336 M. Zhou and Z. Gong, “Graphsr: A data augmentation algorithm for imbalanced node classification,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 4, pp. 4954–4962, Jun. 2023. [Online]. Available: https://ojs.aaai.org/index.php/AAAI/article/view/25622 Z. Zhang, L. Li, S. Wan, S. Wang, Z. Wang, Z. Lu, D. Hao, and W. Li, “Dropedge not foolproof: Effective augmentation method for signed graph neural networks,” in Advances in Neural Information Processing Systems, A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, Eds., vol. 37. Curran Associates, Inc., 2024, pp. 117041–117069. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2024/file/ d450dceeacd6083d1d550247377f2320-Paper-Conference.pdf T.-Y. Lin, P. Goyal, R. B. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2999–3007, 2017. T. Chen, K. Zhou, K. Duan, W. Zheng, P. Wang, X. Hu,and Z. Wang, “Bag of tricks for training deeper graph neural networks: A comprehensive benchmark study,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 3, pp. 2769–2781, 2023. Y. Wang, S. Tang, Y. Lei, W. Song, S. Wang, and M. 
Zhang, “Disenhan: Disentangled heterogeneous graph attention network for recommendation,” in Proceedings of the 29th ACM International Conference on Information & Knowledge Management, ser. CIKM ’20. New York, NY, USA: Association for Computing Machinery, 2020, p. 1605–1614. [Online]. Available: https: //doi.org/10.1145/3340531.3411996 S. Zhu, C. Zhou, S. Pan, X. Zhu, and W. Xian Bin, “Relation structure-aware heterogeneous graph neural network,” 11 2019. C. Zhang, D. Song, C. Huang, A. Swami, and N. V. Chawla, “Heterogeneous graph neural network,” in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, ser. KDD ’19. New York, NY, USA: Association for Computing Machinery, 2019, p. 793–803. [Online]. Available: https://doi.org/10.1145/3292500.3330961 H. Hong, H. Guo, Y. Lin, X. Yang, Z. Li, and J. Ye, “An attention-based graph neural network for heterogeneous structural learning,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 04, pp. 4132–4139, Apr. 2020. [Online]. Available: https://ojs.aaai.org/index.php/AAAI/article/view/5833 C. Li, Z. Guo, Q. He, and K. He, “Long-range meta-path search on large-scale heterogeneous graphs,” in Advances in Neural Information Processing Systems, A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, Eds., vol. 37. Curran Associates, Inc., 2024, pp. 44240–44268. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2024/file/ 4e392aa9bc70ed731d3c9c32810f92fb-Paper-Conference.pdf A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chin- tala, “Pytorch: An imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. 
Garnett, Eds. Curran Associates, Inc., 2019, pp. 8024–8035. M. Wang, D. Zheng, Z. Ye, Q. Gan, M. Li, X. Song, J. Zhou, C. Ma, L. Yu, Y. Gai, T. Xiao, T. He, G. Karypis, J. Li, and Z. Zhang, “Deep graph library: A graph-centric, highly-performant package for graph neural networks,” arXiv preprint arXiv:1909.01315, 2019. V. P. Dwivedi, C. K. Joshi, T. Laurent, Y. Bengio, and X. Bresson, “Benchmarking graph neural networks,” 2020. E. Abbe, “Community detection and stochastic block models: recent developments,” The Journal of Machine Learning Research, vol. 18, no. 1, pp. 6446–6531, 2017. Z. Ying, D. Bourgeois, J. You, M. Zitnik, and J. Leskovec, “Gnnexplainer: Generating explanations for graph neural networks,” Advances in neural information processing systems, vol. 32, 2019. B. P. Majumder, S. Li, J. Ni, and J. McAuley, “Generating personalized recipes from historical user preferences,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), K. Inui, J. Jiang, V. Ng, and X. Wan, Eds. Hong Kong, China: Association for Computational Linguistics, Nov. 2019, pp. 5976–5982. [Online]. Available: https://aclanthology.org/D19-1613 M. Frid-Adar, E. Klang, M. Amitai, J. Goldberger, and H. Greenspan, “Synthetic data augmentation using GAN for improved liver lesion classification,” CoRR, vol. abs/1801.02385, 2018. [Online]. Available: http://arxiv.org/abs/1801.02385 K. W. Bowyer, N. V. Chawla, L. O. Hall, and W. P. Kegelmeyer, “SMOTE: synthetic minority over-sampling technique,” CoRR, vol. abs/1106.1813, 2011. [Online]. Available: http://arxiv.org/abs/1106.1813 L. Xu, M. Skoularidou, A. Cuesta-Infante, and K. Veeramachaneni, “Modeling tabular data using conditional GAN,” CoRR, vol. abs/1907.00503, 2019. [Online]. Available: http://arxiv.org/abs/1907.00503 A. Figueira and B. 
Vaz, “Survey on synthetic data generation, evaluation methods and gans,” Mathematics, vol. 10, no. 15, 2022. [Online]. Available: https://www.mdpi.com/2227-7390/10/15/2733 H.-W. Dong, W.-Y. Hsiao, L.-C. Yang, and Y.-H. Yang, “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018. T. A. Snijders and K. Nowicki, “Estimation and prediction for stochastic blockmodels for graphs with latent block structure,” Journal of classification, vol. 14, no. 1, pp. 75–100, 1997. A. Tsitsulin, J. Palowitch, B. Perozzi, and E. Müller, “Graph clustering with graph neural networks,” arXiv preprint arXiv:2006.16904, 2020. B. Rozemberczki, P. Englert, A. Kapoor, M. Blais, and B. Perozzi, “Pathfinder discovery networks for neural message passing,” in Proceedings of the Web Conference 2021, 2021, pp. 2547–2558. J. Palowitch, A. Tsitsulin, B. Mayer, and B. Perozzi, “Graphworld: Fake graphs bring real insights for gnns,” in Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022, pp. 3691–3701. D. Luo, W. Cheng, D.Xu, W.Yu, B.Zong, H.Chen, and X.Zhang, “Parameterized explainer for graph neural network,” Advances in neural information processing systems, vol. 33, pp. 19 620–19 631, 2020. H. Yuan, H. Yu, J. Wang, K. Li, and S. Ji, “On explainability of graph neural networks via subgraph explorations,” in International Conference on Machine Learning. PMLR, 2021, pp. 12 241–12 252. W. Lin, H. Lan, H. Wang, and B. Li, “Orphicx: A causality-inspired latent variable model for interpreting graph neural networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 13 729–13 738. H. Yuan, H. Yu, S. Gui, and S. Ji, “Explainability in graph neural networks: A taxonomic survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, pp. 5782–5799, 2020. [Online]. 
Available: https://api.semanticscholar.org/CorpusID:229923402 E. Dai and S. Wang, “Towards self-explainable graph neural network,” in Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021, pp. 302–311. T. Li, J. Deng, Y. Shen, L. Qiu, H. Yongxiang, and C. C. Cao, “Towards fine-grained explainability for heterogeneous graph neural network,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 7, pp. 8640–8647, Jun. 2023. [Online]. Available: https://ojs.aaai.org/index.php/AAAI/article/view/26040 G. Lv, C. J. Zhang, and L. Chen, “Hence-x: Toward heterogeneity-agnostic multi-level explainability for deep graph networks,” Proc. VLDB Endow., vol. 16, no. 11, pp. 2990–3003, Jul. 2023. [Online]. Available: https://doi.org/10.14778/3611479.3611503 A. K. Debnath, R. L. L. de Compadre, G. Debnath, A. J. Shusterman, and C. Hansch, “Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity,” Journal of Medicinal Chemistry, vol. 34, no. 2, pp. 786–797, 1991. [Online]. Available: https://api.semanticscholar.org/CorpusID:19990980 R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon, “Network motifs: Simple building blocks of complex networks,” Science, vol. 298, no. 5594, pp. 824–827, 2002. Y. Chang, C. Chen, W. Hu, Z. Zheng, X. Zhou, and S. Chen, “Megnn: Meta-path extracted graph neural network for heterogeneous graph representation learning,” Knowledge-Based Systems, vol. 235, p. 107611, 2022. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S095070512100873X R. Albert and A.-L. Barabási, “Statistical mechanics of complex networks,” Reviews of Modern Physics, vol. 74, no. 1, p. 47, 2002. A. Tsitsulin, B. Rozemberczki, J. Palowitch, and B. Perozzi, “Synthetic graph generation to benchmark graph learning,” arXiv preprint arXiv:2204.01376, 2022. Y. Li, J. Zhou, S. Verma, and F. 
Chen, “A survey of explainable graph neural networks: Taxonomy and evaluation metrics,” arXiv preprint arXiv:2207.12599, 2022. W. Hu, B. Liu, J. Gomes, M. Zitnik, P. Liang, V. Pande, and J. Leskovec, “Strategies for pre-training graph neural networks,” in International Conference on Learning Representations, 2020. [Online]. Available: https://openreview.net/forum?id=HJlWWJSFDH M. T. Rosenstein, Z. Marx, L. P. Kaelbling, and T. G. Dietterich, “To transfer or not to transfer,” in Neural Information Processing Systems, 2005. [Online]. Available: https://api.semanticscholar.org/CorpusID:597779 S. Darabi, P. Bigaj, D. Majchrowski, A. Kasymov, P. Morkisz, and A. Fit-Florea, “A framework for large scale synthetic graph dataset generation,” 2023. [Online]. Available: https://arxiv.org/abs/2210.01944 P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio, “Graph attention networks,” arXiv preprint arXiv:1710.10903, 2017. T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” arXiv preprint arXiv:1609.02907, 2016. W. Fan, Y. Ma, Q. Li, Y. He, Y. E. Zhao, J. Tang, and D. Yin, “Graph neural networks for social recommendation,” The World Wide Web Conference, 2019. R. Ying, R. He, K. Chen, P. Eksombatchai, W. L. Hamilton, and J. Leskovec, “Graph convolutional neural networks for web-scale recommender systems,” in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, ser. KDD ’18. New York, NY, USA: Association for Computing Machinery, 2018, pp. 974–983. [Online]. Available: https://doi.org/10.1145/3219819.3219890 J. Yang, J. Lu, S. Lee, D. Batra, and D. Parikh, “Graph R-CNN for scene graph generation,” in Computer Vision – ECCV 2018, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, Eds. Cham: Springer International Publishing, 2018, pp. 690–706. R. Ying, J. You, C. 
Morris, X. Ren, W. L. Hamilton, and J. Leskovec, “Hierarchical graph representation learning with differentiable pooling,” in Proceedings of the 32nd International Conference on Neural Information Processing Systems, ser. NIPS’18. Red Hook, NY, USA: Curran Associates Inc., 2018, pp. 4805–4815. S. K. Maurya, X. Liu, and T. Murata, “Graph neural networks for fast node ranking approximation,” ACM Trans. Knowl. Discov. Data, vol. 15, no. 5, May 2021. [Online]. Available: https://doi.org/10.1145/3446217 J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl, “Neural message passing for quantum chemistry,” in Proceedings of the 34th International Conference on Machine Learning - Volume 70, ser. ICML’17. JMLR.org, 2017, pp. 1263–1272. K. Madhawa, K. Ishiguro, K. Nakago, and M. Abe, “GraphNVP: An invertible flow-based model for generating molecular graphs,” 2020. [Online]. Available: https://openreview.net/forum?id=ryxQ6T4YwB Y. Dong, N. V. Chawla, and A. Swami, “Metapath2vec: Scalable representation learning for heterogeneous networks,” in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’17. New York, NY, USA: Association for Computing Machinery, 2017, pp. 135–144. [Online]. Available: https://doi.org/10.1145/3097983.3098036 Z. Hu, Y. Dong, K. Wang, K.-W. Chang, and Y. Sun, “GPT-GNN: Generative pre-training of graph neural networks,” in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, ser. KDD ’20. New York, NY, USA: Association for Computing Machinery, 2020, pp. 1857–1867. [Online]. Available: https://doi.org/10.1145/3394486.3403237 C. Yang, A. Pal, A. Zhai, N. Pancha, J. Han, C. Rosenberg, and J. Leskovec, “Multisage: Empowering GCN with contextualized multi-embeddings on web-scale multipartite networks,” in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, ser. KDD ’20. 
New York, NY, USA: Association for Computing Machinery, 2020, pp. 2434–2443. [Online]. Available: https://doi.org/10.1145/3394486.3403293 C. Zhang, D. Song, C. Huang, A. Swami, and N. V. Chawla, “Heterogeneous graph neural network,” in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, ser. KDD ’19. New York, NY, USA: Association for Computing Machinery, 2019, pp. 793–803. [Online]. Available: https://doi.org/10.1145/3292500.3330961 S. Zhu, C. Zhou, S. Pan, X. Zhu, and B. Wang, “Relation structure-aware heterogeneous graph neural network,” in 2019 IEEE International Conference on Data Mining (ICDM), 2019, pp. 1534–1539. H. Hong, H. Guo, Y. Lin, X. Yang, Z. Li, and J. Ye, “An attention-based graph neural network for heterogeneous structural learning,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 04, pp. 4132–4139, Apr. 2020. [Online]. Available: https://ojs.aaai.org/index.php/AAAI/article/view/5833 X. Wang, D. Bo, C. Shi, S. Fan, Y. Ye, and P. S. Yu, “A survey on heterogeneous graph embedding: Methods, techniques, applications and sources,” IEEE Transactions on Big Data, vol. 9, pp. 415–436, 2020. J. Zhao, X. Wang, C. Shi, B. Hu, G. Song, and Y. Ye, “Heterogeneous graph structure learning for graph neural networks,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 5, pp. 4697–4705, May 2021. [Online]. Available: https://ojs.aaai.org/index.php/AAAI/article/view/16600 T. Chen, K. Zhou, K. Duan, W. Zheng, P. Wang, X. Hu, and Z. Wang, “Bag of tricks for training deeper graph neural networks: A comprehensive benchmark study,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. O. Shchur and S. Günnemann, “Overlapping community detection with graph neural networks,” arXiv preprint arXiv:1909.12201, 2019. Z. Cui, X. Xu, X. Fei, X. Cai, Y. Cao, W. Zhang, and J. 
Chen, “Personalized recommendation system based on collaborative filtering for iot scenarios,” IEEE Transactions on Services Computing, vol. 13, no. 4, pp. 685–695, 2020. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97992 | - |
| dc.description.abstract | 隨著電子商務與社交網路的快速發展,並受到多樣化數據模式的驅動,推薦系統的演進顯著加速。傳統的序列推薦模型主要專注於使用者的互動序列,但往往難以完整捕捉使用者偏好以及新興趨勢的影響。隨著使用者互動形式日益多樣化,包括文字、圖像和影片,對多模態模型的需求也愈發迫切。這些模型能夠進行更豐富的特徵提取與跨模態理解,有效縮小個人化內容推薦的落差。同時,社交網路正在重新塑造數位互動方式,對複雜互動關係的深度理解成為關鍵。圖神經網路作為強大的工具,能夠有效地對這些複雜的圖結構資料進行建模。然而,在異質資訊網路中,如何有效使用並整合各種類型的節點與連線資訊,為此領域帶來許多挑戰。設計一個穩健且通用的異構圖神經網路是一個重要且廣泛追求的研究目標。此外,隨著圖神經網路在社群分析中的廣泛使用,對模型的解釋與理解也變得越來越重要,針對模型解釋性的研究也應運而生,對可靠基準圖資料集的需求隨之增加,而異質資訊網路資料集尤其匱乏。
為了解決上述問題:第一、我們提出了一種時間與情境趨勢感知的Transformer模型,能夠捕捉時間序列、情境點擊行為以及新興趨勢,以優化點擊率(CTR)的預測。第二、我們透過時間對齊共享標籤框架,整合多模態資訊,包括文字、圖像與價格,實現序列推薦的進一步提升,確保時間一致性並改善推薦準確度。第三、針對圖結構的表徵學習任務,我們提出了一種類型感知的異質圖自動編碼器及增強策略,透過優化邊的連接與強化抗噪能力來提升表徵學習效果。第四、我們系統性地生成帶有解釋性答案的合成異質資訊網路,作為評估圖學習與解釋性的強健基準,並解決多樣化異質資料集匱乏的問題。 透過我們的創新設計,我們推動了推薦系統深度表徵學習的邊界,充分利用序列與多模態資訊。此外,我們成功實現了對多樣化異質資訊網路的有效生成與半監督學習,有效解決異質資訊網路資料集匱乏的問題,並增強異構圖神經網路的學習能力。 | zh_TW |
| dc.description.abstract | The rapid growth of e-commerce and social networks has propelled the evolution of recommendation systems, driven by the availability of diverse data modalities. Traditional sequential recommendation models primarily focus on user interaction sequences, yet they often fall short in capturing the full complexity of user preferences and emerging trends. As user interactions diversify—encompassing text, images, and videos—the need for multimodal models becomes critical. These models enable richer feature extraction and cross-modal understanding, bridging the gap in personalized content delivery. Simultaneously, social networks are reshaping digital interactions, demanding a deeper understanding of complex interaction relationships. Graph Neural Networks (GNNs) have emerged as powerful tools for modeling these intricate structures, enhancing representation learning and prediction accuracy. However, the effective use and integration of various node and edge types in heterogeneous information networks (HINs) can be challenging. Designing a robust and general heterogeneous GNN is a vital and widely pursued research objective. Furthermore, as the interpretation and understanding of GNNs become more important, there is a growing need for reliable benchmarks and extensive graph datasets, especially in the context of HINs.
To address the above issues, first, we propose a temporal and contextual trend-aware transformer model that captures temporal and contextual click behaviors and emerging trends to optimize click-through rate (CTR) prediction. Second, we leverage a Time-aligned Shared Token (TST) framework to integrate multimodal information, including text, images, and prices, achieving further improvements in sequential recommendation, ensuring temporal consistency, and enhancing recommendation accuracy. Third, for graph-based representation learning tasks, we propose a type-aware heterogeneous graph autoencoder and augmentation strategy that enhances representation learning by optimizing edge connections and strengthening noise resistance. Fourth, we systematically generate synthetic HINs with explanation ground truths to serve as robust benchmarks for evaluating graph learning and explainability, addressing the scarcity of diverse heterogeneous datasets. Through our innovative design, we push the boundaries of deep representation learning for recommendation systems, leveraging sequential and multimodal information. Furthermore, we achieve effective generation and semi-supervised learning for diverse HINs, addressing the scarcity of HIN datasets and enhancing the learning capabilities of heterogeneous GNNs (HGNNs). | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-07-23T16:22:51Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-07-23T16:22:51Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 口試委員審定書 i
誌謝 iii 摘要 v Abstract vii Contents ix List of Figures xv List of Tables xvii Chapter 1 Introduction 1 Chapter 2 Temporal and Contextual Trend-aware Transformer Framework 5 2.1 Challenges and Contributions 5 2.2 Related Works 7 2.2.1 CTR Prediction 7 2.2.2 Temporal Information Extraction 8 2.2.3 Contextual Information Extraction 9 2.3 Proposed Method: Push4Rec 10 2.3.1 Temporal Information Extractor (TIE) 11 2.3.2 Contextual Information Extractor (CIE) 13 2.3.3 Temporal/Contextual Interest Learner (TIL, CIL) 14 2.3.4 Trend-aware Learner 15 2.3.5 MLP Layers and Loss Function 16 2.4 Experiments 16 2.4.1 Datasets 16 2.4.2 Baselines 17 2.4.3 Parameter Settings 17 2.4.4 Evaluation Metrics 17 2.4.4.1 Area Under ROC Curve (AUROC) 17 2.4.4.2 Area Under the Precision-Recall Curve (AUPRC) 18 2.4.4.3 F1-score 18 2.5 Results and Discussion 19 2.5.1 Performance Comparison 19 2.5.2 Cross-Dataset Evaluation 20 2.5.3 Visualization of Gating Network Weights 21 2.5.4 Different Fusion Functions between TIL and CIL 21 2.5.5 Ablation Study of Learners 22 2.5.6 Comparing BERT and GPT-2 Embeddings 23 2.6 Summary 23 Chapter 3 Multimodal Time-Aligned Shared Token Framework 25 3.1 Challenges and Contributions 25 3.2 Related Works 28 3.2.1 Sequential Recommendation Systems 28 3.2.2 Multimodal Recommendation Systems 29 3.2.3 Temporal Dynamics in Recommendation Systems 31 3.3 Proposed Method: MTSTRec 31 3.3.1 Preliminaries 32 3.3.2 Feature Extractors 33 3.3.2.1 ID Extractor 33 3.3.2.2 Style Extractor 33 3.3.2.3 Text Extractor 35 3.3.2.4 Prompt-Text Extractor 35 3.3.2.5 Price Extractor 38 3.3.3 Multimodal Transformer with Time-aligned Shared Token Fusion 39 3.3.3.1 Self-Attention Encoder 39 3.3.3.2 Time-aligned Shared Token Fusion with Sequential Multimodal Integration 40 3.3.4 Loss Function 42 3.4 Experiments 43 3.4.1 Experimental Settings 43 3.4.2 Hyper-Parameter Setting 48 3.5 Results and Discussion 49 3.5.1 Performance Comparison 49 3.5.2 Ablation Study of Modalities 51 3.5.3 Impact of Fusion 
Strategies 53 3.5.4 Impact of Time-aligned Shared Tokens 54 3.5.5 Complexity and Runtime Analysis 55 3.5.6 Comparison of prompt strategies and LLMs 56 3.5.7 Use of Modality-Specific CLOZE Tokens (zcz) 57 3.5.8 Excluding the Shared Token (zsh) from Final Output 58 3.6 Summary 58 Chapter 4 Type-Aware Heterogeneous Graph Autoencoder Combined with Graph Augmentation 61 4.1 Challenges and Contributions 61 4.2 Related Works 64 4.2.1 Heterogeneous Graph Neural Networks 64 4.2.2 Graph AutoEncoders 65 4.2.3 Graph Data Augmentation 66 4.3 Proposed Method: THeGAU 68 4.3.1 Preliminaries 68 4.3.2 Type-aware Heterogeneous Graph AutoEncoder and Augmentation Framework 68 4.3.3 Heterogeneous Graph Encoder 69 4.3.4 Type-aware Graph Decoder (TGD) 70 4.3.5 Target Node Feature-based Classifier (FBC) 73 4.3.6 Heterogeneous Graph Classifier (HGC) 74 4.3.7 Joint Learning 74 4.3.8 Type-aware Graph Augmentation (TG-Aug) 75 4.4 Experimental Settings 77 4.4.1 Datasets 77 4.4.2 Evaluation Metrics 77 4.4.3 Baselines 78 4.4.4 Environments and Parameter Settings 81 4.4.5 Model Parameters 81 4.5 Results and Discussion 82 4.5.1 The Performance of the THeGAU 82 4.5.2 Ablation Studies 84 4.5.3 The Design of the Type-Aware Graph Decoder 85 4.5.4 Parameter Sensitiveness 86 4.5.5 Complexity of THeGAU 87 4.6 Summary 88 Chapter 5 Generation of Heterogeneous Information Networks 89 5.1 Challenges and Contributions 89 5.2 Related Works 90 5.2.1 Synthetic Graph Generation 90 5.2.2 Explainer for Graph Neural Networks 91 5.2.3 Graph Datasets with Ground-Truth Explanations 92 5.3 Proposed Method: SynHING 93 5.3.1 Preliminaries 93 5.3.2 Overview of SynHING 93 5.3.3 Major Motif Generation (MMG) 94 5.3.4 Base Subgraph Generation (BSG) 95 5.3.5 Merge to Generate HINs 95 5.3.5.1 Intra-Cluster Merge (Intra-CM) 97 5.3.5.2 Inter-Cluster Merge (Inter-CM) 98 5.3.6 Node Feature Generation (NFG) 100 5.3.7 Post-Pruning (P-P) 100 5.3.8 Complexity and Scalability of SynHING 101 5.3.8.1 SynHING’s Complexity and Scalability 101 
5.4 Experimental Settings 103 5.4.1 Datasets and HGNNs 103 5.4.2 Evaluation Metrics 104 5.4.3 Benchmark Heterogeneous Graph Neural Networks 106 5.4.4 Computing Resources 106 5.5 Results and Discussion 107 5.5.1 Cluster Exclusion Controls Enable Structured Benchmarking of HGNNs 107 5.5.2 Fidelity Trends Reveal the Explanatory Power of Major Motifs 109 5.5.3 Ablation Studies 111 5.5.4 Synthetic Graph Pretraining Leads to Positive Transfer in Real HIN Tasks 112 5.5.5 Approximating Referenced Graph 114 5.5.5.1 SynHING’s Empirical Runtime 116 5.5.6 SynHING Supports the Evaluation of HGNN Explanation Methods 117 5.6 Summary 118 Chapter 6 Conclusion and Future Works 119 Bibliography 123 | - |
| dc.language.iso | en | - |
| dc.subject | 序列推薦 | zh_TW |
| dc.subject | 可解釋人工智慧 | zh_TW |
| dc.subject | 合成圖產生 | zh_TW |
| dc.subject | 圖資料增強 | zh_TW |
| dc.subject | 異構圖自動編碼器 | zh_TW |
| dc.subject | 圖神經網路 | zh_TW |
| dc.subject | 異質資訊網路 | zh_TW |
| dc.subject | 大型語言模型 | zh_TW |
| dc.subject | 時間對齊共享標籤 | zh_TW |
| dc.subject | 多模態序列推薦 | zh_TW |
| dc.subject | 趨勢感知 | zh_TW |
| dc.subject | 時間和語義訊息 | zh_TW |
| dc.subject | 推薦系統 | zh_TW |
| dc.subject | Explainable Artificial Intelligence | en |
| dc.subject | Recommendation Systems | en |
| dc.subject | Sequential Recommendation | en |
| dc.subject | Temporal and Contextual Information | en |
| dc.subject | Trend-aware | en |
| dc.subject | Multimodal Sequential Recommendation | en |
| dc.subject | Time-aligned Shared Token | en |
| dc.subject | Large Language Model | en |
| dc.subject | Heterogeneous Information Network | en |
| dc.subject | Graph Neural Network | en |
| dc.subject | Heterogeneous Graph Autoencoder | en |
| dc.subject | Graph Data Augmentation | en |
| dc.subject | Synthetic Graph Generation | en |
| dc.title | 推動推薦系統的深度表徵學習:從序列到多模態與圖結構數據 | zh_TW |
| dc.title | Advancing Deep Representation Learning for Recommendation: From Sequential to Multimodal and Graph Structure Data | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-2 | - |
| dc.description.degree | 博士 | - |
| dc.contributor.coadvisor | 王志宇 | zh_TW |
| dc.contributor.coadvisor | Chih-Yu Wang | en |
| dc.contributor.oralexamcommittee | 李宏毅;古倫維;王釧茹;蔡銘峰 | zh_TW |
| dc.contributor.oralexamcommittee | Hung-Yi Lee;Lun-Wei Ku;Chuan-Ju Wang;Ming-Feng Tsai | en |
| dc.subject.keyword | 推薦系統,序列推薦,時間和語義訊息,趨勢感知,多模態序列推薦,時間對齊共享標籤,大型語言模型,異質資訊網路,圖神經網路,異構圖自動編碼器,圖資料增強,合成圖產生,可解釋人工智慧 | zh_TW |
| dc.subject.keyword | Recommendation Systems,Sequential Recommendation,Temporal and Contextual Information,Trend-aware,Multimodal Sequential Recommendation,Time-aligned Shared Token,Large Language Model,Heterogeneous Information Network,Graph Neural Network,Heterogeneous Graph Autoencoder,Graph Data Augmentation,Synthetic Graph Generation,Explainable Artificial Intelligence | en |
| dc.relation.page | 150 | - |
| dc.identifier.doi | 10.6342/NTU202501410 | - |
| dc.rights.note | 同意授權(全球公開) | - |
| dc.date.accepted | 2025-07-03 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 資料科學學位學程 | - |
| dc.date.embargo-lift | 2030-06-30 | - |
| Appears in Collections: | Data Science Degree Program | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-113-2.pdf (available online after 2030-06-30) | 11.08 MB | Adobe PDF |
