Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/43192
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳信希(Hsin-Hsi Chen) | |
dc.contributor.author | Ming-Feng Tsai | en |
dc.contributor.author | 蔡銘峰 | zh_TW |
dc.date.accessioned | 2021-06-15T01:41:48Z | - |
dc.date.available | 2009-07-16 | |
dc.date.copyright | 2009-07-16 | |
dc.date.issued | 2009 | |
dc.date.submitted | 2009-07-13 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/43192 | - |
dc.description.abstract | This thesis investigates how machine learning techniques can be used to improve the effectiveness of information retrieval.
In the first part, we propose a novel machine learning algorithm for the ranking problem in information retrieval, which we call Fidelity Rank (FRank). The method introduces a fidelity loss function with properties that make ranking more accurate, such as a slow-growing loss and the ability of every document pair to reach its minimum loss. Experimental results confirm that FRank outperforms other methods on both conventional information retrieval and Web search problems.
In the second part, we apply the FRank algorithm to the well-known merging problem in multilingual information retrieval. To our knowledge, this is the first application of a learning-based ranking algorithm to MLIR merging. We propose a number of features that may affect merging effectiveness; the experiments show that the learned merge model greatly improves merging results, and the model further helps identify the key features that truly influence merging performance.
Finally, in the third part, we extend existing learning algorithms with another desirable property: diversity. For ambiguous queries, users generally prefer results that cover different topics. We therefore incorporate diversity into the learning process and propose a method called the Two-step Ranking SVM, which combines support vector classification and support vector regression to increase the diversity of retrieved results while preserving their quality. Experimental results show that the proposed approach indeed maintains ranking quality while broadening the topical coverage of the retrieved results. | zh_TW |
dc.description.abstract | Learning to rank is becoming important in many fields, especially in information retrieval. In this thesis, a novel learning-based ranking algorithm, Fidelity Rank (FRank), is first proposed to learn an effective ranking function. FRank not only inherits the useful properties of the probabilistic ranking framework, but also possesses new properties helpful for ranking, including a slow-growing loss and the ability of the loss to reach zero for each document pair. The results demonstrate that FRank outperforms other ranking algorithms on the conventional IR problem as well as on Web search.
Then, we apply the FRank algorithm to enhance merging quality in multilingual information retrieval (MLIR). To the best of our knowledge, this is the first attempt to use a learning-based ranking algorithm to construct a merge model for MLIR merging. The experimental results show that the merge model constructed by FRank significantly improves merging quality. Beyond effectiveness, the merge model lets us identify the key factors that influence the merging process; this information provides more insight into and understanding of MLIR merging. Finally, we investigate how to extend learning-based ranking techniques with a further desirable property: diversity. For an ambiguous query, if there is no further information about the user's intention, an IR system should provide a ranked list of documents covering all plausible interpretations. For this diversification problem, we propose a two-step Ranking SVM technique, in which support vector classification and support vector regression are applied in turn to enhance diversity while maintaining ranking quality. According to the experimental results, the two-step learning technique not only maintains ranking quality, but also broadens the topical coverage of the retrieved results. | en |
dc.description.provenance | Made available in DSpace on 2021-06-15T01:41:48Z (GMT). No. of bitstreams: 1 ntu-98-D92922003-1.pdf: 1547164 bytes, checksum: aa0f1f175d31c5f7a2ee8ce9b74fbde8 (MD5) Previous issue date: 2009 | en |
dc.description.tableofcontents | Thesis Committee Certification
Acknowledgements
Abstract (Chinese)
Abstract
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Motivation
1.2 Overview and Contributions
1.3 Organization
Chapter 2 Fidelity Ranking Algorithm
2.1 Introduction
2.2 Related Work
2.3 Fidelity Loss Function
2.3.1 Probabilistic Ranking Framework
2.3.2 Fidelity Loss Function
2.4 Fidelity Rank Algorithm
2.5 Experimental Setup
2.5.1 Evaluation Measures
2.5.2 Experimental Datasets
2.5.3 Experimental Settings and Baselines
2.6 Experiments on LETOR Benchmark Dataset
2.6.1 OHSUMED Data Collection
2.6.2 TD2003 and TD2004 Data Collections
2.6.3 Comparison between Fidelity Loss and Cross Entropy Loss
2.6.4 Discussions
2.7 Experiments on Web Search Dataset
2.7.1 The Training Phase of FRank
2.7.2 Performance Comparisons
2.7.3 Experiments on Different Sizes of Training Sets
2.7.4 Discussions
2.8 Summary
Chapter 3 Learning to Merge
3.1 Introduction
3.2 Related Work
3.3 Learning a Merge Model
3.3.1 Feature Set
3.3.2 The Construction of a Merge Model
3.4 Data Collections and Evaluation Metric
3.5 Experiments of Learning a Merging Model
3.5.1 Experimental Settings
3.5.2 Experimental Results
3.5.3 Discussions
3.6 Feature Analysis
3.6.1 Feature Analysis via Learned Merge Model
3.6.2 Further Experiments
3.7 Summary
Chapter 4 Relevancy and Diversity
4.1 Introduction
4.2 Related Work
4.3 Methodology
4.3.1 Support Vector Classification
4.3.2 Support Vector Regression
4.3.3 Two-step Ranking SVM
4.4 Experiment Settings
4.4.1 Data Set
4.4.2 Evaluation Metrics
4.5 Experimental Results
4.6 Summary
Chapter 5 Conclusions
Bibliography | |
dc.language.iso | en | |
dc.title | 以機器學習研究資訊檢索之排序問題 | zh_TW |
dc.title | Learning to Rank for Information Retrieval | en |
dc.type | Thesis | |
dc.date.schoolyear | 97-2 | |
dc.description.degree | 博士 | |
dc.contributor.oralexamcommittee | 傅楸善(Chiou-Shann Fuh),張俊盛(Jason S. Chang),吳宗憲(Chung-Hsien Wu),梁婷(Tyne Liang),陳克健(Keh-Jiann Chen),曾元顯(Yuen-Hsien Tseng) | |
dc.subject.keyword | 資訊檢索,機器學習, | zh_TW |
dc.subject.keyword | Learning to Rank,Information Retrieval,Machine Learning, | en |
dc.relation.page | 98 | |
dc.rights.note | 有償授權 | |
dc.date.accepted | 2009-07-14 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
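The fidelity loss named in the abstracts is, in the FRank formulation, the fidelity measure (borrowed from quantum physics) applied to the pairwise ranking probability of the RankNet-style probabilistic framework. A minimal Python sketch of that loss follows; the function name and the logistic pairwise probability are illustrative assumptions here, not code from the thesis itself:

```python
import math

def fidelity_loss(score_i, score_j, target=1.0):
    """Pairwise fidelity loss between two documents' ranking scores.

    target: desired probability that document i ranks above document j
            (1.0 when i is preferred, 0.0 when j is preferred).
    """
    o = score_i - score_j                       # pairwise score difference
    p = 1.0 / (1.0 + math.exp(-o))              # logistic pairwise probability
    # Fidelity loss: exactly 0 when p == target; bounded above by 1,
    # so the penalty grows slowly even for badly mis-ordered pairs.
    return 1.0 - (math.sqrt(target * p) +
                  math.sqrt((1.0 - target) * (1.0 - p)))

# A correctly ordered pair incurs lower loss than the inverted pair.
assert fidelity_loss(2.0, 0.0) < fidelity_loss(0.0, 2.0)
# The loss stays bounded in [0, 1] even for extreme score gaps.
assert 0.0 <= fidelity_loss(100.0, 0.0) <= 1.0
```

Because both square-root terms are bounded, the loss saturates at 1 rather than growing without bound, and it reaches exactly 0 when the predicted pairwise probability matches the target, which is how the abstract's "slow-growing loss" and "zero loss for each document pair" properties arise.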
Appears in Collections: | Department of Computer Science and Information Engineering
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-98-1.pdf (currently restricted; not publicly accessible) | 1.51 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.