請用此 Handle URI 來引用此文件: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/75023
完整後設資料紀錄
DC 欄位 | 值 | 語言
dc.contributor.advisor | 陳光華 | zh_TW
dc.contributor.author | Yu-ting Chiang | en
dc.contributor.author | 江玉婷 | zh_TW
dc.date.accessioned | 2021-07-01T08:11:26Z
dc.date.available | 2021-07-01T08:11:26Z
dc.date.issued | 1999
dc.identifier.citation | 參考文獻
一、中文部分
(一)圖書
王石番。傳播內容分析法:理論與實證。臺北市:幼獅文化,民國80年。
李卓偉。統計學。臺北市:智勝文化,民國82年。
陳景堂。統計分析:SPSS for Windows入門與應用。臺北市:儒林圖書,民國85年。
黃慕萱。資訊檢索中「相關」概念之研究。臺北市:台灣學生,民國85年。
黃慕萱。資訊檢索。臺北市:台灣學生,民國85年。
(二)期刊論文
黃雪玲。「資訊需求者與次判斷者相關判斷之比較研究」。國立臺灣大學圖書館學研究所,碩士論文,民國84年6月。
黃慕萱。「資訊檢索中『相關』概念之發展」。圖書館學刊第12期(民國86年12月),頁39-62。
黃慕萱。「檢索系統評估之發展-理論與實務」。中國圖書館學會學報第59期(民國86年12月),頁109-126。
二、外文部分
(一)圖書
Bawden, David. User-oriented Evaluation of Information Systems and Services. Aldershot: Gower, 1990.
Saracevic, Tefko. Introduction to Information Science. New York: Bowker, 1970.
Siegel, Sidney. Nonparametric Statistics for the Behavioral Sciences. New York: McGraw-Hill, 1988.
Sparck Jones, Karen. Information Retrieval Experiment. London; Boston: Butterworths, 1981.
Van Rijsbergen, C. J. Information Retrieval. London; Boston: Butterworths, 1975.
(二)期刊論文
Agosti, Maristella. The Positive and Negative Effects of TREC. In Proceedings of the Second Mira Workshop, Monselice, Italy, November 14-15, 1996, edited by M.D. Dunlop. <http://www.dcs.gla.ac.uk/mira/workshops/padua_procs> (Oct 5, 1998)
Beaulieu, Micheline, Stephen E. Robertson, and Edie M. Rasmussen. Evaluating Interactive Systems in TREC. Journal of the American Society for Information Science 47, no. 1 (1996): 85-94.
Belkin, N. J., J. A. Shaw, Edward A. Fox, and P. Kantor, eds. Combining the Evidence of Multiple Query Representations for Information Retrieval. Information Processing & Management 31, no. 3 (1995): 431-448.
Blair, David C. STAIRS Redux: Thoughts on the STAIRS Evaluation, Ten Years After. Journal of the American Society for Information Science 47, no. 1 (1996): 4-22.
Bookstein, A. Informetric Distributions III. Ambiguity and Randomness. Journal of the American Society for Information Science 48, no. 1 (1997): 2-10.
Borlund, Pia and Peter Ingwersen. The Development of a Method for the Evaluation of Interactive Information Retrieval Systems. Journal of Documentation 53, no. 3 (1997): 225-250.
Borlund, Pia. Simulated Information Needs: A Practical Approach. In Proceedings of the 6th Mira Workshop, Dublin, October 28-30, 1998. <http://www.dcs.gla.ac.uk/mira/workshops/dublin/procs/borlund/> (Nov. 5, 1998)
Burgin, Robert. Variations in Relevance Judgements and the Evaluation of Retrieval Performance. Information Processing and Management 28, no. 5 (1992): 619-627.
Callan, James P. and W. Bruce Croft. An Evaluation of Query Processing Strategies Using the TIPSTER Collection. In Proceedings of the 16th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, Pittsburgh, USA, June 27-July 1, 1993, 347-355.
Carletta, Jean. Assessing Agreement on Classification Tasks: The Kappa Statistic. Computational Linguistics 22, no. 2 (1996): 249-254.
Cleverdon, Cyril W. The Cranfield Tests on Index Language Devices. Aslib Proceedings 19, no. 6 (1967): 173-194.
Cleverdon, Cyril W. The Significance of the Cranfield Tests on Index Languages. In Proceedings of the 14th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, Chicago, IL, October 13-16, 1991, 3-12.
Denos, Nathalie. Non-Binary Relevance Judgements Criteria and Data Quality Criteria. In Proceedings of the 6th Mira Workshop, Dublin, October 28-30, 1998. <http://www.dcs.gla.ac.uk/mira/workshops/dublin/procs/denos.pdf> (Nov. 5, 1998)
Eisenberg, Michael B. Measuring Relevance Judgements. Information Processing and Management 24, no. 4 (1988): 373-389.
Eisenberg, Michael B. and X. Hu. Dichotomous Relevance Judgements and the Evaluation of Information Systems. In Proceedings of the American Society for Information Science Annual Meeting, 24, 1988, 66-70.
Ellis, David. The Dilemma of Measurement in Information Retrieval Research. Journal of the American Society for Information Science 47, no. 1 (1996): 23-36.
Evaluation Techniques and Measures. In The Sixth Text REtrieval Conference (TREC-6), Gaithersburg, Maryland, November 19-21, 1997, edited by Ellen M. Voorhees and Donna K. Harman. <http://trec.nist.gov/pubs/trec6/t6_proceedings.html> (Aug. 26, 1998)
Fox, Edward A. Characteristics of Two New Experimental Collections in Computer and Information Science Containing Textual and Bibliographic Concepts. Technical Report TR 83-561, Cornell University: Computing Science Department, 1983. <http://cs-tr.cs.cornell.edu:80/Dienst/UI/1.0/Display/ncstrl.cornell/TR83-561> (Nov. 30, 1998)
Harman, Donna K. Evaluation Issues in Information Retrieval. Information Processing and Management 28, no. 4 (1992): 439-440.
Harman, Donna K. The First Text REtrieval Conference (TREC-1). Information Processing and Management 29, no. 4 (1993): 411-414.
Harman, Donna K. Overview of the Third Text REtrieval Conference (TREC-3). In The Third Text REtrieval Conference (TREC-3), Gaithersburg, Maryland, November 2-4, 1994, edited by Donna K. Harman. <http://trec.nist.gov/pubs/trec3/papers/overview.ps> (Aug. 26, 1998)
Harman, Donna K. The Second Text REtrieval Conference (TREC-2). Information Processing and Management 31, no. 3 (1995): 269-270.
Harman, Donna K. Overview of the Second Text Retrieval Conference (TREC-2). Information Processing and Management 31, no. 3 (1995): 271-289.
Harman, Donna K. Overview of the Fourth Text REtrieval Conference (TREC-4). In The Fourth Text REtrieval Conference (TREC-4), Gaithersburg, Maryland, November 1-3, 1995, edited by Donna K. Harman. <http://trec.nist.gov/pubs/trec4/papers/overview.ps> (Aug. 26, 1998)
Harman, Donna K. Panel: Building and Using Test Collections. In Proceedings of the 19th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, Zurich, Switzerland, August 18-22, 1996, 335-337.
Harman, Donna K. The TREC Conference. In Readings in Information Retrieval, edited by Karen Sparck Jones and Peter Willett, 247-256. San Francisco: Morgan Kaufmann Publishers, Inc., 1997.
Harman, Donna K. The Text REtrieval Conferences (TRECs): Providing a Test-Bed for Information Retrieval Systems. Bulletin of the American Society for Information Science 24, no. 4 (1998): 11-13.
Harter, Stephen P. The Cranfield II Relevance Assessments: A Critical Evaluation. Library Quarterly 41, no. 3 (1971): 229-243.
Harter, Stephen P. Variations in Relevance Assessments and the Measurement of Retrieval Effectiveness. Journal of American Society for Information Science 47, no. 1 (1996): 37-49.
Harter, Stephen P. and Carol A. Hert. Evaluation of Information Retrieval Systems: Approaches, Issues, and Methods. In Annual Review of Information Science and Technology (ARIST), 32, edited by Martha E. Williams, 3-94. New York: Interscience Publishers, 1997.
Hersh, William. OHSUMED: An Interactive Evaluation and New Large Test Collection for Research. In Proceedings of the 17th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, Dublin, Ireland, July 3-6, 1994, 192-201.
Ingwersen, Peter. Polyrepresentation of Information Needs and Semantic Entities. In Proceedings of the 17th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, Dublin, Ireland, July 3-6, 1994, 101-110.
Ingwersen, Peter. Shortcomings of Interactive TREC. In Proceedings of the 6th Mira Workshop, Dublin, October 28-30, 1998. <http://www.dcs.gla.ac.uk/mira/workshops/dublin/procs/ingwersen/> (Nov. 30, 1998)
Janes, Joseph W. The Binary Nature of Continuous Relevance Judgements: A Study of Users' Perceptions. Journal of the American Society for Information Science 42, no. 10 (1991): 754-756.
Janes, Joseph W. and R. McKinney. Relevance Judgements of Actual Users and Secondary Judges: A Comparative Study. The Library Quarterly 62, no. 2 (1992): 150-168.
Janes, Joseph W. Other People's Judgements: A Comparison of Users' and Others' Judgements of Document Relevance, Topicality, and Utility. Journal of the American Society for Information Science 45, no. 3 (1994): 160-171.
Kageura, K. and others, eds. NACSIS Corpus Project for IR and Terminological Research. In Natural Language Processing Pacific Rim Symposium '97, Phuket, Thailand, December 2-5, 1997, 493-496.
Kando, N. and others, eds. NTCIR: NACSIS Test Collection Project. In The 20th Annual Colloquium of BCS-IRSG in Autrans, France, March 25-27, 1997. <http://www.rd.nacsis.ac.jp/~ntcadm/index-en.html> (Oct. 31, 1998)
Katzer, Jeffery and Herbert Snyder. Toward a More Realistic Assessment of Information Retrieval Performance. In Proceedings of the 53rd American Society for Information Science Annual Meeting, 27, Toronto, Canada, November 4-8, 1990, 80-85.
Kitani, Tsuyoshi and others, eds. BMIR-J2: A Test Collection for Evaluation of Japanese Information Retrieval Systems. In Proceedings of IPSJ SIG Notes, DBS-114-3, 1998, 15-22.
Kitani, Tsuyoshi and others, eds. Lessons from BMIR-J2: A Test Collection for Japanese IR Systems. In Proceedings of the 21st ACM-SIGIR International Conference on Research and Development in Information Retrieval, Melbourne, Australia, August 24-28, 1998, 345-346.
Lesk, M. E. and Gerard Salton. Relevance Assessments and Retrieval System Evaluation. Information Storage and Retrieval 4, no. 4 (1969): 343-359.
Lewis, David D. The TREC-5 Filtering Track. In The Fifth Text REtrieval Conference (TREC-5), Gaithersburg, Maryland, November 20-22, 1996, edited by Ellen M. Voorhees and Donna K. Harman. <http://trec.nist.gov/pubs/trec5/papers/filtering.ps> (Aug. 26, 1998)
Matsui and others, eds. Test Collection for Information Retrieval Systems from the Viewpoint of Evaluation System Functions. In Proceedings of International Workshop on Information Retrieval with Oriental Languages, 1996, 42-47.
Over, Paul. Presentation on The TREC Interactive Track: An Overview. In Proceedings of the 6th Mira Workshop, Dublin, October 28-30, 1998. <http://www.itl.nist.gov/div894/894.02/works/presentations/dublin98/index.htm> (Nov. 5, 1998)
Pao, M. L. Term and Citation Retrieval: A Field Study. Information Processing and Management 29, no. 1 (1993): 95-112.
Parsons, Simon and E. H. Mamdani. Qualitative Dempster-Shafer Theory. In Proceedings of the III Imacs International Workshop on Qualitative Reasoning and Decision Technologies, Barcelona, June 1993.
Raghavan, V.V., P. Bollmann, and G.S. Jung. Retrieval System Evaluation Using Recall and Precision: Problems and Answers. In Proceedings of the 12th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, Cambridge, Massachusetts, USA, June 25-28, 1989, 59-68.
Reid, Jane and Stefano Mizzaro. On the Consensus between Relevance Judges in a Multi-media Context. In Proceedings of the 6th Mira Workshop, Dublin, October 28-30, 1998. <http://www.dcs.gla.ac.uk/mira/workshops/dublin/procs/rir.pdf> (Nov. 5, 1998)
Reid, Jane, Mounia Lalmas, and Ian Ruthven. Combining Binary and Weighted Relevance Judgements for IR system Evaluation. In Proceedings of the 6th Mira Workshop, Dublin, October 28-30, 1998. <http://www.dcs.gla.ac.uk/mira/workshops/dublin/procs/rir.pdf> (Nov. 5, 1998)
Reid, Jane. A Task Oriented Test Collection. In Proceedings of the 6th Mira Workshop, Dublin, October 28-30, 1998. <http://www.dcs.gla.ac.uk/mira/workshops/dublin/procs/reid.pdf> (Nov. 5, 1998)
Robertson, S. E. and Micheline Beaulieu. On the Evaluation of IR Systems. Information Processing and Management 28, no. 4 (1992): 457-466.
Robertson, S. E. and Micheline Beaulieu. Research and Evaluation in Information Retrieval. Journal of Documentation 53, no. 1 (1997): 51-57.
Salton, Gerard. A New Comparison between Conventional Indexing (MEDLARS) and Automatic Text Processing (SMART). Journal of the American Society for Information Science 23, no. 1 (1972): 75-84.
Salton, Gerard. The State of Retrieval System Evaluation. Information Processing and Management 28, no. 4 (1992): 441-449.
Saracevic, T., P. B. Kantor, A. Y. Chamis, and D. Trivison. A Study of Information Seeking and Retrieving: Part I. Background and Methodology. Journal of the American Society for Information Science 39, no. 3 (1988): 161-176.
Saracevic, T. Evaluation of Evaluation in Information Retrieval. In Proceedings of the 18th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, Seattle, WA, July 9-13, 1995, 138-146.
Saracevic, T. Relevance Reconsidered. In Proceedings of COLIS 2: Second International Conference on Conceptions of Library and Information Science: Integration in Perspective, Copenhagen, Denmark, October 13-16, 1996, 201-218.
Schamber, L. Relevance and Information Behavior. In Annual Review of Information Science and Technology (ARIST), 29, edited by Martha E. Williams, 3-48. New York: Interscience Publishers, 1994.
Shaw, William M., Judith B. Wood, Robert E. Wood, and Helen R. Tibbo. The Cystic Fibrosis Database: Content and Research Opportunities. Library and Information Science Research 13 (1991): 347-366.
Shaw, William M., Robert Burgin, and Patrick Howell. Performance Standards and Evaluations in IR Test Collections: Vector-Space and Other Retrieval Models. Information Processing and Management 33, no. 1 (1997): 15-36. <http://ruby.ils.unc.edu/~howep/perform/hypergeom.html> (Dec. 3, 1998)
Smeaton, Alan F. and Donna K. Harman. The TREC Experiments and Their Impact on Europe. Journal of Information Science 23, no. 2 (1997): 169-174.
Sparck Jones, Karen and C.J. van Rijsbergen. Information Retrieval Test Collections. Journal of Documentation 32 (1976): 59-75.
Sparck Jones, Karen. Reflections on TREC. Information Processing and Management 31, no. 3 (1995): 291-314.
Sparck Jones, Karen. Summary Performance Comparisons TREC-2, TREC-3, TREC-4, TREC-5, TREC-6. In The Sixth Text REtrieval Conference (TREC-6), Gaithersburg, Maryland, November 19-21, 1997, edited by Ellen M. Voorhees and Donna K. Harman. <http://trec.nist.gov/pubs/trec6/papers/sparck.ps> (Aug. 26, 1998)
Spink, Amanda and Howard Greisdorf. Partial Relevance Judgements During Interactive Information Retrieval: An Exploratory Study. In Proceedings of the 59th American Society for Information Science Annual Meeting, 33, 1997, edited by Candy Schwartz and Mark Rorvig, 111-122.
Swanson, Don R. Historical Note: Information Retrieval and the Future of an Illusion. Journal of the American Society for Information Science 39, no. 2 (1988): 92-98.
Tague-Sutcliffe, Jean M. The Pragmatics of Information Retrieval Experimentation, Revisited. Information Processing and Management 28, no. 4 (1992): 467-490.
Tague-Sutcliffe, Jean M. Some Perspectives on the Evaluation of Information Retrieval Systems. Journal of the American Society for Information Science 47, no. 1(1996): 1-3.
Vickery, B. C. Subject Analysis for Information Retrieval. In Proceedings of the International Conference on Scientific Information, 2, February 1958, 855-865.
Voorhees, Ellen M. and Donna K. Harman. Overview of the Seventh Text REtrieval Conference (TREC-7). In The Seventh Text REtrieval Conference (TREC-7), Gaithersburg, Maryland, November 9-11, 1998, edited by Ellen M. Voorhees and Donna K. Harman. <http://trec.nist.gov/pubs/trec7/papers/overview_7.ps> (June 6, 1999)
Voorhees, Ellen M. and Donna K. Harman. Overview of the Sixth Text REtrieval Conference (TREC-6). In The Sixth Text REtrieval Conference (TREC-6), Gaithersburg, Maryland, November 19-21, 1997, edited by Ellen M. Voorhees and Donna K. Harman. <http://trec.nist.gov/pubs/trec6/papers/overview.ps> (Aug. 5, 1998)
Voorhees, Ellen M. and Donna K. Harman. Overview of the Fifth Text REtrieval Conference (TREC-5). In The Fifth Text REtrieval Conference (TREC-5), Gaithersburg, Maryland, November 20-22, 1996, edited by Ellen M. Voorhees and Donna K. Harman. <http://trec.nist.gov/pubs/trec5/papers/overview.ps> (Aug. 5, 1998)
Voorhees, Ellen M. Variations in Relevance Judgements and the Measurement of Retrieval Effectiveness. In Proceedings of the 21st ACM-SIGIR International Conference on Research and Development in Information Retrieval, Melbourne, Australia, August 24-28, 1998, 315-323.
Wallis, Peter and James A. Thom. Relevance Judgements for Assessing Recall. Information Processing and Management 32, no. 3 (1996): 273-286.
Zobel, Justin. How Reliable are the Results of Large-Scale Information Retrieval Experiments? In Proceedings of the 21st Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, Melbourne, Australia, August 24-28, 1998, 307-314.
芥子育雄ほか。「情報検索システム評価用ベンチマーク Ver. 1.0 (BMIR-J1)について」。情処研報 DBS-106-19(1996):139-145。
木本晴夫ほか。「情報検索システム評価用データベースの構築の提案」。情処研報 FI-32-1(1993):1-8。
木本晴夫ほか。「情報検索システム評価用データベースの構築の提案」。1998年情報学シンポジウム講演論文集(1998):103-119。
小川泰嗣ほか。「日本語情報検索システム評価用テストコレクションの構築のためのベンチマーク」。情処研報 DBS-100-16(1996):145-152。
(三)網路資源
AMARYLLIS Homepage. <http://www.inist.fr/accueil/profran.htm> (Oct. 29, 1998)
IREX (Information Retrieval and Extraction Exercise) Homepage. <http://cs.nyu.edu/cs/projects/proteus/irex/index-e.html> (Oct. 31, 1998)
MIRA (Evaluation Frameworks for Interactive Multimedia Information Retrieval Application) Homepage. <http://www.dcs.gla.ac.uk/mira> (Nov. 5, 1998)
NTCIR Project (NACSIS Test Collection for IR Systems) Homepage. <http://www.rd.nacsis.ac.jp/~ntcadm/index-en.html> (Oct. 31, 1998)
Test Collections. <http://www.dcs.gla.ac.uk/idom/ir_resources/test_collections/> (Dec. 5, 1998)
Test Collections. <ftp://ftp.cs.cornell.edu/pub/smart/> (Dec. 5, 1998)
Text REtrieval Conference (TREC) Homepage. <http://trec.nist.gov/> (Dec. 7, 1998)
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/75023
dc.description.abstract | 摘要
在國內資訊檢索研究已日趨受到重視,合適的測試評估機制卻十分缺乏的背景下,本研究實際進行測試集的規劃與建置工作。首先透過相關文獻之閱讀與分析,觀察測試集的組成要素,並參考國外各測試集的結構、特性與建構經驗,訂定發展中文資訊檢索測試集的方法與程序。測試集建構工作主要包括蒐集整理文件、建立查詢主題、以及進行相關判斷三個部分。
本研究所建立的文件集來源為新聞網站中的五種電子報,共有132,207篇文件。查詢主題是透過網路問卷實際徵集查詢需求,並經過三次篩選之後,修正建構而成,共完成50個查詢主題。相關判斷的部分則是先對每個查詢主題建立一相關文件候選集,再針對候選集中的每篇文件以人工進行相關判斷,每一查詢主題由三位次判斷者同時進行,最後,則依據判斷結果計算並定義文件的相關程度。
經由研究結果的分析顯示,就統計抽樣的觀點,本研究建構的文件集已具備一定的效度;查詢主題呈現詳盡且多樣化的查詢需求,能反映一些真實的檢索情況;三位判斷者的相關判斷結果則有顯著的一致性,推斷它們是具有可信度的。本測試集雖然尚處於初始階段,但已有完整的架構及一定的規模,未來的研究應可以此為基礎,作進一步的擴展與改進。
zh_TW
dc.description.abstract | ABSTRACT
The research and development of information retrieval (IR) has made much progress recently. However, there is no applicable mechanism for system evaluation in the Chinese research community. This thesis aims at the design and implementation of a Chinese information retrieval benchmark. First, we observe the framework and contents of existing foreign benchmarks, develop a realistic methodology, and set up a procedure for establishing a Chinese IR benchmark. Generally speaking, a benchmark consists of a set of documents, a set of topics, and a set of relevance judgments between documents and topics. Accordingly, our task is also separated into three parts.
The document set is downloaded from various electronic news sites; 132,207 documents are collected in total. To build the topics, we investigate real user information needs with a questionnaire, then modify them into formal topics. As to relevance judgment, we first set up a pool of candidate documents for each topic, then invite three persons to judge the relevance. Finally, we combine the judgments and offer a relevance measure for each document in the pool.
The results of our research show that the quantity of the document set is valid from the viewpoint of sampling statistics. The topics reveal various information needs from the users' viewpoint and may reflect real situations in which users perform their searches. Besides, the judgments of the three persons exhibit significant agreement, so we can say that the relevance judgment in our benchmark is reliable. Although the benchmark is in its first edition, it possesses a complete structure and a medium scale, and we may further expand and improve it based on the existing framework in the future.
en
dc.description.provenance | Made available in DSpace on 2021-07-01T08:11:26Z (GMT). No. of bitstreams: 0
Previous issue date: 1999
en
dc.description.tableofcontents | 目次
中文摘要………………………………………………………………………………………………….?
英文摘要………………………………………………………………………………………………….?
目次……………………………………………………………………………………………………….?
表目次…………………………………………………………………………………………………….?
圖目次…………………………………………………………………………………………………….?
第一章 緒論………………………………………………………………………………………………1
第一節 問題陳述……………………………………………………………………………………1
第二節 研究目的……………………………………………………………………………………3
第三節 研究假定……………………………………………………………………………………4
第四節 研究範圍與限制……………………………………………………………………………5
第五節 研究方法與程序……………………………………………………………………………7
第六節 名詞釋義……………………………………………………………………………………10
註釋……………………………………………………………………………………………………13
第二章 文獻分析………………………………………………………………………………………….15
第一節 資訊檢索系統測試與測試集………………………………………………………………..15
第二節 重要測試集實證研究………………………………………………………………………..19
第三節 資訊檢索測試集之相關討論與評價…………………………………………………………35
註釋……………………………………………………………………………………………………40
第三章 測試集之建構………………………………………………………………………………………49
第一節 文件集…………………………………………………………………………………………50
第二節 查詢主題…………………………………………………………………………………….57
第三節 相關判斷…………………………………………………………………………………….85
註釋…………………………………………………………………………………………………….96
第四章 研究結果分析與討論…………………………………………………………………………….97
第一節 文件集特性分析…………………………………………………………………………….97
第二節 查詢主題特性分析…………………………………………………………………………..104
第三節 相關判斷一致性分析………………………………………………………………………..111
註釋…………………………………………………………………………………………………...124
第五章 結論與建議………………………………………………………………………………………..125
第一節 結論…………………………………………………………………………………………..125
第二節 建議………………………………………………………………………………………...129
註釋…………………………………………………………………………………………………..135
參考文獻…………………………………………………………………………………………………..137
附錄………………………………………………………………………………………………………..149
附錄A 查詢需求問卷………………………………………………………………………………...149
附錄B 查詢主題……………………………………………………………………………………151
附錄C 查詢主題特性總表…………………………………………………………………………177
附錄D 相關度統計總表……………………………………………………………………………179
附錄E 相關判斷結果分佈總表……………………………………………………………………181
附錄F 相關判斷一致性統計總表……………………………………………………………………183
dc.language.iso | zh-TW
dc.title | 中文資訊檢索測試集設計與製作之研究 | zh_TW
dc.title | A Study on Design and Implementation for Chinese Information Retrieval Benchmark | en
dc.date.schoolyear | 87-2
dc.description.degree | 碩士
dc.relation.page | 198
dc.rights.note | 未授權
dc.contributor.author-dept | 文學院 | zh_TW
dc.contributor.author-dept | 圖書資訊學研究所 | zh_TW
顯示於系所單位:圖書資訊學系

文件中的檔案:
沒有與此文件相關的檔案。


系統中的文件,除了特別指名其著作權條款之外,均受到著作權保護,並且保留所有的權利。
