TREC

Text Retrieval Conference
Categories: Data mining
DblpSeries: trec
Bibliography: dblp.uni-trier.de/db/conf/trec/

Events

There are 26 events in the TREC series known to this wiki: TREC 1992, TREC 1993, TREC 1994, TREC 1995, TREC 1996, TREC 1997, TREC 1998, TREC 1999, TREC 2000, TREC 2001, TREC 2002, TREC 2003, TREC 2004, TREC 2005, TREC 2006, TREC 2007, TREC 2008, TREC 2009, TREC 2010, TREC 2011, TREC 2012, TREC 2013, TREC 2014, TREC 2015, TREC 2016, TREC 2020

Event      Year  From    To      City          Country  Homepage
TREC 2020  2020  Nov 18  Nov 20  Gaithersburg  USA      https://trec.nist.gov/pubs/call2020.html
TREC 2016  2016  Nov 15  Nov 18  Gaithersburg  USA
TREC 2015  2015  Nov 17  Nov 20  Gaithersburg  USA
TREC 2014  2014  Nov 19  Nov 21  Gaithersburg  USA
TREC 2013  2013  Nov 19  Nov 22  Gaithersburg  USA
TREC 2012  2012  Nov 6   Nov 9   Gaithersburg  USA
TREC 2011  2011  Nov 15  Nov 18  Gaithersburg  USA
TREC 2010  2010  Nov 16  Nov 19  Gaithersburg  USA
TREC 2009  2009  Nov 17  Nov 20  Gaithersburg  USA
TREC 2008  2008  Nov 18  Nov 21  Gaithersburg  USA
TREC 2007  2007  Nov 5   Nov 9   Gaithersburg  USA
TREC 2006  2006  Nov 14  Nov 17  Gaithersburg  USA
TREC 2005  2005  Nov 15  Nov 18  Gaithersburg  USA
TREC 2004  2004  Nov 16  Nov 19  Gaithersburg  USA
TREC 2003  2003  Nov 18  Nov 21  Gaithersburg  USA
TREC 2002  2002  Nov 19  Nov 22  Gaithersburg  USA
TREC 2001  2001  Nov 13  Nov 16  Gaithersburg  USA
TREC 2000  2000  Nov 13  Nov 16  Gaithersburg  USA
TREC 1999  1999  Nov 17  Nov 19  Gaithersburg  USA
TREC 1998  1998  Nov 9   Nov 11  Gaithersburg  USA
TREC 1997  1997  Nov 19  Nov 21  Gaithersburg  USA
TREC 1996  1996  Nov 20  Nov 22  Gaithersburg  USA
TREC 1995  1995  Nov 1   Nov 3   Gaithersburg  USA
TREC 1994  1994  Nov 2   Nov 4   Gaithersburg  USA
TREC 1993  1993  Aug 31  Sep 2   Gaithersburg  USA
TREC 1992  1992  Nov 4   Nov 6   Gaithersburg  USA


The Text REtrieval Conference (TREC), co-sponsored by the National Institute of Standards and Technology (NIST) and the U.S. Department of Defense, was started in 1992 as part of the TIPSTER Text program. Its purpose was to support research within the information retrieval community by providing the infrastructure necessary for large-scale evaluation of text retrieval methodologies. In particular, the TREC workshop series has the following goals:

  • to encourage research in information retrieval based on large test collections;
  • to increase communication among industry, academia, and government by creating an open forum for the exchange of research ideas;
  • to speed the transfer of technology from research labs into commercial products by demonstrating substantial improvements in retrieval methodologies on real-world problems; and
  • to increase the availability of appropriate evaluation techniques for use by industry and academia, including development of new evaluation techniques more applicable to current systems.

TREC is overseen by a program committee consisting of representatives from government, industry, and academia. For each TREC, NIST provides a test set of documents and questions. Participants run their own retrieval systems on the data, and return to NIST a list of the retrieved top-ranked documents. NIST pools the individual results, judges the retrieved documents for correctness, and evaluates the results. The TREC cycle ends with a workshop that is a forum for participants to share their experiences.
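
To make the pooling step concrete, here is a minimal sketch in Python. Everything in it is illustrative: the "runs" structure, the topic and document identifiers, and the pool depth are assumptions for the example, not NIST's actual tooling (TREC has historically pooled to a depth of around 100 documents per run).

    from collections import defaultdict

    def build_pools(runs, depth=100):
        """Form judgment pools: for each topic, take the union of the
        top-`depth` documents contributed by every submitted run.

        `runs` maps a run tag to {topic_id: [doc_ids, ranked best-first]}.
        Returns {topic_id: set of doc_ids to send to the assessors}.
        """
        pools = defaultdict(set)
        for ranked_by_topic in runs.values():
            for topic, ranked_docs in ranked_by_topic.items():
                pools[topic].update(ranked_docs[:depth])
        return dict(pools)

    # Two toy runs for a single topic (identifiers are made up).
    runs = {
        "runA": {"301": ["doc2", "doc7", "doc1"]},
        "runB": {"301": ["doc7", "doc9", "doc4"]},
    }
    print(build_pools(runs, depth=2))
    # {'301': {'doc2', 'doc7', 'doc9'}}  (set ordering may differ)

Documents that never enter any pool are treated as not relevant during evaluation, which is what keeps the judging effort feasible at TREC scale.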

This evaluation effort has grown in both the number of participating systems and the number of tasks each year. Ninety-three groups representing 22 countries participated in TREC 2003. The TREC test collections and evaluation software are available to the retrieval research community at large, so organizations can evaluate their own retrieval systems at any time. TREC has successfully met its dual goals of improving the state of the art in information retrieval and facilitating technology transfer: retrieval system effectiveness approximately doubled in the first six years of TREC.
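
As an illustration of what the evaluation software computes, the sketch below derives uninterpolated Average Precision for a single topic from data in the standard TREC exchange formats (qrels lines are "topic iteration docno relevance"; run lines are "topic Q0 docno rank score tag"). It is a simplified stand-in for a tool like trec_eval, not the official implementation; the topic, documents, and scores are invented.

    def average_precision(ranked_docs, relevant):
        """Uninterpolated AP: average the precision at each rank where a
        relevant document appears, over the topic's total relevant count."""
        if not relevant:
            return 0.0
        hits, precision_sum = 0, 0.0
        for rank, doc in enumerate(ranked_docs, start=1):
            if doc in relevant:
                hits += 1
                precision_sum += hits / rank
        return precision_sum / len(relevant)

    # Toy data in the TREC qrels and run line formats.
    qrels_lines = ["301 0 doc2 1", "301 0 doc7 1", "301 0 doc9 0"]
    run_lines = ["301 Q0 doc7 1 12.3 runA",
                 "301 Q0 doc9 2 10.1 runA",
                 "301 Q0 doc2 3 9.8 runA"]

    relevant = {l.split()[2] for l in qrels_lines if int(l.split()[3]) > 0}
    ranking = [l.split()[2]
               for l in sorted(run_lines, key=lambda l: int(l.split()[3]))]
    print(round(average_precision(ranking, relevant), 3))  # 0.833

Averaging this value over all topics of a test collection gives Mean Average Precision (MAP), a headline metric in many TREC tracks.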

TREC has also sponsored the first large-scale evaluations of the retrieval of non-English (Spanish and Chinese) documents, retrieval of recordings of speech, and retrieval across multiple languages, and it has introduced evaluations for open-domain question answering and content-based retrieval of digital video. The TREC test collections are large enough to realistically model operational settings. Most of today's commercial search engines include technology first developed in TREC.

A TREC workshop consists of a set of tracks, areas of focus in which particular retrieval tasks are defined. The tracks serve several purposes. First, tracks act as incubators for new research areas: the first running of a track often defines what the problem really is, and a track creates the necessary infrastructure (test collections, evaluation methodology, etc.) to support research on its task. Second, the tracks demonstrate the robustness of core retrieval technology, in that the same techniques are frequently appropriate for a variety of tasks. Finally, the tracks make TREC attractive to a broader community by providing tasks that match the research interests of more groups.

Each track has a mailing list. The primary purpose of the mailing list is to discuss the details of the track's tasks in the current TREC. However, a track mailing list also serves as a place to discuss general methodological issues related to the track's retrieval tasks. TREC track mailing lists are open to all; you need not participate in TREC to join a list. Most lists do require that you become a member of the list before you can send a message to it.

The set of tracks that will be run in a given year of TREC is determined by the TREC program committee. The committee has established a procedure for proposing new tracks.