Abstract
The fifth QA campaign at CLEF, the first having been held in 2003, was characterized by continuity with the past and, at the same time, by innovation. Topics were introduced, under which a number of question-answer pairs could be grouped into clusters, which could also contain co-references between them. Moreover, systems were given the possibility to search for answers in Wikipedia. In addition to the main task, two other tasks were offered: the Answer Validation Exercise (AVE), which continued the previous year's successful pilot, and QAST, aimed at evaluating Question Answering on Speech Transcripts. As a general remark, the task proved to be more difficult than expected: compared with the previous year's results, the best overall accuracy dropped from 49.47% to 41.75% in the multilingual subtasks and, more significantly, from 68.95% to 54% in the monolingual subtasks.
| Original language | English |
|---|---|
| Journal | CEUR Workshop Proceedings |
| Volume | 1173 |
| Publication status | Published - 2007 |
| Event | 2007 Cross Language Evaluation Forum Workshop, CLEF 2007, co-located with the 11th European Conference on Digital Libraries, ECDL 2007 - Budapest, Hungary |
| Duration | 19 Sep 2007 → 21 Sep 2007 |
| Title | Overview of the CLEF 2007 multilingual question answering track |