Overview of the CLEF 2008 multilingual Question Answering track

Pamela Forner, Anselmo Peñas, Eneko Agirre, Iñaki Alegria, Corina Forăscu, Nicolas Moreau, Petya Osenova, Prokopis Prokopidis, Paulo Rocha, Bogdan Sacaleanu, Richard Sutcliffe, Erik Tjong Kim Sang

Research output: Contribution to journal › Conference article › peer-review

Abstract

The QA campaign at CLEF [1] was mainly the same as the one proposed last year. The results and analyses reported by last year's participants suggested that the changes introduced in the previous campaign had led to a drop in systems' performance, so for this year's competition it was decided to practically replicate last year's exercise. Following last year's experience, some QA pairs were grouped in clusters. Every cluster was characterized by a topic (not given to participants), and the questions in a cluster contained co-references between one of them and the others. Moreover, as last year, systems were given the possibility of searching for answers in Wikipedia as a document corpus besides the usual newswire collection. In addition to the main task, three additional exercises were offered: the Answer Validation Exercise (AVE), Question Answering on Speech Transcriptions (QAST), which continued last year's successful pilot, and Word Sense Disambiguation for Question Answering (QA-WSD). As a general remark, it must be said that the task still proved very challenging for participating systems. In comparison with last year's results, the Best Overall Accuracy dropped significantly from 41.75% to 19% in the multilingual subtasks, while it increased slightly in the monolingual subtasks, from 54% to 63.5%.

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 1174
Publication status: Published - 2008
Event: 2008 Working Notes for CLEF Workshop, CLEF 2008 - Co-located with the 12th European Conference on Digital Libraries, ECDL 2008 - Aarhus, Denmark
Duration: 17 Sep 2008 – 19 Sep 2008

