Overview of the CLEF 2007 multilingual question answering track

Danilo Giampiccolo, Anselmo Peñas, Christelle Ayache, Dan Cristea, Pamela Forner, Valentin Jijkoun, Petya Osenova, Paulo Rocha, Bogdan Sacaleanu, Richard Sutcliffe

Research output: Contribution to journal › Conference article › peer-review

Abstract

The fifth QA campaign at CLEF, the first having been held in 2003, was characterized by continuity with the past and, at the same time, by innovation. In fact, topics were introduced, under which a number of related Question-Answer pairs, also containing co-references between them, could be grouped in clusters. Moreover, systems were given the possibility to search for answers in Wikipedia. In addition to the main task, two other tasks were offered, namely the Answer Validation Exercise (AVE), which continued last year's successful pilot, and QAST, aimed at evaluating Question Answering on Speech Transcripts. As a general remark, the task proved to be more difficult than expected: compared with last year's results, the Best Overall Accuracy dropped from 49.47% to 41.75% in the multilingual subtasks and, more significantly, from 68.95% to 54% in the monolingual subtasks.

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 1173
Publication status: Published - 2007
Event: 2007 Cross Language Evaluation Forum Workshop, CLEF 2007, co-located with the 11th European Conference on Digital Libraries, ECDL 2007 - Budapest, Hungary
Duration: 19 Sep 2007 - 21 Sep 2007
