TY - JOUR
T1 - Overview of the CLEF 2007 multilingual question answering track
AU - Giampiccolo, Danilo
AU - Peñas, Anselmo
AU - Ayache, Christelle
AU - Cristea, Dan
AU - Forner, Pamela
AU - Jijkoun, Valentin
AU - Osenova, Petya
AU - Rocha, Paulo
AU - Sacaleanu, Bogdan
AU - Sutcliffe, Richard
PY - 2007
Y1 - 2007
N2 - The fifth QA campaign at CLEF, the first having been held in 2003, was characterized by continuity with the past and, at the same time, by innovation. In fact, topics were introduced, under which a number of Question-Answer pairs could be grouped in clusters containing co-references between them. Moreover, systems were given the possibility to search for answers in Wikipedia. In addition to the main task, two other tasks were offered, namely the Answer Validation Exercise (AVE), which continued last year's successful pilot, and QAST, aimed at evaluating the task of Question Answering in Speech Transcription. As a general remark, the task proved to be more difficult than expected: compared with last year's results, the Best Overall Accuracy dropped from 49.47% to 41.75% in the multilingual subtasks and, more significantly, from 68.95% to 54% in the monolingual subtasks.
UR - http://www.scopus.com/inward/record.url?scp=84921983152&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:84921983152
SN - 1613-0073
VL - 1173
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 2007 Cross Language Evaluation Forum Workshop, CLEF 2007, co-located with the 11th European Conference on Digital Libraries, ECDL 2007
Y2 - 19 September 2007 through 21 September 2007
ER -