Overview of QA4MRE main task at CLEF 2013

Richard Sutcliffe, Anselmo Peñas, Eduard Hovy, Pamela Forner, Álvaro Rodrigo, Corina Forascu, Yassine Benajiba, Petya Osenova

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper describes the Question Answering for Machine Reading Evaluation (QA4MRE) Main Task at the 2013 Cross Language Evaluation Forum. In the main task, systems answered multiple-choice questions on documents concerned with four different topics. There were also two pilot tasks: Machine Reading on Biomedical Texts about Alzheimer's disease, and Japanese Entrance Exams. This paper describes the preparation of the data sets, the definition of the background collections, the metric used for the evaluation of the systems' submissions, and the results. We introduced two novelties this year: auxiliary questions to evaluate systems' level of inference, and a portion of questions for which none of the options was correct. Nineteen groups participated in the task, submitting a total of 77 runs in five languages.
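The abstract refers to the metric used to score submissions without naming it here; QA4MRE editions used the c@1 measure (Peñas and Rodrigo, 2011), which credits unanswered questions in proportion to a system's accuracy on the answered ones. The following is a minimal sketch assuming c@1 is the measure meant; the function name and the example counts are illustrative, not taken from the paper.

def c_at_1(n_correct: int, n_unanswered: int, n_total: int) -> float:
    # Illustrative c@1: unanswered questions are credited at the system's
    # overall accuracy (n_correct / n_total) rather than scored as wrong.
    if n_total == 0:
        return 0.0
    accuracy = n_correct / n_total
    return (n_correct + n_unanswered * accuracy) / n_total

# Example: 10 correct, 5 left unanswered, 20 questions in total
# -> (10 + 5 * 0.5) / 20 = 0.625
print(c_at_1(10, 5, 20))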

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 1179
Publication status: Published - 2013
Externally published: Yes
Event: 2013 Cross Language Evaluation Forum Conference, CLEF 2013 - Valencia, Spain
Duration: 23 Sep 2013 – 26 Sep 2013
