Evaluating machine reading systems through comprehension tests

Anselmo Peñas, Eduard Hovy, Pamela Forner, Alvaro Rodrigo, Richard Sutcliffe, Corina Forascu, Caroline Sporleder

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper describes a methodology for testing and evaluating the performance of Machine Reading systems through Question Answering and Reading Comprehension Tests. The methodology is being used in QA4MRE (QA for Machine Reading Evaluation), one of the labs of CLEF. We report here the conclusions and lessons learned after the first campaign in 2011.

Original language: English
Title of host publication: Proceedings of the 8th International Conference on Language Resources and Evaluation, LREC 2012
Editors: Mehmet Ugur Dogan, Joseph Mariani, Asuncion Moreno, Sara Goggi, Khalid Choukri, Nicoletta Calzolari, Jan Odijk, Thierry Declerck, Bente Maegaard, Stelios Piperidis, Helene Mazo, Olivier Hamon
Publisher: European Language Resources Association (ELRA)
Pages: 1143-1147
Number of pages: 5
ISBN (Electronic): 9782951740877
Publication status: Published - 2012
Event: 8th International Conference on Language Resources and Evaluation, LREC 2012 - Istanbul, Turkey
Duration: 21 May 2012 – 27 May 2012

Publication series

Name: Proceedings of the 8th International Conference on Language Resources and Evaluation, LREC 2012

Conference

Conference: 8th International Conference on Language Resources and Evaluation, LREC 2012
Country/Territory: Turkey
City: Istanbul
Period: 21/05/12 – 27/05/12

Keywords

  • Evaluation
  • Machine Reading
  • Question Answering
