Abstract
This paper describes a methodology for testing and evaluating the performance of machine reading systems through question answering and reading comprehension tests. The methodology is used in QA4MRE (Question Answering for Machine Reading Evaluation), one of the labs of CLEF. We report the conclusions and lessons learned after the first campaign in 2011.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 8th International Conference on Language Resources and Evaluation, LREC 2012 |
| Editors | Mehmet Ugur Dogan, Joseph Mariani, Asuncion Moreno, Sara Goggi, Khalid Choukri, Nicoletta Calzolari, Jan Odijk, Thierry Declerck, Bente Maegaard, Stelios Piperidis, Helene Mazo, Olivier Hamon |
| Publisher | European Language Resources Association (ELRA) |
| Pages | 1143-1147 |
| Number of pages | 5 |
| ISBN (Electronic) | 9782951740877 |
| Publication status | Published - 2012 |
| Event | 8th International Conference on Language Resources and Evaluation, LREC 2012 - Istanbul, Turkey |
| Duration | 21 May 2012 → 27 May 2012 |
Publication series
| Name | Proceedings of the 8th International Conference on Language Resources and Evaluation, LREC 2012 |
|---|
Conference
| Conference | 8th International Conference on Language Resources and Evaluation, LREC 2012 |
|---|---|
| Country/Territory | Turkey |
| City | Istanbul |
| Period | 21/05/12 → 27/05/12 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 4: Quality Education
Keywords
- Evaluation
- Machine Reading
- Question Answering
Fingerprint
Dive into the research topics of 'Evaluating machine reading systems through comprehension tests'.