On the Application of Sentence Transformers to Automatic Short Answer Grading in Blended Assessment

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In Natural Language Processing, automatic short answer grading remains a necessary launch-pad for the analysis of human responses in a blended learning setting. This study presents pre-trained neural language models that use context-dependent Sentence-Transformers to automatically grade student responses under two different input settings. The use of these models achieves promising results compared to conventional Bidirectional Encoder Representations from Transformers (BERT) approaches across various text similarity-based tasks. This work presents experiments using the benchmark Mohler dataset to test these models. In summary, an excellent Pearson correlation of 0.82 and a Root Mean Square Error of 0.69 are exhibited across a representative experiment sample size.
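
Although the paper itself does not include code, the similarity-based grading idea described in the abstract can be illustrated with a minimal sketch using the open-source sentence-transformers library. The model name, the example answers, and the linear mapping from cosine similarity onto the Mohler 0-5 grade scale are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): grading a student answer by
# its embedding similarity to a reference answer using sentence-transformers.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

reference_answer = "A variable is a named location in memory that stores a value."
student_answer = "It is a name given to a memory location holding data."

# Encode both answers into fixed-size sentence embeddings.
ref_emb, stu_emb = model.encode([reference_answer, student_answer], convert_to_tensor=True)

# Cosine similarity lies in [-1, 1]; clamp to [0, 1] and rescale to a 0-5 grade
# (an assumed mapping chosen to match the Mohler dataset's grade range).
similarity = util.cos_sim(ref_emb, stu_emb).item()
predicted_grade = 5.0 * max(similarity, 0.0)

print(f"cosine similarity = {similarity:.3f}, predicted grade = {predicted_grade:.2f}")
```

Predicted grades produced this way can then be compared against human grades with Pearson correlation and Root Mean Square Error, the two evaluation metrics reported in the abstract.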

Original language: English
Title of host publication: 2022 33rd Irish Signals and Systems Conference, ISSC 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665452274
DOIs
Publication status: Published - 2022
Event: 33rd Irish Signals and Systems Conference, ISSC 2022 - Cork, Ireland
Duration: 9 Jun 2022 - 10 Jun 2022

Publication series

Name: 2022 33rd Irish Signals and Systems Conference, ISSC 2022

Conference

Conference: 33rd Irish Signals and Systems Conference, ISSC 2022
Country/Territory: Ireland
City: Cork
Period: 9/06/22 - 10/06/22

Keywords

  • Automatic Short Answer Grading
  • BERT
  • BiLSTM
  • Natural Language Processing
