TY - JOUR
T1 - The pattern of reporting and presenting validity evidence of extended matching questions (EMQs) in health professions education
T2 - a systematic review
AU - Taha, Mohamed H.
AU - Mohammed, Hosam Eldeen Elsadig Gasmalla
AU - Abdalla, Mohamed Elhassan
AU - Yusoff, Muhamad Saiful Bahri
AU - Mohd Napiah, Mohd Kamal
AU - Wadi, Majed M.
N1 - Publisher Copyright:
© 2024 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
PY - 2024
Y1 - 2024
N2 - Extended matching questions (EMQs), or R-type questions, are a selected-response format. Validity evidence for this format is crucial, but misunderstandings about validity have been reported. It is unclear what kinds of evidence should be presented, and how, to support their educational impact. This review explores the pattern and quality of reporting the sources of validity evidence of EMQs in health professions education, encompassing content, response process, internal structure, relationship to other variables, and consequences. A systematic search of the electronic databases MEDLINE via PubMed, Scopus, Web of Science, CINAHL, and ERIC was conducted to retrieve studies that utilized EMQs. The framework for a unitary concept of validity was applied to extract data. A total of 218 titles were initially identified; the final number included was 19. The most frequently reported piece of evidence was the reliability coefficient, followed by the relationship to other variables. Additionally, the definition of validity adopted in most studies was the old tripartite concept. This study found that the reporting and presentation of validity evidence appeared to be deficient. The available evidence can hardly provide a strong validity argument that supports the educational impact of EMQs. This review calls for further work on developing a tool to measure the reporting and presentation of validity evidence.
AB - Extended matching questions (EMQs), or R-type questions, are a selected-response format. Validity evidence for this format is crucial, but misunderstandings about validity have been reported. It is unclear what kinds of evidence should be presented, and how, to support their educational impact. This review explores the pattern and quality of reporting the sources of validity evidence of EMQs in health professions education, encompassing content, response process, internal structure, relationship to other variables, and consequences. A systematic search of the electronic databases MEDLINE via PubMed, Scopus, Web of Science, CINAHL, and ERIC was conducted to retrieve studies that utilized EMQs. The framework for a unitary concept of validity was applied to extract data. A total of 218 titles were initially identified; the final number included was 19. The most frequently reported piece of evidence was the reliability coefficient, followed by the relationship to other variables. Additionally, the definition of validity adopted in most studies was the old tripartite concept. This study found that the reporting and presentation of validity evidence appeared to be deficient. The available evidence can hardly provide a strong validity argument that supports the educational impact of EMQs. This review calls for further work on developing a tool to measure the reporting and presentation of validity evidence.
KW - EMIs
KW - EMQs
KW - extended matching items
KW - Extended matching questions
KW - health professions education
KW - R-type MCQ
KW - reliability
KW - validity
UR - http://www.scopus.com/inward/record.url?scp=85207728869&partnerID=8YFLogxK
U2 - 10.1080/10872981.2024.2412392
DO - 10.1080/10872981.2024.2412392
M3 - Review article
C2 - 39445670
SN - 1087-2981
VL - 29
JO - Medical Education Online
JF - Medical Education Online
IS - 1
M1 - 2412392
ER -