Evaluating the Quality of Higher Education Instructor-constructed Multiple-Choice Tests: Impact on Student Grades

dc.contributor.author Brown, Gavin en
dc.contributor.author Abdulnabi, H en
dc.date.accessioned 2017-07-24T03:32:38Z en
dc.date.issued 2017-06 en
dc.identifier.citation Frontiers in Education 2:24 Jun 2017 en
dc.identifier.issn 2504-284X en
dc.identifier.uri http://hdl.handle.net/2292/34456 en
dc.description.abstract Multiple-choice questions (MCQs) are commonly used in higher education assessment tasks because they can be easily and accurately scored, while giving good coverage of instructional content in a short time. However, studies that have evaluated the quality of MCQs used in higher education assessments have found many flawed items, resulting in misleading insights about student performance and contaminating important decisions. Thus, MCQs need to be evaluated statistically to ensure high-quality items are used as the basis of inference. This study evaluated the quality of 100 instructor-written MCQs used in an undergraduate midterm test (50 items) and final exam (50 items), making up 50% of the course grade, using the responses of 372 students enrolled in one first-year undergraduate general education course. Item difficulty, discrimination, and chance properties were determined using classical test theory and item response theory (IRT) statistical item analysis models. The two-parameter logistic (2PL) model consistently had the best fit to the data. Comparing overall course grades under the original raw-score model with grades under the IRT 2PL model showed that 70% of students would receive the same grade on the simple scale (i.e., D to A), but only one-third would receive the same grade on the standard augmented scale (i.e., A+ to D−). The analyses show that higher education institutions need to ensure MCQs are evaluated before student grading decisions are made. en
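The item statistics named in the abstract can be illustrated concretely. The following is a minimal sketch (not the authors' actual analysis code) of the two approaches the study compares: classical test theory difficulty (proportion correct) and discrimination (point-biserial correlation with total score), and the 2PL IRT probability of a correct response; the function names and example values are illustrative assumptions.

```python
import math

def item_difficulty(responses):
    """CTT difficulty: proportion of examinees answering the item correctly (0/1 scores)."""
    return sum(responses) / len(responses)

def point_biserial(responses, totals):
    """CTT discrimination: point-biserial correlation between a dichotomous
    item score and the total test score."""
    n = len(responses)
    mean_t = sum(totals) / n
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in totals) / n)
    # Mean total score of examinees who answered the item correctly
    mean_correct = sum(t for r, t in zip(responses, totals) if r == 1) / sum(responses)
    p = sum(responses) / n
    return (mean_correct - mean_t) / sd_t * math.sqrt(p / (1 - p))

def irt_2pl(theta, a, b):
    """2PL IRT model: probability of a correct response for ability theta,
    given item discrimination a and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

For example, under the 2PL model a student whose ability equals an item's difficulty (theta == b) has a 0.5 probability of answering correctly, regardless of the item's discrimination; unlike the raw-score model, each item's contribution to the ability estimate is weighted by its discrimination, which is why grades under the two models can diverge.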
dc.publisher Frontiers Media en
dc.relation.ispartofseries Frontiers in Education en
dc.rights Items in ResearchSpace are protected by copyright, with all rights reserved, unless otherwise indicated. Previously published items are made available in accordance with the copyright policy of the publisher. en
dc.rights.uri https://researchspace.auckland.ac.nz/docs/uoa-docs/rights.htm en
dc.rights.uri http://creativecommons.org/licenses/by/4.0/ en
dc.title Evaluating the Quality of Higher Education Instructor-constructed Multiple-Choice Tests: Impact on Student Grades en
dc.type Journal Article en
dc.identifier.doi 10.3389/feduc.2017.00024 en
pubs.volume 2 en
dc.description.version AM - Accepted Manuscript en
dc.rights.holder Copyright: The Authors en
pubs.publication-status Published en
dc.rights.accessrights http://purl.org/eprint/accessRights/OpenAccess en
pubs.subtype Article en
pubs.elements-id 618592 en
pubs.org-id Education and Social Work en
pubs.org-id Learning Development and Professional Practice en
pubs.number 24 en
pubs.record-created-at-source-date 2017-03-25 en

