Test Item Analysis: Removing bad items before score estimation

Author: Brown, Gavin
Type: Presentation
Event: (2024, November 25-27). [Presentation]. Annual meeting of the New Zealand Association for Research in Education, Hamilton, NZ.
Record dates: 2024-11-04, 2024-11-20, 2024-11-26
Handle: https://hdl.handle.net/2292/70709

Abstract:
Analysis of test questions before score creation is a key psychometric process to ensure that the score best estimates a person's ability. Classical test theory (CTT) creates test scores from the block of items in a test; it typically removes items whose difficulty is too high or too low, and items with zero or negative discrimination, before forming total scores. Item response theory (IRT) instead estimates item characteristics on a latent ability scale, independent of which combination of items is in the test. IRT estimates both item difficulty and item discrimination, so poorly performing items can be identified and removed. There are three main IRT models, and the differences between the Rasch, 2PL, and 3PL models will be discussed. This workshop demonstrates how to obtain CTT and IRT values for any dichotomously scored test containing MCQ or short-answer questions. We will also look at using the AIC index to evaluate which model best fits the data. The workshop uses the free software R and the free RStudio interface. You will need your own device with R and RStudio installed. Please install the following packages before the workshop: psych, ltm, mirt.

Rights: Items in ResearchSpace are protected by copyright, with all rights reserved, unless otherwise indicated. Previously published items are made available in accordance with the copyright policy of the publisher. https://researchspace.auckland.ac.nz/docs/uoa-docs/rights.htm
Copyright: The authors
Access: http://purl.org/eprint/accessRights/OpenAccess
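As a hedged sketch of the kind of analysis the abstract describes (not the workshop's actual materials), the following R code computes CTT item difficulty and discrimination by hand, then fits the Rasch, 2PL, and 3PL models and compares them by AIC. It uses the `LSAT` example data shipped with the `ltm` package; the data set and function choices are illustrative assumptions, not taken from the workshop.

```r
# Illustrative sketch only: CTT item statistics and Rasch/2PL/3PL fits
# on the LSAT example data shipped with the ltm package.
# install.packages(c("psych", "ltm", "mirt"))  # run once before the workshop
library(ltm)

data(LSAT)  # 1000 persons x 5 dichotomously scored items

# CTT: item difficulty = proportion correct; discrimination = corrected
# item-total correlation (item vs. total score excluding that item)
difficulty <- colMeans(LSAT)
total <- rowSums(LSAT)
discrimination <- sapply(seq_along(LSAT),
                         function(i) cor(LSAT[[i]], total - LSAT[[i]]))
round(cbind(difficulty, discrimination), 3)

# IRT: fit the three main models and compare fit with AIC (lower is better)
fit_rasch <- rasch(LSAT)       # one parameter: difficulty
fit_2pl   <- ltm(LSAT ~ z1)    # adds item discrimination
fit_3pl   <- tpm(LSAT)         # adds a guessing parameter
AIC(fit_rasch)
AIC(fit_2pl)
AIC(fit_3pl)
```

In practice, items flagged by the CTT step (extreme difficulty, zero or negative discrimination) or by poor IRT parameter estimates would be removed before the final score estimation, which is the workflow the workshop walks through.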