Test Item Analysis: Removing bad items before score estimation
Abstract
Analysis of test questions before score creation is a key psychometric process that ensures the score best estimates a person’s ability. Classical test theory (CTT) creates test scores based on the block of items in a test; it typically removes items with too high or too low difficulty, and items with zero or negative discrimination, before computing total scores. Item response theory (IRT) estimates item characteristics on a latent ability scale, independent of which combination of items is in a test. IRT estimates item difficulty and item discrimination so that poorly performing items can be identified and removed. There are three main IRT models, and the differences between the Rasch, 2PL, and 3PL models will be discussed. This workshop demonstrates how to obtain CTT and IRT values for any dichotomously scored test containing multiple-choice or short-answer questions. We will also use the AIC index to evaluate which model best fits the data. The workshop uses the free software R and the free RStudio interface. You will need your own device with R and RStudio installed. Please install the following packages before the workshop: psych, ltm, mirt.
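As a preview of the workflow, the sketch below runs a CTT item analysis and fits the Rasch, 2PL, and 3PL models with the packages named above, using the LSAT example data bundled with ltm (the dataset and the exact calls are illustrative choices, not a prescribed workshop script; your own dichotomously scored data would be substituted):

```r
library(ltm)    # IRT models: rasch(), ltm(), tpm(); also descript() for item stats
library(psych)  # CTT item analysis: alpha()

data(LSAT)      # 1000 persons x 5 dichotomous (0/1) items, bundled with ltm

# CTT: item difficulty (proportion correct) and item-total discrimination
descript(LSAT)        # proportions correct and biserial correlations per item
alpha(LSAT)           # reliability with item-total (discrimination) statistics

# IRT: fit the three main models on the latent ability scale
fit.rasch <- rasch(LSAT)        # Rasch/1PL: common discrimination across items
fit.2pl   <- ltm(LSAT ~ z1)     # 2PL: item-specific discrimination
fit.3pl   <- tpm(LSAT)          # 3PL: adds a lower-asymptote (guessing) parameter

# Compare model fit with AIC (lower is better)
AIC(fit.rasch); AIC(fit.2pl); AIC(fit.3pl)
```

Items with extreme difficulty or near-zero discrimination in the CTT output, or with poor parameter estimates in the IRT fits, are candidates for removal before final score estimation.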