Abstract:
The development of knowledge depends, in part, on obtaining theoretically structured self-report data. Since the introduction of the balanced, 5-point agreement rating scale and its summated scoring logic (Likert, 1932), researchers have often adopted this approach because of its simplicity. Every response option system influences survey takers: respondents tend to assume that the response options have been intelligently selected to frame typical attitudes, and they use that information to guide their responses (Schwarz, 1999). Hence, response options can elicit unintended response styles (e.g., central tendency, social desirability; OECD, 2013) or be subject to cultural norms (Shulruf, Hattie, & Dixon, 2008). Unsurprisingly, all response option systems have strengths and weaknesses.
Problems in response option design can contribute to (1) low within-item discrimination among respondents, (2) challenges in applying parametric statistics, (3) challenges in interpreting scale sums or means, and (4) obstacles arising from participant response styles. Careful attention to response scale options can produce robust item variance, minimise ceiling and floor effects, and increase scale reliability and internal consistency.
In this chapter, we will address a number of common issues in the design of rating scales, including:
• Number of options presented (i.e., length of response scale),
• Labelling of options,
• Semantic values for anchor labels,
• Use of neutral or “don’t know” responses,
• Use of odd vs. even number of options,
• Directional packing vs. balanced options, and
• Capture of frequencies and quantities.