Abstract:
This article explores interrater reliability and rater effects in performance ratings at the senior-executive level. Prior studies have shown that substantial rater effects undermine the validity of multi-source ratings, but it is unclear whether these effects hold at the senior-executive level. We present a study of 189 senior executives in New Zealand and Australia, whose performance was rated by an average of 4.23 raters drawn from superiors, peers, and subordinates. Intraclass correlation coefficients revealed strong rater effects, and a multitrait-multimethod (MTMM) analysis showed that these effects were attributable to individual raters rather than to rater source (i.e. superior, peer, or subordinate). The findings suggest that it may be unwise to aggregate performance ratings at the senior-executive level, or to use such ratings, whether aggregated or from a single rater, to make critical decisions.
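To illustrate the kind of statistic the abstract refers to, the sketch below computes a one-way random-effects intraclass correlation, ICC(1), from scratch. The data here are hypothetical toy ratings, not the study's data; the formula ICC(1) = (MSB − MSW) / (MSB + (k − 1)·MSW) is the standard one-way ANOVA form, where MSB is the between-ratee mean square and MSW the within-ratee (rater-disagreement) mean square.

```python
def icc1(ratings):
    """One-way random-effects ICC(1).

    ratings: list of per-ratee rating lists, each of equal length k
    (k = raters per ratee). Hypothetical illustration only.
    """
    n = len(ratings)      # number of ratees
    k = len(ratings[0])   # raters per ratee
    grand = sum(sum(r) for r in ratings) / (n * k)
    # Between-ratee mean square: variance of ratee means around the grand mean
    msb = k * sum((sum(r) / k - grand) ** 2 for r in ratings) / (n - 1)
    # Within-ratee mean square: rater disagreement about the same ratee
    msw = sum((x - sum(r) / k) ** 2 for r in ratings for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# When raters disagree heavily, ICC(1) is low (even negative):
# little of the rating variance reflects the ratee.
noisy = [[5, 1, 4], [2, 5, 1], [3, 2, 5], [1, 4, 2]]
consistent = [[5, 5, 4], [2, 1, 2], [4, 4, 5], [1, 2, 1]]
print(icc1(noisy), icc1(consistent))
```

A low ICC(1), as in the `noisy` set, is what "strong rater effects" means in practice: most of the variance in scores comes from who is doing the rating, not who is being rated, which is why aggregating such ratings can be misleading.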