Abstract:
Recent years have witnessed a surge of interest in exploring diagnostic language assessments from various perspectives. This study, undertaken in the context of the Diagnostic English Language Needs Assessment (DELNA), a post-entry language assessment at the University of Auckland, proposes a design for a more diagnostic approach to the reading section of DELNA, which currently yields only a single overall score. The design draws on both theoretical and empirical evidence. In accordance with a general framework for assessment development, a needs analysis was conducted to understand the target domain of academic reading through surveys of students and language teachers at the university. This led to the identification of a wide range of reading and reading-related subskills. Factor analyses of the survey data also revealed a number of theoretically meaningful subdomains of academic reading. In parallel, a sample of test tasks from the current forms of the DELNA reading instrument was investigated by means of both expert judgement and student verbal protocol analyses. A comparison of the results of these test task analyses with those of the needs analysis revealed a substantial gap between students’ needs and what the current DELNA reading test tasks appear to assess. Comparing experts’ judgements of item content with students’ actual test-taking processes also yielded a number of practical implications for designing tasks to measure different reading subskills. Drawing on information from these different sources, and taking into account practical constraints, the thesis presents a design for a diagnostic academic reading assessment that is applicable to DELNA and to other similar assessment programmes.