Abstract:
The current international pandemic has precipitated great interest in using technology to evaluate learning and to quality-assure assessments. Administrative purposes (e.g., scholarship awarding, graduation, and certification) carry different expectations than formative, diagnostic, or educational uses of assessment. Teachers and learners need assessment to identify strengths and weaknesses and to point toward resources for improved outcomes, something that total-score and rank-order reporting does poorly. This tension between summative and formative expectations challenges what technology must do to assess learning appropriately. Related to these contrasting purposes are the consequences attached to assessment results. While society is relatively relaxed about the consequences students experience for their performance (e.g., Grade A or Fail), using such information to judge the value of educational institutions or teachers has generally proven a failure. Assuring the authenticity or validity of performance in online assessment through online proctoring can be both invasive of privacy and inaccurate. Thus, establishing the appropriate level of consequence for what may be a technological failure is important. Rapid deployment of assessment technologies depends on robust infrastructure and equitable access and opportunity. Many societies have not established robust broadband and hardware provision for all; in such contexts, computer-based assessments are fundamentally biased.
In this talk, I will report how New Zealand developed and deployed, first as a computer-assisted and then as a fully online system, a testing programme for reading, writing, and mathematics in compulsory schooling. The system supports diagnostic and formative purposes through graphical reports that identify who needs to be taught what next. At the same time, it provides robust normative information related to curriculum expectations and grade norms so that accountability requirements can be met. These lessons speak to the challenges facing higher education in a pandemic era.