Investigating the comparative validity of computer- and paper-based writing tests and differences in impact on EFL test-takers and raters

dc.contributor.advisor East, Martin en
dc.contributor.advisor Tolosa, Constanza en
dc.contributor.author Guapacha Chamorro, Maria en
dc.date.accessioned 2020-10-15T02:08:22Z
dc.date.available 2020-10-15T02:08:22Z
dc.date.issued 2020 en
dc.identifier.uri http://hdl.handle.net/2292/53273
dc.description.abstract This study investigates the validity and impact of computer-based (CB) and paper-based (PB) writing tests for the direct assessment of second language (L2) writing. As the use of computers in academic writing and standardised language testing increases, it is necessary to examine whether CB tests are comparable to the traditional PB writing tests typically used in the language classroom for measuring students' academic writing ability. Language teachers and institutions need to examine the assessment methods used to prepare and assess their students. Likewise, English as a foreign language (EFL) university students need to be aware of the advantages, disadvantages and impacts of both assessment modes on their performance. Based on a sociocognitive framework for validating writing test performance (Shaw & Weir, 2007), the present study investigated the comparative validity of CB and PB writing tests and differences in their impact on EFL test-takers and raters. The focus was on identifying whether the two test modes, CB and PB, differed in how they affected university students' cognitive processes, performance and test-mode preferences, as well as raters' scorings, perceptions and test-mode preferences. The study was conducted in Colombia with 38 EFL pre-service language teachers from two intact classes (as test-takers) and three language teachers (as raters) as participants. The study employed a mixed-methods design; data were collected through questionnaires, scorings of written texts and interviews. Additional evidence of the impact of text format (i.e. handwritten versus typed texts) on raters' scorings was gathered by comparing scores awarded to original handwritten and typed texts with scores awarded to their transcribed versions (N = 465). The findings indicate that the writing medium matters in the direct assessment of L2 writing and potentially affects the writing construct, influencing EFL test-takers and raters in different ways. For students, the writing medium affected cognitive processes, performance (writing time and text length) and test-mode preferences. The CB mode enabled more revisions, whereas the PB mode triggered more elaborate initial planning. Writing time and text length differed across individuals; in the CB mode, students produced longer texts and spent less time on task. However, the writing medium did not affect students' overall scores or their scores for content, vocabulary and mechanics. The assessment delivery mode also influenced students' preferences, which appeared to be related to individual physical characteristics (handwriting and typing skills), psychological characteristics (learning and cognitive styles, comfort, concentration and personality type), experiential characteristics (familiarity with computers and with pen and paper) and reader awareness (legibility and readability of texts). For raters, the findings suggest that the writing medium affected scorings, perceptions and test-mode preferences. Despite high levels of inter-rater reliability for both handwritten and typed texts, raters consistently tended to award higher scores to handwritten texts, particularly for Organisation and Language Use. The writing medium affected raters' perceptions of text quality (Organisation, Language Use, Mechanics and text length), as evidenced in the interviews and the scorings. The assessment delivery mode also influenced raters' preferences for handwritten or typed texts. Differences in preferences appeared to be related to raters' individual physical characteristics (fatigue and visual impact), psychological characteristics (stress, personality type, cognitive style, concentration and attitudes), experiential characteristics (assessment experience with both types of texts) and writer awareness (whether test-takers are given a choice of assessment mode). The findings identify the advantages and limitations of both CB and PB tests, raise concerns about test validity and fairness, and have implications for assessment practice in choosing test delivery modes for EFL writing assessments. en
dc.publisher ResearchSpace@Auckland en
dc.relation.ispartof PhD Thesis - University of Auckland en
dc.relation.isreferencedby UoA99265323913702091 en
dc.rights Items in ResearchSpace are protected by copyright, with all rights reserved, unless otherwise indicated. en
dc.rights.uri https://researchspace.auckland.ac.nz/docs/uoa-docs/rights.htm en
dc.rights.uri http://creativecommons.org/licenses/by-nc-nd/3.0/nz/ en
dc.title Investigating the comparative validity of computer- and paper-based writing tests and differences in impact on EFL test-takers and raters en
dc.type Thesis en
thesis.degree.discipline Applied Linguistics en
thesis.degree.grantor The University of Auckland en
thesis.degree.level Doctoral en
thesis.degree.name PhD en
dc.date.updated 2020-09-25T00:06:03Z en
dc.rights.holder Copyright: The author en
dc.identifier.wikidata Q112952187

