Journal Article


Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria

Abstract

Unreliability in marking is well documented, yet few studies have investigated assessors’ detailed use of assessment criteria. This project used a form of Kelly’s Repertory Grid method to examine the characteristics that 24 experienced UK assessors notice when distinguishing between students’ performance in four contrasting subject disciplines: that is, their implicit assessment criteria. Variation in the choice, ranking and scoring of criteria was evident. Inspection of the individual construct scores in a sub-sample of academic historians revealed five factors in the use of criteria that contribute to marking inconsistency. The results imply that, whilst more effective and social marking processes that encourage the sharing of standards within institutions and disciplinary communities may help align standards, assessment decisions at this level are so complex, intuitive and tacit that variability is inevitable. The paper concludes that universities should be more honest with themselves and with students, actively helping students to understand that applying assessment criteria is a complex judgement and that there is rarely an incontestable interpretation of their meaning.

Authors

Den Outer, B
Bloxham, S
Hudson, J
Price, M

Oxford Brookes departments

Faculty of Business / Department of Business and Management

Dates

Year of publication: 2015
Date of RADAR deposit: 2016-09-09



Related resources

This RADAR resource is the Accepted Manuscript of ‘Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria’.

Details

  • Collection: Outputs
  • Version: 1
  • Status: Live
  • Views (since Sept 2022): 552