Assessment for learning - video transcript

My name is Greg Benfield. In this presentation I'm going to talk you through some of the key ideas about improving the effectiveness of assessment and feedback. I will have to leave it to you to decide how to interpret these ideas in relation to your context, by which I mean the subject you teach, the department and institution you are in, and whether your course is delivered primarily by traditional face-to-face methods or substantially online.

These are the intended learning outcomes for this session. We focus on current issues in assessment and feedback and on some practical issues for your teaching.

Many of the ideas in this presentation come from the work of the ASKe Centre for Excellence in Teaching and Learning. I would urge you to read the ASKe book on this topic if you want to delve into it more deeply.

There is an extensive bibliography on assessment and feedback in the course VLE site.

Let’s start with assessment criteria. Starting probably in the late 1990s there was a push to improve assessment standards in higher education by making assessment criteria explicit. No doubt you are familiar with the type of criteria grid shown on this slide. Before then, such grids were quite rare. The core idea is that if you can clarify and write down the criteria on which a piece of student work will be assessed, then this should improve inter-marker reliability and should also help student learning, by making clear to students the qualities that will be looked for in their work. Clearly, this notion is only going to work if the assessment criteria that one develops for a given assignment are closely related to the learning outcomes of the course as a whole. Equally clearly, students need to know about the assessment criteria before they begin working on their assignments, or they can't know what to aim for. Colleagues here at Brookes – Chris Rust, Margaret Price and Berry O'Donovan in particular – researched the impact of a large-scale implementation of explicit assessment criteria in the Business School at Brookes over three or four years in the early 2000s. They found that although students appreciated knowing the assessment criteria, there was no significant improvement in student achievement as a result.

If you think about it, this makes sense. As subsequent work by these colleagues and others has shown, understanding how to interpret and apply assessment criteria is very complex. We academics gain and refine our understanding of the criteria we use in practice, by marking a wide variety of student work, in the process being exposed to a wide range of responses to the same issue or problem. It follows that students will only gain an understanding of assessment criteria through a similar process of engaging with the criteria in actual marking exercises.

For example, in an increasing number of modules at Oxford Brookes we run marking exercises using sample pieces of student work. Students are given some sample pieces of work to mark in their own time, and in a subsequent workshop they discuss their marks and compare them with the marks the tutor actually gave. This helps them gain a richer understanding of what we mean when our assessment criteria refer to “critical analysis” and “using evidence”, to name just a couple of examples of things that might be interpreted very differently in different subjects, tasks and levels.

Okay, we turn now to some definitions. Frequently we refer to two basic kinds of assessment in higher education. Formative assessment, now often referred to as assessment for learning, is assessment whose essential purpose is helping students to learn. Formative assessment usually does not have marks attached to it. Summative assessment, now often referred to as assessment of learning, is assessment whose primary purpose is to judge how much has been learnt. Summative assessment usually comes at the end of a period of learning; it focuses on judging performance, on grading, on differentiating between students, and usually carries marks. One of the things the research is very clear about is that summative assessment tends to be of limited or even no use for feedback on learning.

Now, if a course has an over-emphasis on summative assessment then a series of consequences may follow. Too many high-risk assessments may encourage students to adopt surface or strategic approaches to their learning. They may also encourage students to adopt an atomised approach to each assessment, so that they fail to make the links between what they have learned before and what they are trying to do now. An over-emphasis on summative assessment can encourage students to play it safe and avoid risk-taking. As I've already mentioned, summative assessments tend not to provide useful feedback, and, especially early in a course, failure can seriously damage self-efficacy. And of course over-emphasising summative assessment can also be overly time-consuming for staff.

I want to argue that in general in higher education we need to shift the balance of assessment away from summative and towards formative assessment: assignments whose purpose is to help students learn and give them feedback. Studies consistently point to assessment, and especially feedback, as the areas of their university experience that students across the sector are most dissatisfied with. There is an abundance of research going back many decades showing the importance of feedback to learning. This slide and the next one point to a few of these studies. Learners need effective feedback in order to learn.

We also know that students who are struggling in their courses, for whatever reason, have the most to gain from improvements in our feedback processes.

A slide two slides back contained a quote saying students are hungry for feedback, yet many teachers are familiar with the phenomenon of stacks of uncollected work, representing hours and hours of carefully constructed feedback, sitting outside their offices until they are eventually thrown away.

This apparent paradox is an expression of the complexity of feedback processes.

There are a variety of well-documented problems with feedback. This slide lists some of them, along with some papers to follow up on the dimensions of these problems, such as the many reasons why they occur. For example, why don't students read their feedback? Often this is not carelessness or lack of motivation; frequently they have very good reasons for not paying attention to it. We know, for example, that many students are only interested in the mark. They may believe that the feedback on this assignment will not help them on any subsequent assignments, and sometimes they are right about this. If they have frequently received unhelpful feedback in the past, this will colour their attitude to any subsequent feedback.

Here are some further examples, from the student perspective, of why students may not regard feedback as helpful. Problems like feedback being difficult to read, or uninterpretable because it is written in some kind of shorthand, are obvious ones; these are problems that electronic feedback can solve quite quickly. Similarly, having to wade through a mass of information about what you've done wrong or badly is unlikely to motivate you to think about how to improve. What I think is most important about this slide, though, is the idea that we should not consider written feedback a product sufficient in and of itself. If a student doesn't understand a concept or a skill, there is no reason to think that even the most carefully crafted paragraph or two is going to fix that. Feedback is part of a process, a dialogue. If we see feedback in this way, then we should stop investing so much time in trying to construct the perfect written feedback and concentrate more on highlighting for students the areas in their work that need improvement and facilitating ways to discuss those areas in more detail.

I want to recommend these seven principles of good feedback practice by Nicol and Macfarlane-Dick. They're based on an extensive review of the literature and provide a simple framework in which to evaluate feedback practice. The paper is very accessible and gives good examples of how to use the principles to evaluate different kinds of assessment activities for the power of the feedback processes they involve.

So, what can we do to improve feedback processes? Here are some ideas. We might concentrate on managing expectations. Frequently there can be a mismatch between our intentions in giving feedback to students and students' expectations of feedback. For example, sometimes we might just want to highlight one or two general areas for improvement that will be important subsequently in similar assignments; other times we may want to give more detailed feedback specific to this particular piece of work. Students can only detect and understand such different purposes if we are explicit about them, in other words if we precede our feedback by saying something like, "in this feedback I only comment on the two most important areas for further development", and so on. We can also help ourselves by noting that students frequently don't recognise feedback when it's occurring. So we can do things like say "my feedback to you on this is…" during a classroom discussion, so that students become more alert to the wide variety of informal feedback mechanisms they experience. And we can encourage students to read and use their feedback by requiring them to do so, for example by having them comment on how they have used the feedback from an early draft in their final coursework.

Very quickly then here are some summary ideas about how to ensure feedback is fit for purpose. First, when designing a formative assessment try to ensure that students have motive, opportunity and means to use feedback. Motive derives from a belief that the feedback will be of use in another piece of work. Opportunity comes from a subsequent task, perhaps a similar piece of work or another draft of this one, in which they can actually apply the suggestions given in the feedback. Means is the tricky one; it's about providing opportunities and time to discuss and understand how to act on the feedback given. Having a formative first draft stage followed by a final submission is a classic way of achieving this. Furthermore, we can devote precious marking time to providing feedback at the formative stage and just give the mark for the summative submission. As I said earlier, we should consider alternatives to written feedback, which can be time-consuming to create. Oral feedback, including audio feedback, might be more effective, because it can be more personal and can address motivational and affective issues more easily than written feedback. We should find ways to encourage classroom discussion about assignments, for example through using marking exercises with exemplar assignments and the class marking criteria, and through peer review. And it's important to help students make the links between the assignments they do in our modules and those they are going to do subsequently. This means we need to take a programme view of assessment and explain how the feedback we give them on assignments is relevant to assignments they will do in other parts of their course.

Pay careful attention to what is feasible in the feedback you are giving. Sometimes it's a complete waste of time writing anything other than something like “please see me about this” or “you need to practise this more, so please visit the following website where you will find more exercises to do”. Sometimes quick and dirty feedback is more effective than very detailed feedback. Generic feedback, where at the first opportunity after submission you tell students, "your assignments had the following strengths and weaknesses … did your work contain any of these?", can be very helpful and can also develop students' self-assessment ability. Regular computer-aided assessments can provide important, timely guidance to students about their understanding of course content. And finally, think about whether to withhold marks for a while and only give students the mark after they have had time to concentrate on the feedback.

Okay, I hope you found this useful. Here's a little checklist of practical things you might do to improve your own feedback practice. Pause the video here if you want to think about it or make notes.

There are two slides of references. Pause the video if you want to follow up on something.


About the course: Teaching Online Open Course (TOOC)