Video resource on evaluation, part 1

Selected and produced by Greg Benfield, OCSLD, Oxford Brookes University

Video part 1: http://www.youtube.com/watch?v=qq8gejFFPNY

The end of the video gives this direction:

Now watch the evaluation video resource part 2.

Transcript

I am Greg Benfield. This is the first part of two audio-visual resources introducing the topic of evaluation, more specifically educational evaluation. Before going on, let me just give you some direction about using the resources on this topic – the ones you'll find in the resources wiki. I would recommend that if you read nothing else you at least read two things: Graham Gibbs' paper called Dimensions of Quality and the Hounsell book chapter Evaluating Courses and Teaching.

Before we narrow down to the topic, evaluating the effectiveness of our teaching, let's clarify some terms and lay out some general principles.

Broadly speaking then, what do I mean by ‘educational evaluation’? Evaluation generally involves learning about a programme or course by gathering information about its effectiveness or quality, and that information gathering should be related to decision-making. In other words, educational evaluation is usually about systematically attempting to find out whether and how well some educational process is doing its job, and how it should be improved or modified. Educational process here encompasses a very broad range of possibilities, including course design, a teaching or learning technique, an assessment method, or an educational device like a piece of technology or an assessment tool like a new quiz or assignment. You will notice that this definition of evaluation does not include evaluating or making judgments about student learning. In the UK we refer to that type of evaluation as assessment, or student assessment.

The slide you are looking at is intended to just give you a sense of the potential breadth of educational evaluation. Trying to evaluate a course could well include several of the categories listed on this slide. For example, the effectiveness of a course is very likely to be influenced by resources such as those you see in the top left hand side of the slide; things like the research interests and expertise of the teaching staff, the cost of providing teaching rooms and resources, and the methods and effectiveness of supporting students outside as well as inside of class. Equally important, especially from the student experience perspective, are things like those appearing on the bottom right hand side of the slide, things like whether and how well the course attends to student needs, the clarity and currency of the course's intended learning outcomes, and the assessment strategies that are used to support and monitor student learning.

Now, as I've been talking, some of you may have rightly been asking, what does he mean by ‘effectiveness’? Evaluation necessarily involves notions of quality standards. In education there are myriad such measures of quality. Graham Gibbs' paper examines some of these common measures of quality by classifying them according to Biggs's 3 Ps: presage, or the things that are in place before students start learning; process, the things that affect student learning as it is happening; and product, or the outcomes of the learning. This slide mentions just a few of the common measures of quality used in each category.

I've chosen this quote by Graham Gibbs as an example of one of the complexities, the difficulties, of carrying out educational evaluation. These four criteria -- class size, student effort, who teaches, and quantity and quality of feedback -- are according to Gibbs the best predictors of student gain through engagement with their course. What Gibbs is concerned about with these four indicators of quality in the UK is that our commonly used quality assurance processes do not normally gather good data about them. Likewise, although instruments like the National Student Survey in the UK and the Course Experience Questionnaire in Australia purport to seek information about some of these things, they shed little light on these four criteria because they measure perceptions of student satisfaction. They do not measure engagement in the learning process or the value added students obtain from the resources they use at university.

So, one of the chief evaluation design issues concerns sources of evidence. We will come back to this idea again in the next section, but for now I just want to remind you of Brookfield's four critically reflective lenses, an idea you met in the first session. Almost all reflective practice will involve the first lens or source of evidence, one's own experiences. But what Brookfield's four lenses emphasise is the need to seek confirmation of one's own perceptions from a range of other sources. In general, I think we can say that good reflective practice or educational evaluation should involve using at least three of these four lenses, preferably all four. Because educational variables are many and they interact with each other in complex ways, we usually seek multiple sources of evidence. To give an example, student feedback in an end-of-module evaluation might suggest there is a problem with one of their assignments being invalid (not assessing the intended outcomes) or requiring more of them than was justified at their current level. But we are more likely to have confidence in such an inference if, in addition to this feedback, the grades on the assignment were abnormally low. And we would have even more confidence if colleagues who had previously taught the topics involved, or perhaps the external examiner, thought the assignment was too hard.

It is hard to overstate the importance of using theory in educational evaluation. One thing that theory does is clarify, both for the evaluator and their intended audience, what is meant by ‘quality’ in the given context. If, for example, we are trying to judge the effectiveness of some learning activities being designed within a course, then explicitly adopting a particular learning theory such as constructivism or experiential learning, each of which has well known and understood features, allows the evaluator to test whether those features are present in practice. To take another example, one might use the well-known course design framework of constructive alignment as an evaluative principle. You might ask questions like, 'how well aligned are learning outcomes and assessments in the course?', or 'how well aligned are learning activities and assessments?', and so on.

What I'm trying to suggest here is that there are quite a lot of frameworks or sets of principles of good practice in learning and teaching out there in the literature. If you can find one that is particularly suitable for your teaching context and that accords well with the values and principles you adhere to, then it is to your advantage to use it as a theoretical underpinning to your evaluation. It will help you to interrogate the practice you are examining and, perhaps as importantly, it will help you communicate your analytical approach and your findings and conclusions to your intended audience.

By way of an example, take a break from listening to me and have a look at the Graham et al. paper. You'll find it in the resources wiki. It describes how some online courses were evaluated using Chickering and Gamson's well-known seven principles of good practice in undergraduate education. See if you agree with me that, even if you are not familiar with Chickering and Gamson's seven principles, or with the online courses in question, the use of the seven principles as a theoretical framework makes very clear to the reader the dimensions of quality being used to analyse these online courses.

This is the end of part 1.