Video resource on evaluation, part 2

Selected and produced by Greg Benfield, OCSLD, Oxford Brookes University

Video part 2: http://www.youtube.com/watch?v=a8GPDsWuqnk

When the video instructs you to pause, use the links to the teaching examples (listed in the resources wiki for this topic).

The teaching examples will open in new windows, so you don't lose your place in the video. (If you do lose your place, resume at 1:35.)

Transcript

This is the second of a two-part audio-visual introduction to evaluating our teaching. In this section I want to zero in on how we might gather evidence to evaluate our own teaching. To get started, I'd like to suggest a little exercise. Pause this video and choose two different teaching contexts from the video selections on your screen to look at for a moment (both are listed in the resources wiki for this topic). My suggestion is that you choose one lecture and one tutorial. You don't need to watch all of each video. Watch a few minutes of each one, perhaps fast-forwarding through them so that you can be sure you've got the gist of what's going on. As you watch, ask yourself: am I looking at an example of good teaching in this context (lecture or tutorial)? How would I know? What should I be looking for? What else do I need to know before I can answer such questions? Other than me, the external observer, who else has salient information about whether this teaching and learning situation is going well? Pause this video now and go off and take a look at a couple of those teaching situations. I'll still be here when you get back.

So, welcome back. With any luck, watching those videos has given you lots of ideas about the dimensions of teaching quality in a particular situation. You probably thought about things like presentational techniques and how well they were executed, and you might well have asked yourself whether the video camera's viewpoint is sufficient for judging such criteria. Hopefully you will also have thought about needing to know things like where in the course this session fits; the intended learning outcomes, both for the session and for the larger course within which it sits; the relevance of this session to any high-stakes assessments that students are required to take; and so on. In other words, even when evaluating a relatively simple and short teaching session, it is crucial to know about the surrounding context, and it is absolutely imperative to have a clear and explicit focus on the particular dimensions of quality being evaluated.

Achieving this clarity and explicitness of focus really comes down to crafting good evaluation questions. The better and more specific your questions, the easier it will be to determine how to gather evidence and to know when you are in a position to answer them, that is, to draw conclusions. To help with this, my colleagues and I frequently use an evaluation matrix like the one you see on your screen. This one is an early draft of a matrix we used to help plan the evaluation of some new online courses. Down the left-hand side are the core evaluation questions. In this draft they were: Why do students choose to study at a distance? What is the student experience of distance learning in the school? What is the staff experience of distance learning in the school? How can we improve support for distance learners? How can we improve staff online tutoring skills?

Each column represents a different data collection method. In this example, the methods anticipated at this stage in the design of the evaluation were, from left to right: student feedback from end-of-module questionnaires; a survey of both students and staff on the courses; a further, separate staff survey; a staff focus group; and documents produced by the course team for the annual programme review. You'll notice that some cells are left blank, signifying that the data collection technique is not expected to address the evaluation question for that row, while other cells contain even more specific questions, developed to spell out the purpose of that technique and its relationship to the associated evaluation question. I don't want to suggest that evaluating the effectiveness of a particular seminar or lecture needs this level of rigour or sophistication. I most certainly do want to suggest that good evaluation, whether we are talking about individual reflective practice or larger-scale systematic course evaluation, relies on formulating specific evaluative questions.
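Since the slide itself may be hard to read on screen, here is a skeletal sketch of the matrix structure just described. The actual draft, including its cell-level questions, appears only in the video, so the cells below are deliberately left empty:

 Evaluation question          | Module | Student &    | Staff  | Staff focus | Annual review
                              | Qs     | staff survey | survey | group       | documents
 -----------------------------+--------+--------------+--------+-------------+--------------
 Why study at a distance?     |        |              |        |             |
 Student experience?          |        |              |        |             |
 Staff experience?            |        |              |        |             |
 Improving learner support?   |        |              |        |             |
 Improving tutoring skills?   |        |              |        |             |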

In part 1 I reminded you of Brookfield's four critically reflective lenses and the importance of seeking multiple sources of evidence. The Hounsell chapter listed in the resources wiki for this topic is very good on this, particularly in detailing ways of capturing data. On this slide you can see the most commonly used sources of data in educational evaluation. Surveys and questionnaires, interviews and focus groups are very common. So too is the use of student performance data, and artefacts of student work such as online discussions or assignments. But don't forget the huge variety of everyday data that can be drawn on, including informal things like common room discussions, impromptu conversations, and the other examples mentioned by tutors and recorded on this slide.

Educational evaluation needs to be tightly focused, whether it's large-scale or your own systematic reflective practice. It's very easy to waste a lot of time collecting irrelevant information if you haven't devoted enough time to focusing the field of your investigation. I have already discussed how good evaluation questions help achieve this focus. The other thing to say is that timing is an important variable that is often not taken into account. To state the obvious, the actors within the educational context, whether they are students, tutors or administrative staff, will have different perceptions of how well things are going and what's important, depending on what's happening in the course and where in the course they are. It's wise to think about things like mid-semester temperature readings, performance on formative assessments, and opportunities to gather quick-and-dirty data about new learning activities as they happen, rather than waiting until the end of the course.

If you have time, it's worth taking a look at a case study of an educational evaluation. This TESTA project case study is a good example. It's short to read, and it illustrates things like the use of multiple sources of evidence and the surprising things one can learn about how to improve a course that, on the surface, seems to be going quite well.

One last point. It should be obvious, but it's important to critically evaluate one's evaluation design. We should always be asking: Is the focus sufficiently sharp? Have I identified the most important sources of evidence? Are those sources sufficiently persuasive? Is there something else I should be looking at? And, of course, will I be able to do anything about this?

So, to conclude. First, try only to gather information that is useful and that you intend to use. Second, draw on multiple sources of evidence. Third, inform your sources, whether they are your students or your colleagues, about what you did with the information they provided. And finally, evaluating our teaching should be purposeful: we always want to put ourselves in the position of being able to say, "the changes I made were…."

And that’s the end of this section.