Week 3: Evaluating learning designs

In week 3, we will showcase your group design and gain feedback from other members of the course. You are asked to set the learning activity in a wider course context and consider some of the broader issues of implementation, such as the learner experience at a course level. We also start to construct an evaluation tool to support further development of a learning design and lead into your own design work for the final week.

The aims of this week therefore are to:

  1. Showcase your group's learning activity design and gain feedback from other members of the course
  2. Set the learning activity in a wider course context and consider some of the broader issues of implementation, such as the learner experience at a course level
  3. Start to construct an evaluation tool to support further development of a learning design
  4. Lead into your own design work for the final week

Learning activity design, as we have said previously, is a mechanism to enable us to represent what we think is happening in the learning situation. It could be equated to a lesson plan for conventional teaching and generally operates around what is called a ‘unit of learning’. Fowler et al. describe this as “a boundary concept involving a defined set of actors (or roles), activities, methods and resources, but critically one that cannot be decomposed into a smaller unit. The Unit of Learning (UOL) can however be aggregated into larger units (e.g. from lectures to courses).” (p.130).

Ensuring that these units of learning provide a coherent learning experience is central to any educational experience. Think about how you achieve this in the courses you currently teach, and you will probably find that there are lynchpins such as the intended learning outcomes, or preparation for the assessment. We are well used to learning outcomes, and the assessment of them, giving structure to week-by-week classroom-based learning, and every time the class convenes there is an opportunity to reiterate both, to reflect on what has been learnt and what is to come, and hence to provide some scaffolding for learners. How does this translate into an online environment for learning?

In a recent guide on the quality of online learning, Tony Bates (2012) reminds us of further contextual aspects that shape our courses, noting that "despite the concept of academic freedom, the structure of face-to-face teaching is to a large extent almost predetermined by institutional and organizational requirements." The institutional and organizational requirements he cites include programme approval and review processes, the relationship between credits and contact time, the length of a semester, instructor-to-student ratios, the availability of classrooms or lab spaces, and the time and location of examinations. In a post-compulsory education context, Bates asks us to consider which of these apply, or apply differently, in an online course. It is worth pausing to consider whether and how the quality processes in your institution fit the needs of quality assurance and quality enhancement for online courses.

Quality is the focus of our attention this week. We will consider quality through the lens of evaluation, starting by asking what kind of approach to evaluation is productive for evaluating learning activity design. You are presumably already familiar with evaluating student learning and student performance, and with gathering feedback from students on their satisfaction with your courses. Some of the approaches you already use will be helpful in evaluating your learning designs once they are enacted. The feedback you receive from your learners will alert you to areas of the design that could be improved. Being open to student feedback and responding when things are unclear, rectifying missing links or adjusting settings, is one way in which you can continuously improve your online course (and thank you for pointing these things out as we have gone along in this course).

Using evaluative tools

At the early stages of course design, however, the activities will not yet be realised, and you may not yet have a cohort of learners available for dialogue, so we need an approach that can help at this early stage of concept design. One of the approaches we tried out in week 1 was to take a 'best practice' rubric and apply it. We referred to the Quality Matters rubric, but others could also be applied, such as the CSU Chico Rubric for Online Instruction (http://www.csuchico.edu/celt/roi/index.shtml).

What was noticeable from your use of a rubric on this course was the difference in the ratings that individuals gave to the same course. Does this, therefore, make the instrument unreliable as a quality tool? How do the results help the designers or educators improve the course? The most popular course (in terms of how many of you visited it) was the Open University course on 'Pain and Aspirin'. Its popularity comes perhaps from the ease of access to the course, or maybe the appeal of the course title! What is interesting here, though, is why some of you rated 'learner interaction and engagement' at the highest level and others at the lowest. The same goes for 'course introduction and overview', 'assessment and measurement', and so forth. Perhaps this discrepancy in ratings tells us more about the variation in learner responses and expectations than it does about the quality of the course. For me, this highlights the importance of knowing and responding to your learners.
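
If you wanted to see this divergence at a glance, a small spreadsheet or script tabulating the group's scores per criterion would do. Here is a minimal, illustrative Python sketch; the criterion names are taken from the rubric discussion above, but the reviewer scores are invented for the purpose of the example:

    # Minimal sketch: tabulating hypothetical rubric scores from three reviewers
    # of the same course, to see where their judgements diverge most.
    # The scores below are invented; substitute your group's own ratings.
    from statistics import mean

    ratings = {
        "Learner interaction and engagement": [5, 1, 3],
        "Course introduction and overview":   [4, 2, 5],
        "Assessment and measurement":         [3, 3, 4],
    }

    for criterion, scores in ratings.items():
        spread = max(scores) - min(scores)   # large spread = reviewers disagree
        print(f"{criterion}: mean {mean(scores):.1f}, spread {spread}")

A large spread on a criterion is a prompt for discussion among reviewers rather than evidence that the course, or the rubric, is at fault.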

The other evaluative prompt we gave you, also in the course visits wiki, was to consider, from the learners' perspective, 'what worked' and what did not in each of the courses. The responses here are brief, yet they already contain some suggestions for improvement which could be acted upon in revising the design of each course. This qualitative feedback therefore seems a richer source of evaluation feedback if we are concerned with developing a quality course.

Another method might be to develop a structured tool for feeding back on the learner experience. This could be along the lines of a heuristic evaluation tool, widely used in usability design, adapting what is essentially an expert walk-through. It would produce a checklist that someone in the role of a learner on the course could apply, noting down when certain heuristics (i.e. rules of thumb or expectations) were not adhered to, and offering suggestions on how to improve. Mor (2012) offers a table that can be adapted for this purpose, once a set of heuristics is agreed:

Location: Where was the issue noticed?

Issue: Describe the issue that you noticed.

Heuristic: Which heuristic does it violate?

Severity: How bad is it (0-5)? 0 = not a problem, 5 = catastrophic, show-stopper.

Recommended action (optional): Suggest how to rectify this issue.

So, what would be suitable heuristics to apply to your own online learning designs? A very widely used set of heuristics for online learning draws on the work of Chickering and Gamson and their '7 Principles for Good Practice in Undergraduate Education' (1987). This has been taken up and adapted widely for online education, for example by Graham et al. (2001).

The 7 principles of good practice are:

  1. Encourages Contacts Between Students and Faculty
  2. Develops Reciprocity and Cooperation Among Students
  3. Uses Active Learning Techniques
  4. Gives Prompt Feedback
  5. Emphasizes Time on Task
  6. Communicates High Expectations
  7. Respects Diverse Talents and Ways of Learning
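
To make the walkthrough concrete, the Mor-style table above and these seven principles can be combined into a simple issue log. The following is a minimal, illustrative Python sketch, not a tool we use on this course: the field names mirror the table columns, the heuristics are the seven principles, and the example entry is invented.

    # Illustrative sketch of a heuristic walkthrough log.
    # Field names mirror the Mor (2012) table columns; the example issue is invented.
    from dataclasses import dataclass

    HEURISTICS = (
        "Encourages contacts between students and faculty",
        "Develops reciprocity and cooperation among students",
        "Uses active learning techniques",
        "Gives prompt feedback",
        "Emphasizes time on task",
        "Communicates high expectations",
        "Respects diverse talents and ways of learning",
    )

    @dataclass
    class Issue:
        location: str                 # Where was the issue noticed?
        issue: str                    # Describe the issue that you noticed
        heuristic: str                # Which heuristic does it violate?
        severity: int                 # How bad is it (0 = not a problem, 5 = catastrophic)?
        recommended_action: str = ""  # Suggest how to rectify this issue (optional)

        def __post_init__(self):
            if not 0 <= self.severity <= 5:
                raise ValueError("severity must be between 0 and 5")
            if self.heuristic not in HEURISTICS:
                raise ValueError("unknown heuristic")

    log = [
        Issue(
            location="Week 2, discussion forum",
            issue="No indication of when the tutor will respond to posts",
            heuristic=HEURISTICS[3],  # Gives prompt feedback
            severity=3,
            recommended_action="State an expected response time in the forum guidelines",
        ),
    ]

    # Report the most serious issues first when feeding back to the design team.
    for entry in sorted(log, key=lambda e: e.severity, reverse=True):
        print(f"[{entry.severity}] {entry.location}: {entry.issue}")

Whether you capture this in code, a spreadsheet or a wiki table matters far less than agreeing the heuristics and the severity scale before the walkthrough begins.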

A similar set of question prompts is provided in the course textbook, in Appendix 4, 'Learning Activity Design: a checklist'. Initially we are going to ask you to use these prompts to evaluate the other group's learning activity design in a discussion that sets the design in a broader context, before setting about designing an evaluation tool for us to use in week 4, when we provide feedback on your own course designs.

Task 1: Presenting groupwork outputs (30 mins)

You have spent a week working in your groups to construct a learning activity design that meets a specified requirement. We will start this week by sharing the outputs of this work and becoming familiar with some of the possible perspectives we might adopt in evaluating them.

As soon as possible in the week, and no later than Wednesday, post a link to your group's learning activity design in the Presentations discussion topic. Make sure that all of us are able to go directly to the final learning activity design. This might mean that you attach a file to the posting, or that you direct us to a wiki page or Google page that contains just the finished article.

Task 2: Evaluating learning activity designs (2 hours)

Now you are going to join the design team for the other group. Once you have viewed the other group's learning activity design, we want you to consider some questions you may have for your new colleagues about it. Use Appendix 4, 'Learning Activity Design: a checklist' (pp. 230-231), to prompt your probing into the design and the decisions behind it, and ask a question in the Presentations discussion topic. If there are questions in the checklist that are clearly answered by the design, then let the group know that this is the case, from your perspective. You may wish to ask why particular tools were chosen, what the expectations of the tutor are, or perhaps how learners will get feedback on their progress. Other questions may be about what further resources might enhance the learning activity.

Adapt at least two questions from the checklist and pose them to the other group. Back in your original group, respond to at least one of the questions raised. We will use these discussions to start to extend the design into the wider course context. Remember that this activity represents one week of a 12-week course. Using this as an example, how does your learning activity design meet the learners' needs, and at what stage in their learning?

Task 3: Developing an evaluation tool (1.5 hours)

Considering the different approaches to evaluation that we have experimented with in previous weeks, can we decide upon an evaluation tool to use in the final week of the course to evaluate your own learning designs? In the content for this week we have suggested a few approaches you could consider: a quality rubric, a heuristic checklist/walkthrough, a set of prompting questions, or a couple of more open questions. Maybe you have other tools that would be of interest, too. Use the discussion to argue the case for the evaluation tool you would like to use next week. The outcome of this discussion will be used to shape the feedback on your own designs, which will be shared next week, so focus on what you would like feedback on from other educators to help you in your own course design development.

Task 4: Planning your own learning activity design (1 hour)

Continue your personal log and start to plan a learning design for your own course. Begin to consider how you might present this to the rest of the group. Record the kinds of resources and tasks you would like to include in your own course, and why (this will be picked up in week 4).

Key readings this week (1 hour)

Background resources and references: