#ocTEL week 6: design, assessment and emergency stops

When you learn to drive, your instructor makes sure you know how to parallel park, do a three-point turn and perform an emergency stop. In other words, he gives you exactly what you need to get through the driving test.

But after you pass, you throw away much of what he’s taught you (does anyone still ‘feed the steering wheel’?) and develop your own (quite possibly bad) habits. You learn how to do some basic maintenance work on your car, how to edge out at a busy junction and when it’s safe to switch lanes at high speed on the motorway. In other words, the driving instructor has shown you how to pass the test, but he might not have prepared you fully for life on the road.

So if we design a course with assessment in mind, are we giving students the knowledge to pass an exam or building real skills? You’d hope it’s the latter, but I fear it’s often the former. Obviously you’ve got to make sure you pick the right assessment type early on; changing it after students have begun their studies would be a nightmare.

I thought this quotation from the JISC publication Effective Assessment in a Digital Age summed it up nicely:

Effective assessment designs take into account the longer-term purpose of learning: helping learners to become self-reliant, confident and able to make judgements about the quality of their own learning.

Here’s a summary of my thoughts on the suitability of three possible approaches to assessment for a postgraduate distance learning course:

Online MCQs

How does it align with learning outcomes?
  • Tests understanding of concepts, but not in depth
  • Doesn’t develop independence or collaborative skills
What feedback is provided?
  • Pre-prepared explanation of the correct answer; links back to course materials
Pros
  • Can be taken anywhere, anytime
  • Immediate, authoritative feedback from faculty
  • Requires minimal faculty intervention, so a broad range of materials can be developed
Cons
  • Limited scope for personalisation
  • Might encourage rote learning
  • Can it be delivered securely for summative assessment?
  • Is it relevant in the humanities/social sciences?

Wiki

How does it align with learning outcomes?
  • Develops independent learners who can collaborate with peers
What feedback is provided?
  • Peer feedback
Pros
  • Encourages deeper learning
  • Creates learning communities
Cons
  • Can it be used for subjects that have a defined right/wrong answer (e.g. accounting) or that assess individual creativity (e.g. product design)?
  • Peer feedback requires monitoring from faculty in case it’s inaccurate or inappropriate
  • Assessment is cohort-based, not self-paced

E-portfolio

How does it align with learning outcomes?
  • Develops self-directed learners who can respond to feedback
What feedback is provided?
  • Just-in-time feedback from faculty and/or peers
Pros
  • Gives students ownership of learning
  • Creates learning communities
  • Encourages independence
Cons
  • Peer feedback requires monitoring from faculty in case it’s inaccurate or inappropriate
  • Assessment is cohort-based, not self-paced

One footnote to this: the JISC publication is based on its own UK-based research, which detected high levels of ownership of devices that could assist with technology-enabled assessment. Whether you could make the same assumptions about access and digital literacy in other parts of the world is debatable.
