Online assessment: ‘will this be on the exam?’

One of the clearest memories from our trip to Japan many years ago is of the ticket office at a train station in Hiroshima. My sister-in-law, who was travelling with us, spoke some Japanese, but the question she needed to ask was quite complicated (something to do with whether our tickets were valid on the bullet trains), so at one point she asked for help in English.

‘Do you speak English?’ she asked. The friendly ticket seller nodded. She then asked her question, with the ticket seller nodding along and clearly understanding everything she was saying. The ticket seller then took out a piece of paper, wrote out the answer to the question in perfect English, and handed it over. When I asked why the ticket seller had written the answer instead of saying it, my sister-in-law explained that at the time the Japanese school system, which taught children English from a very young age, assessed only listening and writing skills in the final exam – there was no requirement to speak, either in class or in the exam. As a result, many Japanese people lacked the confidence to converse in English, particularly with a native speaker.

I’m telling this story because I’m about to study online assessment in the second semester of my MSc in digital education. Leedham (2009) says that students ‘largely study what is assessed, or more accurately, what they perceive the assessment system to require’ (p. 201), and the example of the Japanese ticket seller would seem to bear out that point.

My own experiences of assessment aren’t much better. I remember the panic of having 90% of my English degree hanging on the nine exams I sat in the space of ten days at the end of my final year. Surprisingly, my ability to memorise long quotations from the works of Walter Scott under extreme pressure didn’t do me many favours in the barren years of job-hunting that followed.

Traditional assessment methods, such as exams and essays, have many flaws (see the video below). There are modern alternatives – such as wikis, blogs and other multimodal writing forms – but do they solve the old problems, or introduce new ones?

I’m also wondering how realistic some of the aspirations people have for online assessment really are. For instance:

  • Rust (2007) said that assessment should aim to be ‘non-threatening and non-anxiety provoking’ (p. 230). Really?
  • Many new forms of writing require scaffolding from tutors – support guides, writing workshops and the like. If so, are they really sustainable at scale?
  • How do we make sure group assessment or self-rating is reliable, and doesn’t reward students who haven’t put in the work?
  • Is it realistic to expect tutors to have a meaningful feedback dialogue with their students, given that student numbers seem to be increasing while faculty time is squeezed?

I’ll probably look back at this post at the end of the semester and wonder how I could have been so naive. Anyway, as The Ramones once said: hey ho, let’s go.

References

Leedham, M. (2009) ‘From traditional essay to “Ready Steady Cook” presentation: reasons for innovative changes in assignments’, Active Learning in Higher Education, 10 (3), pp. 191–206.

Rust, C. (2007) ‘Towards a scholarship of assessment’, Assessment & Evaluation in Higher Education, 32 (2), pp. 229–237.

