Who is assessment for?

Homer Simpson sits an exam

Assessment is a huge part of what universities are. It provides evidence that programmes are rigorous and relevant, and can play a decisive role in how students approach their learning. Despite what Homer Simpson might think, most students want proof that they know their stuff; it’s one of the main reasons university chancellors aren’t crying themselves to sleep over the MOOC movement just yet.

But where assessment was once just a way for a university to rank students against its own academic standards, it now has to meet the needs of a wide range of stakeholders: cost-savvy students looking for value for money and the motivation to continue; governments trying to compete in a global economy; employers looking for transferable skills; university administrators keen to push up league tables.

Is it realistic to expect our current systems of assessment – predominantly exams and essays – to fulfil all of these roles? Peter Knight (2002) says that

Radical thinking is needed about what summative assessment is for, who it is for, what it can do, what it cannot do cheaply and what it ought not to be asked to do at all. (p. 284)

Knight goes on to suggest that we might need to take ‘a necessary backward step’ and only assess what we can measure in an affordable, reliable and fair way. Can online technologies help assessment to be all things to all people: rigorous yet scalable, personal yet cost-effective? Can it focus on both learning product and learning process? Can it produce graduates who are academically able and ready for the world of work? If not, we might need to start revising our expectations a bit.

There are also financial realities to consider. With the end of free university tuition (outside Scotland, at least), will the person who pays the tuition bill have the final say on what assessment (and, by implication, the university) eventually stands for – whether that’s the students themselves, their parents, sponsors or funding bodies?

References

Knight, P. T. (2002) ‘Summative assessment in higher education: practices in disarray’, Studies in Higher Education, 27 (3), pp. 275–286.


Online assessment: ‘will this be on the exam?’

One of my clearest memories of our trip to Japan many years ago comes from the ticket office at a train station in Hiroshima. My sister-in-law, who was travelling with us, spoke some Japanese, but the question she needed to ask was quite complicated (something to do with whether our tickets were valid on the bullet trains), so at one point she switched to English.

‘Do you speak English?’ she asked. The friendly ticket seller nodded. She then asked her question, with the ticket seller nodding along and clearly understanding everything she was saying. The ticket seller then took out a piece of paper, wrote out the answer in perfect English, and handed it to my sister-in-law. When I asked why the ticket seller had written the answer instead of saying it, my sister-in-law explained that at the time the Japanese school system, which taught children English from a very young age, only assessed listening and writing skills in the final exam – there was no requirement to speak, either in class or in the exam. As a result, many Japanese people lacked the confidence to converse in English, particularly with a native speaker.

I’m telling this story because I’m about to study online assessment in the second semester of my MSc in digital education. Leedham (2009) says that students ‘largely study what is assessed, or more accurately, what they perceive the assessment system to require’ (p. 201), and the example of the Japanese ticket seller would seem to bear out that point.

My own experiences of assessment aren’t much better. I remember the panic of having 90% of my English degree hanging on the nine exams I sat in the space of ten days at the end of my final year. Surprisingly, my ability to memorise long quotations from the works of Walter Scott under extreme pressure didn’t do me many favours in the barren years of job-hunting that followed.

Traditional assessment methods, such as exams and essays, have many flaws (see the video below). There are modern alternatives – such as wikis, blogs and other multimodal writing forms – but do they solve the old problems, or introduce new ones?

I’m also wondering how realistic some people’s aspirations for online assessment are. For instance:

  • Rust (2007) said that assessment should aim to be ‘non-threatening and non-anxiety provoking’ (p. 230). Really?
  • Many new forms of writing require scaffolding from tutors – e.g. support guides and writing workshops. In which case, are they really sustainable at scale?
  • How do we make sure group assessment or self-rating is reliable, and doesn’t reward students who haven’t put in the work?
  • Is it realistic to expect tutors to have a meaningful feedback dialogue with their students, given that student numbers seem to be increasing while faculty time is squeezed?

I’ll probably look back at this post at the end of the semester and wonder how I could have been so naive. Anyway, as The Ramones once said: hey ho, let’s go.

References

Leedham, M. (2009) ‘From traditional essay to “Ready Steady Cook” presentation: reasons for innovative changes in assignments’, Active Learning in Higher Education, 10 (3), pp. 191–206.

Rust, C. (2007) ‘Towards a scholarship of assessment’, Assessment & Evaluation in Higher Education, 32 (2), pp. 229–237.


Starting my MSc: responsible drinking and a fox in space

Fox in space

Second Life: me, as a fox, drinking coffee, in space. Not as much fun as it sounds.

This blog has been pretty quiet for the last few months. I’ve been busy studying ‘An Introduction to Digital Environments for Learning’, the first module of my part-time MSc in digital education at Edinburgh Uni.

It’s been great getting back into education. Compared to last time around, it’s amazing how much more you can learn as a father of three who gets to bed at a reasonable hour and drinks responsibly.

Anyway, the last 12 weeks have been pretty mind-blowing. In no particular order, here are the top five things I think I’ve learned:

  1. There’s no such thing as a ‘digital native’ or a ‘digital immigrant’. Contrary to a popular myth, people born after 1980 haven’t got oddly shaped brains just because they’ve grown up using digital technology. Universities’ widespread panic that they’ll be forced to ‘change or die’ by hordes of wild-eyed teenagers wielding smartphones has no basis in evidence.
  2. Technology and education can overlap. We shouldn’t start out with a traditional (face-to-face) model of education and then look for a piece of technology that allows us to ‘virtualise’ it. Technology can inform education too: for instance, look at how people across the world can collaborate through Hangouts and wikis.
  3. I. Hate. Second. Life. Is it a game? Is it a social network? Is it virtual reality? I’ve spent many hours in it, and I still haven’t got a clue what I’m doing. But maybe that’s just me.
  4. Everyone’s different. Some people learn through dialogue, and some learn in silence. Neither of these options is ‘good’ or ‘bad’, and universities have to enable choice rather than being prescriptive.
  5. Research, research, research. There’s an awful lot of hyperbole and conjecture around how people learn in the digital age. No one age group, region or subject area can be pigeonholed. The only way to keep learning relevant and usable is to research what works and what doesn’t with a particular group of students.

#ocTEL retrospective and looking forward

Anyone who looked at this blog before April (ok, so that’s probably my mum and me) might be wondering why a copy-editor and proofreader has been posting about something called ocTEL for the last few months.

As well as freelance editing, my day job is publishing manager for a large business school. Over the last year or two the focus of my job has moved from traditional publishing to looking at how people learn through technology, and what resources (published or otherwise) we could produce to help them. So I signed up for the open course in technology-enhanced learning (ocTEL), which is free and run by the Association for Learning Technology (ALT).

Looking back at my first post, apparently I signed up ‘to try and engage with a community of practice, to see what’s happening in the world of digital education and hopefully to get a few new ideas’. I’m not sure I’ve engaged fully with a community of practice, but the other two have definitely happened.

The things that worked well for me in ocTEL were:

  • The learning resources and the way I was guided through them (the ‘if you only do one thing this week’ option kept me engaged when otherwise I might have dropped out due to time pressures).
  • Creating a personal learning blog and bookmarking interesting resources via hyperlinks, which gives me a record that I can refer back to.
  • Connections with others – mostly by following interesting people on Twitter.
  • The provision of a broad range of communication options (forums, email, blog posts, social networking).
  • The webinars, which really put the learning resources in context and were useful ‘thought composting time’, to borrow Imogen Bertin’s nifty phrase. I watched most of them via the recordings, but managed to join in live in week 10.

These things could have gone better:

  • The initial email deluge as hundreds of delegates introduced themselves (and hundreds more complained about how many emails they were receiving).
  • The broad range of forum topics, with some students creating their own subtopics or groups, ended up giving the forums a slightly fragmented feel.
  • I agreed with the week 10 webinar attendees who suggested that a reading week in the middle of the course would have helped struggling students catch up.
  • The onus was on the students to create their own peer-group connections, but perhaps more could have been done to facilitate this – maybe a group assessment or project would have helped?

MOOCs (massive open online courses) have many downsides – mainly their messiness and the lack of a transferable qualification at the end. But ocTEL has opened up a route into technology-enhanced learning for me.

In September I’ll start a part-time MSc in digital education at the University of Edinburgh. I’m seeing this not so much as a career change as a natural development: the world of publishing is moving away from printed texts into e-books and mobile technology, and for many publishers the role of editor has now changed utterly (or been eliminated altogether). I’m hoping the MSc will enable me to create educational resources – whether they’re enhanced e-books, videos, online games or anything else – that are engaging, student-centred and based on sound principles of research.

Going back into education (albeit part-time) at the age of 36 is a bit of a daunting prospect, but exciting nonetheless.

I wonder if the student union still does 69p vodkas on a Friday night?


#ocTEL week 10: evaluating TEL

Designing evaluation in from the start gives you a chance to see how your TEL resources are being used, and also provides evidence that might help people to buy into what you’re doing (or tell you if you’ve got it totally wrong).

One of my objectives is to enhance our offering with some specific new applications (we won’t be redesigning entire programmes). These resources should help students understand concepts that they tend to find difficult, thereby improving student performance and retention rates.

For formative evaluation, we plan to run focus groups where we present prototypes to faculty and student groups. The Napier University guidelines on evaluating TEL give some good tips about how to structure these.

Here are some ideas about summative evaluation.

Quantitative

Students self-rating their understanding of a concept before and after using the new resources might be a better option than a simple 1–5 star rating system. Materials that are pitched at a very high level might get 5 stars from the most able students, but would be completely useless for those who are struggling. So rating understanding before and after would allow us to track the effect the resources are having, particularly for the people who need them most. This data could then be analysed in a number of ways: by study route, country, language and so on, or assessed against individual or cohort exam performance.
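As a rough illustration of the kind of analysis I mean, here’s a minimal Python sketch. The file name and column names are hypothetical – they assume an export with one row per student and a 1–5 self-rating of understanding before and after using the resources.

```python
import pandas as pd

# Hypothetical survey export: one row per student, with a 1-5 self-rating
# of understanding before and after using the new resources.
df = pd.read_csv("self_ratings.csv")  # assumed columns: student_id, study_route, country, before, after

# The change in self-rated understanding matters more than the raw score.
df["gain"] = df["after"] - df["before"]

# Break the average gain down by cohort attributes.
print(df.groupby("study_route")["gain"].agg(["mean", "count"]))
print(df.groupby("country")["gain"].agg(["mean", "count"]))

# Are the resources helping the people who need them most? Compare the
# average gain across bands of the initial self-rating.
print(df.groupby("before")["gain"].mean())
```

Exam performance could then be joined on student_id (or aggregated per cohort) to see whether self-reported gains line up with actual results.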

Qualitative

Student questionnaires would allow us to get feedback from a large sample of students, but this type of survey is always limited to the questions we choose to ask. Focus groups might be useful too, so that students can raise the points that matter to them. Areas to cover include the design, content, academic level and ease of use (including accessibility, particularly for anyone with support needs). The week 10 webinar gave good tips about timing these – ideally just after students sit their exams, when they have more time.

We should also interview faculty to find out how easy the resources are to create, how much support they have to give and how helpful they think the resources are.

So that brings me to the end of ocTEL. Next week I’ll post about what’s gone well and what could have gone better; it’s been a great experience though. Well done the Association for Learning Technology.


#ocTEL week 9: it looks like we might have made it…

The sun is shining here and our daughters are bouncing on the trampoline outside. It’s week 9 of ocTEL, and although I have to confess to some MOOC weariness I’m determined to be one of the 4% who make it to the end (as Damon Albarn almost said).

This week we looked at what makes a project succeed or fail. Thomas Cochrane’s (2012) article reflected on the critical factors in the success or failure of m-learning projects, while in the webinar two researchers from Imperial College London described their experiences of changing VLE and moving to a blended learning model.

I’m relatively new to the pedagogy lark, so if I tried to apply these lessons to my own experience I’d have to look slightly further back, at a project I managed: building a new publishing system. I was working with a developer to convert around 80 courses from XML to Word and construct a system that would allow us to publish to print, HTML and ebook in-house, quickly and easily. To add another level of difficulty, the courses had been translated into five languages and many were undergoing significant change at the time. So, just to be clear, this example is about changing the way we produced the course materials, not changing the instructional design – but I think some of the lessons I learned are relevant.

The project was long (about 3 years), difficult and risky – I still remember the feeling of giddy panic as I updated the university’s risk register – but we got there in the end. The developer did a great job, we met the deadline and the new materials were successful.

Successes

  • The planning stage was fairly [ahem] short (see below), but there was a proper acceptance testing process. Freelance checkers logged into an online bug-tracking system, which allowed me to track, allocate and prioritise tasks right through to completion. This was a must, as the people working on the project were located throughout the UK and Europe.
  • Drawing up a proper contract with the developer helped to focus minds on what we wanted to do and by when, and gave us some comeback in the event of anything going drastically wrong.
  • The course materials were in a state of flux, but ‘freezing’ them for the duration of the project meant that sojourns in version control hell were mercifully short-lived.

Failures

  • In the week 9 webinar, the ICL researchers described a 1-year process of gathering requirements and drawing up a specification. Unfortunately I didn’t have this luxury, so the planning process was based largely on instinct and the good advice of a few key individuals. If I were doing it again I’d draw up a full project plan at the beginning, even if it caused a delay initially.
  • At the beginning of the project (early 2008), the authors told me they had no intention of updating the course materials anytime soon. Then the financial world collapsed, requiring not so much the amendment of business theories but the revision of examples as previously successful companies went under or were bailed out. The pressure to update as soon as possible made life difficult. I’m not sure what I could have done to anticipate this, other than taking the initial comments about not updating with a pinch of salt.
  • For all my focus on acceptance testing and quality, I have to admit that I didn’t consult the students about the look of the new materials. As soon as the new books were published I received irate emails from students about how much they hated the new font. (We had to change it; people get very excited about fonts, I’ve learned.) So a big lesson learned there: ask the students.

 

 


#ocTEL week 8: learning for learning’s sake?

The Saylor Foundation is a non-profit organisation whose “mission” is “to make education freely available to all” regardless of a student’s location or finances. Sounds good, eh?

The Saylor model is all about aggregation: it pulls in open or Creative Commons educational resources from places like MIT OpenCourseWare and Open Yale Courses. Its main advantage is its low cost base in terms of tutors, production and infrastructure. There’s some original material from the professors and an e-portfolio option that allows students to network, but from the student’s point of view it’s the curation of open resources by experts that offers the real enhancement.

Ok, the word “mission” immediately made me clench my teeth, so I have to point out a couple of things. In an interview with The Chronicle of Higher Education, Michael Saylor says:

The benefit of rich families putting their child through Harvard is always going to exist. But it’s quite evident that there are 700 million peasants in China who are never going to go to Harvard.

But those 700 million Chinese “peasants”, along with around 75% of the world’s population, aren’t going to study via Saylor either, because the courses are only available in English.

Michael Saylor talks about delivering 12 million books via an iPad to someone in a Burmese jungle. The one slight problem being that 99.2% of the Burmese population currently have no access to broadband, like 61% of all people in the developing world.

And … unclench.

Saylor.org is pretty light on information about how the courses are actually assessed. (The brief screenshots I saw in the YouTube video looked like multiple-choice questions.) Once you complete the course and pass the online exam, you can print out your own certificate of completion. The Saylor Foundation is making efforts to link these certificates to college credits, but there’s no formal accreditation at the moment.

However, the value of university education isn’t in the course materials per se (although obviously every course needs high-quality materials): it’s in the rigour and transferability of the assessment and the ultimate qualification. Conventional universities spend vast portions of their time complying with various internal and external measures of academic quality and accreditation, including audits – none of which applies to the Saylor courses. Or MOOCs, for that matter.

Of course, learning for learning’s sake can be wonderful. In theory, someone in a developing country could log onto Saylor.org, teach themselves, say, mechanical engineering and start making changes that are of real benefit to themselves and their community. But no matter how good that learning experience is, just how does someone with no formal qualification in mechanical engineering begin to work in that field?

Maybe the value of learning via MOOCs or an organisation like The Saylor Foundation is to dip your toe in a subject before you embark on more formal learning. Certainly, this has been my experience with ocTEL (it’s leading me into a part-time MSc in digital education at Edinburgh University).

People engage in education because they believe in its transformational power: to give them a new career, to improve their quality of life or job satisfaction. And that only happens when you’ve got a universally recognised qualification in your hand at the end.


#ocTEL week 7: your place or mine?

I’ve not been the best at interacting with other participants on this course (mainly due to time constraints). But by the looks of it I’m not alone. The distance learning small-group forum, to which I subscribe, hasn’t seen any action for over a month; meanwhile, none of the forums on Activity 7.0 has more than ten posts.

Obviously ocTEL is an experiment in itself and one of its objectives is to find out how students want to interact, whether that’s via forums, blogs, social media or email. It’ll be really interesting to hear the ALT’s thoughts on the audience voting with their feet in this way.

I get the impression that quite a few people have dropped out by now. As far as I know, over 800 people registered for ocTEL but only around 12 made it to the live webinar for week 7. (Although many more may have watched the recording, as I did.) By the way, none of this is for lack of planning or facilitation on the part of the organisers; maybe it’s just the nature of MOOCs. Again, there are interesting questions to be answered here about how students can be kept motivated to study when ultimately there isn’t a qualification at stake.

The interaction that I’ve enjoyed most in this course has been blog comments. When others comment on your post, it feels like a really personal interaction and makes you reflect carefully on what you’ve written (and what you’ll write next). I feel guilty about not getting round to commenting on others’ blogs, with the exception of Marcus Belben’s excellent post this week. Reflecting via a blog lets you create and maintain a learning record that’s at once personal and open to the world for comment.

All of which makes Martin Hawksey’s week 5 webinar on facilitating student interaction via personal online spaces such as blogs all the more useful. Here was someone showing us not only how useful this interaction could be, but exactly how to set it up for yourself. It’ll be interesting to see how many other universities adopt this approach over the coming months.


#ocTEL week 6: design, assessment and emergency stops

When you learn to drive, your instructor makes sure you know how to parallel park, do a three-point turn and perform an emergency stop. In other words, he gives you exactly what you need to get through the driving test.

But after you pass, you throw away much of what he’s taught you (does anyone still ‘feed the steering wheel’?) and develop your own (quite possibly bad) habits. You learn how to do some basic maintenance work on your car, how to edge out at a busy junction and when it’s safe to switch lanes at high speed on the motorway. In other words, the driving instructor has shown you how to pass the test, but he might not have prepared you fully for life on the road.

So if we design a course with assessment in mind, are we giving students the knowledge to pass an exam or building real skills? You’d hope it’s the latter, but I fear it’s often the former. Obviously you’ve got to make sure you pick the right assessment type early on; changing it after students have begun their studies would be a nightmare.

I thought this quotation from the JISC publication Effective Assessment in a Digital Age summed it up nicely:

Effective assessment designs take into account the longer-term purpose of learning: helping learners to become self-reliant, confident and able to make judgements about the quality of their own learning.

Here’s a summary of my thoughts on the suitability of three possible approaches to assessment for a postgraduate distance learning course:

Online MCQs
  • Alignment with learning outcomes: tests understanding of concepts, but not in depth; doesn’t develop independence or collaborative skills.
  • Feedback: pre-prepared explanation of the correct answer, with links back to the course materials (see the sketch after this table).
  • Pros: can be taken anywhere, anytime; immediate, authoritative feedback from faculty; requires minimal faculty intervention, so a broad range of materials can be developed.
  • Cons: limited scope for personalisation; might encourage rote learning; can it be delivered securely for summative assessment? Is it relevant in the humanities/social sciences?

Wiki
  • Alignment with learning outcomes: develops independent learners who can collaborate with peers.
  • Feedback: peer feedback.
  • Pros: encourages deeper learning; creates learning communities.
  • Cons: can it be used for subjects that have a defined right/wrong answer (e.g. accounting) or that assess individual creativity (e.g. product design)? Peer feedback requires monitoring from faculty in case it’s inaccurate or inappropriate; assessment is cohort-based, not self-paced.

E-portfolio
  • Alignment with learning outcomes: develops self-directed learners who can respond to feedback.
  • Feedback: just-in-time feedback from faculty and/or peers.
  • Pros: gives students ownership of their learning; creates learning communities; encourages independence.
  • Cons: peer feedback requires monitoring from faculty in case it’s inaccurate or inappropriate; assessment is cohort-based, not self-paced.
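To make the MCQ row a bit more concrete, here’s a minimal Python sketch of how an item with pre-prepared feedback might be modelled. Everything in it – the class names, the question, the reference string – is invented for illustration; real VLEs model MCQ items far more richly than this.

```python
from dataclasses import dataclass

@dataclass
class Option:
    text: str
    correct: bool
    feedback: str   # pre-prepared explanation, shown whichever option is picked

@dataclass
class Question:
    stem: str
    options: list[Option]
    reference: str  # pointer back to the relevant course material

def check(question: Question, choice: int) -> str:
    """Return immediate, pre-prepared feedback for the chosen option."""
    option = question.options[choice]
    verdict = "Correct." if option.correct else "Not quite."
    return f"{verdict} {option.feedback} (See {question.reference}.)"

# An invented item, just to show the shape of the data.
q = Question(
    stem="Which of Kolb's learning styles combines thinking and doing?",
    options=[
        Option("Assimilating", False, "Assimilating pairs thinking with watching."),
        Option("Converging", True, "Converging pairs thinking with doing."),
    ],
    reference="course unit on experiential learning",
)
print(check(q, 1))
```

The point of the design is that the explanation is written once, up front, by the question author – which is what makes the feedback immediate and authoritative without any faculty intervention at delivery time.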

One footnote to this: the JISC publication is based on its own UK-based research, which detected high levels of ownership of devices that could assist with technology-enabled assessment. Whether you could make the same assumptions about access and digital literacy in other parts of the world is debatable.


#ocTEL week 5: how do you make technology disappear?

After a two-week hiatus while I built sandcastles on the beach in Spain, it was good to get straight into a meaty bit of theory when I started catching up on ocTEL: David Kolb’s learning styles model and experiential learning theory.

Kolb's experiential learning theory

Anyone reading this post probably knows Kolb’s theory already (feel free to skip to the next paragraph if that’s you). Anyway, I’ll keep it brief. We learn through a four-stage cycle: concrete experience, reflective observation, abstract conceptualisation and active experimentation. And we usually lean towards one of four learning styles: diverging (learning by feeling and watching), assimilating (thinking and watching), converging (thinking and doing) or accommodating (feeling and doing).

As I began to understand the theory, two things struck me. First of all, I thought I was an assimilator (that’s thinking and watching) but I’m more like an accommodator (feeling and doing). Apologies for the number of times I’ve referred to my dad in this blog, but I remember when he showed me how to swing a golf club for the first time. As an assimilator (he’s a former science teacher), he tried to train me by imparting sound theoretical advice – much of which I ignored because I was itching to get my hands on the club and give it a bash. I came back to the theory later as I tried to improve, but my initial instinct was to feel, try and learn from my mistakes.

Secondly, it was unnerving to see that at my university we currently lean heavily towards just one of Kolb’s four learning styles: assimilating (thinking and watching). We make very little allowance for accommodating, converging or diverging learners. So that’s where we’ll need to focus our energy.

How you actually help people experiment, feel and gain new experience via technology is a tricky one. Maybe we should accept that it’s not about technology, but people: technology can never deliver active experimentation or concrete experience on its own, but it can bring a geographically disparate cohort and subject matter experts together online. Once they’re there, they can gain and transform experience in their own ways; the right activities and direction will enable them to experience some of the ‘concrete, tangible, felt qualities of the world’. All of this requires careful planning and a skilful facilitator. But if it happens, then the technology should just disappear.