#PLENK2010 Assessment in distributed networks

I have been struggling to clearly identify the issues associated with assessment in PLEs/PLNs – which are probably similar to those in MOOCs or distributed networks.

There seem to be a number of questions.

  • Is it desirable/possible to assess learners in a course which takes place in a distributed network?
  • Is it possible/desirable to accredit learning in a course which takes place in a distributed network?
  • What assessment strategies would be appropriate?
  • Who should do the assessing?

Whether assessment is desirable in a PLENK/MOOC etc. will depend on the purpose of the course and the learning objectives that the course convenors had in mind when they designed the course. PLENK2010 does not include formal assessment and yet has attracted over 1000 participants, many of whom are still active in Week 5. Presumably these participants are not looking for their learning outcomes to be assessed. CCK08 attracted over 2000 participants and did include assessment for those who wished it – but the numbers were small (24 – I’m not sure whether the number who could take the course for credit was capped, or whether only 24 wanted it) – so it was not only possible for the course convenors to assess these participants but also to offer accreditation.

Both assessment and accreditation are possible across distributed networks if the numbers are manageable. It is not the distributed network that is the problem, although this might affect the assessment strategies that are used. It is the numbers. Just as it is not possible for course convenors of a MOOC to interact on an individual level with participants, so it is physically not possible for them to assess such large numbers of individuals, and without this assessment no accreditation can be offered other than perhaps a certificate of attendance – but even this would need to be monitored and would be contrary to the principles of autonomy expected in a MOOC.

So how do we assess large numbers? Traditionally this has been done through tests and exams, which can be easily marked by assessors. Whilst these make the assessment process manageable for tutors, they offer little more than a mark or grade to the students, since very often there is no feedback-feedforward loop associated with the grade. Also, tests and exams are not the best assessment strategy for all situations and purposes.

So what better assessment strategies would work with large numbers? Actually this might be the wrong starting question. The starting point should be: what learning objectives do we have, what outcomes do we expect these objectives to lead to, and what assessment strategy will enable the learner to achieve the learning objective, as demonstrable through the outcome? There is a wealth of information now available on assessment strategies, both for formative and summative assessment. Focus in the UK has for many years now (from the time of Black and Wiliam’s article, Inside the Black Box, to Gibbs and Simpson’s article – Conditions Under Which Assessment Supports Students’ Learning – to the REAP project and the work of JISC) been on formative assessment and providing effective feedback. In Higher Education there has been even more of a push on this recently, since students are demanding more and better feedback (National Student Survey) – so effective assessment strategies are there if we are aware of them and know how to use them. These include a whole range of possibilities: audio and video feedback-feedforward between students and tutors, students writing/negotiating their own assessment criteria, and peer, group and self-assessment. But how can these strategies be used with MOOC-like numbers whilst maintaining the validity, reliability, authenticity and transparency of assessment?

There appear to be no easy answers to this question. Alec Couros – in his open course – is experimenting with the use of mentors – is this a way forward? We know that there are many trained teachers in PLENK2010. Could they be possible assessors? How would their credentials be checked? Would they work voluntarily?

Peer assessment has been suggested. I have experience of this, but have always found that student peer assessment – whether based on their own negotiated criteria or on criteria written by the tutor – often needs tutor moderation if a grade that could lead to a degree qualification is involved. Similarly with self-assessment: we don’t know what we don’t know, so we may need someone else to point this out.

The nearest thing I have seen to trying to overcome the question of effectively teaching and assessing large numbers of students is in Michael Wesch’s 2008 video – A Portal to Media Literacy – where he shows how technology can support effective teaching and learning of large groups of students – but he is talking about hundreds, not thousands of students and himself admits that the one thing that didn’t work was asking students to grade themselves. This was two years ago – so I wonder if he has overcome that problem.

So – from these musings it seems to me that:

  • Learning in large courses distributed over a range of networks is a worthwhile pursuit. Such courses offer the learner diversity, autonomy and control over their own learning environment, and extensive opportunities for open sharing of learning.
  • The purpose of these courses needs to be very clear from the outset – particularly with regard to assessment, i.e. course convenors need to be clear about the learning objectives, how learners might demonstrate that those objectives have been met through the outcomes they produce and whether or not those outcomes need to be assessed.
  • There has been plenty written about what effective assessment entails. The problem in MOOCs is how to apply these effective strategies to large numbers.
  • If we cannot rely on peer assessment and self-assessment (which we may not be able to do for validated/accredited courses), then we need more assessors.

Would it be possible for an institution, or a group of institutions, to build up a bank/community of trained assessors who could be called upon to voluntarily assess students in a MOOC (as Alec Couros has done with mentors)? Even if this were possible, I can see a number of stumbling blocks, e.g. assessor credentials, subject expertise, moderation between assessors, and whether institutions would allow accreditation to be awarded when the assessment has been done by people who don’t work for the institution. What else?

5 thoughts on “#PLENK2010 Assessment in distributed networks”

  1. suifaijohnmak October 18, 2010 / 11:53 pm

    Hi Jenny,
    You have raised many important points. I don’t see how assessment could be done that easily in a MOOC with such a large cohort of participants (1,540). From our previous research on CCK08, we noted that some participants would have quit the course if there had been mandatory assessment components, for numerous personal reasons such as lack of time or need. Also, there would need to be lots of qualified assessors, as you can’t expect 4 assessors to cover over 1,500 participants. Besides, these assessors must be certified to the level required for PLENK (i.e. the course conveners would need to be satisfied with the performance of the assessors in the first place, and that the assessment of participants meets the standards required, based on the evidence produced). In addition, assessment validation is a complicated process, which involves validating the assessment processes and the decisions made by individual assessors and the chief assessor (the course conveners). Given that each learner may be achieving the learning to a certain personal level, and that no learning outcomes have been set as yet, it is not yet possible to assess individuals against any standards (national or local).
    I think it’s worthwhile to discuss assessment in MOOCs more thoroughly, but this may be a course in itself.
    Thanks for your stimulating ideas.
    John

  2. VanessaVaile February 12, 2012 / 3:32 pm

    A late comment to be sure, but that “knowledge half life” assumption does not apply to everything. If I find something in a search, whether on purpose or by chance, it’s as good as new – a present under the tree as far as I am concerned. Actually, I was looking for distributed networks to explain them to not-so-wired academics and apply them to coalition and community building. Seeing this, I remembered HASTAC blogger Cathy Davidson writing on grading and assessment and thought you might be interested:

    http://www.hastac.org/blogs/cathy-davidson/how-crowdsource-grading

    http://www.insidehighered.com/news/2010/05/03/grading

    Lots more ~ just google Cathy Davidson + assessment or grading. I have no idea how it would work in a MOOC, but it seems we are already part way there with autonomy. Worth thinking about… maybe ask “haystackers” (the nickname for HASTAC members) to help crowdsource it.
