Learning analytics: Starvation and telling us what we already know?

This article from Inside Higher Ed reports on some initial findings from the Gates Foundation data mining project in the US. The key finding reported in the article is

New students are more likely to drop out of online colleges if they take full courseloads than if they enroll part time

This article gives me pause to reflect on two observations – starvation and telling us what we already know – and to talk a bit about patterns.

What we already know

Online students are, for all intents and purposes, distance education students. Long before online learning, many universities across the world had extensive experience (and lots of research) with distance education. One of the reasons I started in e-learning was that I taught at one of these institutions and print-based distance education just didn’t cut it.

Here’s an excerpt from a paper I wrote in 1996 titled “Computing by distance education: Problems and solutions”:

Most distance students are either returning to study after a prolonged absence or studying seriously for the first time. The initial period of readjustment to the requirements of study can cause problems for some students.

The circumstances under which distance students study can also generate problems that lead to poor performance. These circumstances include workload, family commitments, illness, economic situation, geographic location and general lack of time.

For these and other reasons, anyone who had spent time in a distance education institution knew that a large percentage of distance education students would drop out in the first year because they weren’t all that familiar with the requirements of DE study and over-extended themselves in some way.

I don’t find it at all surprising that students of “online colleges” would suffer the same problem, especially if they took on more courses. All we’ve done is exchange the platform/medium.

I really don’t think this particular finding is, as suggested in the article,

challenging conventional wisdom about student success.

It does, as suggested further into the article, raise questions about some of the assumptions built into financial aid practices, but doesn’t really challenge wisdom about student success.

Starvation

This sounds like a big project: a $1 million grant, 6 institutions, 640,000 students, and 3 million course-level records, all focused on at-risk students or student success. Given the other rhetoric around learning analytics, it isn’t hard to see why management are really interested in it: a perfect tool for them to see what is going on, be informed and subsequently take action. Leaving aside all of the likely problems arising from management taking action, my biggest worry is that this approach to learning analytics is going to starve the other uses of “learning analytics”.

Taking a higher ed focus, there are (at least) four roles at universities for which learning analytics might provide benefits:

  1. Administration – retention, success, at-risk, efficiencies etc.
  2. Students – seeing their performance in the context of others (somewhat related to success, retention etc)
  3. Teachers – knowing what’s going on in their courses and what happens when they make changes.
  4. Researchers – as a research method that complements other quantitative and qualitative methods for figuring out the why, what, how, who etc of e-learning.

So which roles do you think learning analytics, as implemented at universities, is most likely to serve?

Who holds the purse strings?

Over the last 16 years or so I have observed universities spend tens of millions of dollars on administrative information systems. Enterprise Resource Planning (ERP) systems like PeopleSoft have consumed vast amounts of resources.

At the same time learning and teaching systems have had comparatively little spent on them. And that’s before you factor in staffing. Compare the number of analysts, programmers and associated support staff an institution employs around its ERP system with the number it employs around its LMS.

For various reasons, administrative systems tend to starve learning and teaching systems (let alone research systems) of funds and resources.

I can see the same thing happening with analytics.

Patterns

These sorts of patterns are interesting. We’ve started documenting some of these patterns over here. The trouble is that all too often the answer to “why” is provided via the schemata of the people involved, rather than by research.

For example, Col posted yesterday about another pattern:

The later a student first accesses the LMS course site, the lower their final grade will be

Now does this mean that the keener, better organised students are those that access the course site early in term? Or is it because these are the students who don’t have disruptive life situations to deal with?
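
Out of interest, here’s a minimal sketch of how such a pattern might be checked, assuming a hypothetical CSV export of LMS usage data. The file name and the columns (first_access_week, final_grade) are my assumptions for illustration, not anything from Col’s actual analysis; the point is that computing the correlation is the easy part.

```python
# Hypothetical sketch: does later first access to an LMS course site
# correlate with a lower final grade? The CSV file and column names
# are assumptions for illustration, not a real institutional dataset.
import pandas as pd

# Assumed columns: student_id, first_access_week (int), final_grade (0-100)
df = pd.read_csv("lms_course_usage.csv")

# Pearson correlation; a negative value would match the pattern above
# (the later the first access, the lower the final grade).
r = df["first_access_week"].corr(df["final_grade"])
print(f"correlation(first access week, final grade) = {r:.2f}")

# Mean final grade by week of first access, to eyeball the shape.
print(df.groupby("first_access_week")["final_grade"].mean())
```

Even a strong negative correlation here tells us nothing about which explanation is doing the work – keenness, organisation, or disruptive life circumstances – and that’s exactly the gap that schemata, rather than research, tend to fill.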

5 thoughts on “Learning analytics: Starvation and telling us what we already know?”

  1. beerc

    G’day DJ

    The starvation point you make is spot on. Admin systems get the big dollars compared to L&T systems. But while we wish L&T systems were funded equally, we wouldn’t like to see them managed the same way, i.e. top-down, big up-front design without the ability to evolve, especially considering Steven Malikowski’s research suggesting the evolutionary nature of LMS feature adoption by academic staff.

    Your post also jogged my memory about something else. Everyone is very excited about the ability of learning analytics to identify ‘at risk’ students and I must admit, I’m one of these. In fact I’ll blog this week about a significant project that is aiming to do just that.

    However, I do think we are missing the point somewhat with all the hype around learning analytics and at-risk students. The patterns I’m looking at cover the range from whole of university, faculties, schools, programs, courses, student cohorts and students. The nice linear patterns are most evident at the macro levels (university/faculty/school). At the lower levels, patterns are almost non-existent. Put simply, there is too much variation at the lower levels to make assertions about student risk absolute and automatic. The best the data can do is augment the perception and judgement of the academic teaching the class, i.e. the person operating in the complex system is the best person to make use of data that represents interactions within the system. I hope this makes sense. I’ll expand on this in my blog later this week.

    Col.

    1. There are a couple of points I’d like to pick up on, Col.

      > Malikowski’s evolutionary nature of LMS usage

      He used fairly limited data: only 3 years of it, from about 6 years ago, and focused only on on-campus students. So we have to be cautious about drawing too much on it.

      There is also the problem of evolutionary dead-ends. Does LMS use keep on evolving or does it reach a certain level and plateau? I have a suspicion that most academics will become happy with a certain type of operation and will stick with it, e.g. discussion forums, a quiz or two, information distribution and maybe assignment submission – at most.

      > Linear at macro, complex at micro

      I can see a lot more statistical and other theory work in your future. There will almost certainly be existing explanations, descriptions, tests etc for this.

      > Judgement of the academic…best person to use it.

      I think there’s an argument to be made that the student is perhaps the best person to make that judgement. After all, they know why their usage might be strange, whereas the academic is going to make judgements like “students who haven’t used the course site by week 3 are lazy”.

      As it happens, I came across another possible explanation for the unusual pattern around grades/participation with the AIC students: that of “witness learners”, mentioned in Beaudoin (2002) and borrowed from early face-to-face work.

      It’s not a large leap to assume that the better students at some campuses are not participating in forums as much as their colleagues because they know it won’t help them, whereas an academic teaching the course could see the non-participation as “at-risk” behaviour.

      In terms of who knows the individual context the best, it’s the student.

      Not to mention that some academics have truly strange schema.

      Beaudoin, M. (2002). Learning or lurking?: Tracking the “invisible” online student. The Internet and Higher Education, 5(2), 147-155.

