Learning analytics and magic beans

My PhD is broadly about helping universities implement learning analytics. This post relates to some of the things I'm seeing around the Australian higher education sector with regard to learning analytics.

There are two broad trajectories that universities tend to take when implementing learning analytics (Colvin et al., 2015). One trajectory is focused on measurement, broader performativity precepts and retention interventions. The second is underpinned by the pursuit of understanding, where the emphasis is on learning and on the recognition that retention is consequential to broader teaching, learning and engagement experiences for students.

The first trajectory seems to be where a lot of universities are at the moment, a situation that is at loggerheads with the second. Rather than converging, as I had hoped, the two trajectories seem to be diverging in a worrying way.

The first trajectory, in particular, fits with the perpetual concern that universities have about student attrition and its (real and potential) impact on their bottom lines. However, it is becoming more apparent that this approach is flawed, especially when considered in relation to how universities typically adopt technology: single centralised systems implemented top-down, often using external consultants and off-the-shelf enterprise software (Jones & Clark, 2014).

It is becoming increasingly evident that one-size-fits-all approaches to learning analytics do not work (Colvin et al., 2015). Meaning-making from learning analytics data is dependent on a sound understanding of the learning and teaching context and requires a human in the sense-making loop (Clow, 2014). Simplistic approaches (such as those proposed by consulting companies peddling off-the-shelf software solutions) are doomed to fail (Macfadyen & Dawson, 2012). The use of generalised models encapsulated in these simplistic approaches poses a threat to the potential of learning analytics to improve the quality of learning and teaching practice (Gašević, Dawson, Rogers, & Gasevic, 2016; Liu, Rogers, & Pardo, 2015). These generalised models and simplistic approaches are especially absurd when you consider the remarkably complex and diverse learning and teaching contexts involved.

When algorithms are black boxes, this prevents academics from identifying teaching or curriculum issues that may be at play (Liu et al., 2015).

Learning analytics aside, such approaches are also incompatible with the actual nature of student attrition as a problem construct. Student attrition is only rarely caused by a single problem that an external agency like a university can assist with (Beer & Lawson, 2016). It is the complex interplay between multiple, ever-changing variables that results in student attrition, a notion that contrasts with simplistic, solution-focused approaches (Beer & Lawson, 2017). The nature of student attrition further reinforces the point that one-size-fits-all learning analytics implementations aimed at helping with student attrition do not work. However, as organisations, we are still drawn to the simplistic solutions that are often proffered by consulting companies with their arrays of glossy brochures and anecdotal evidence.

Universities as organisations have long struggled to overcome their active inertia, preferring to apply familiar approaches to new problems. In my mind, the consulting companies are well aware of this and know exactly which buttons to push to peddle their solutions. As such, I worry that we will see universities adopting off-the-shelf learning analytics systems with sexy names that are inherently rigid and based on generalised models. The lure of predictive models built on mysterious (often proprietary) algorithms is strong and has long been a successful consulting tactic. This, despite ample evidence showing that predicting outcomes from systems that involve humans is utterly futile except in a very narrow set of circumstances (Allen & Boulton, 2011).

The only way that prediction becomes possible is when the system or agent is isolated from external influences; something that can only ever occur in laboratory conditions (Allen & Boulton, 2011)

Directly addressing student attrition through one-shot projects and special funding has had little to no impact on the problem in the past. Limiting the potential of learning analytics by focusing only on student attrition is unlikely to contribute meaningfully in the long term. Learning analytics is complexly entangled with learning and teaching contexts, and thinking about it as just another IT project to be outsourced to the snappiest vendor is a mistake. These sorts of projects fail more often than they succeed, often because they lack the contextualisation needed to be useful across diverse settings (Goldfinch, 2007). Learning analytics requires a learning approach, something institutions will not achieve by buying off-the-shelf and limiting their learning analytics to a single dimension.

References

Allen, P., & Boulton, J. (2011). Complexity and limits to knowledge: The importance of uncertainty. In P. Allen, S. Maguire, & B. McKelvey (Eds.), The SAGE Handbook of Complexity and Management (pp. 164-181). London, England: SAGE.

Beer, C., & Lawson, C. (2016). The problem of student attrition in higher education: An alternative perspective. Journal of Further and Higher Education, 1-12. doi:10.1080/0309877X.2016.1177171

Beer, C., & Lawson, C. (2017). Framing attrition in higher education: A complex problem. Journal of Further and Higher Education, 1-12. doi:10.1080/0309877X.2017.1301402

Clow, D. (2014). Data wranglers: Human interpreters to help close the feedback loop. Paper presented at the Fourth International Conference on Learning Analytics and Knowledge, Indianapolis, IN, USA.

Colvin, C., Rogers, T., Wade, A., Dawson, S., Gašević, D., Buckingham Shum, S., . . . Corrin, L. (2015). Student retention and learning analytics: A snapshot of Australian practices and a framework for advancement. Canberra, ACT: Australian Government Office for Learning and Teaching.

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2016). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. The Internet and Higher Education, 28, 68-84.

Goldfinch, S. (2007). Pessimism, computer failure, and information systems development in the public sector. Public Administration Review, 67(5), 917-929.

Jones, D. T., & Clark, D. (2014). Breaking Bad to Bridge the Reality / Rhetoric Chasm. Paper presented at the ASCILITE2014 Rhetoric and Reality, Dunedin, New Zealand. Conference publication retrieved from http://ascilite2014.otago.ac.nz/

Liu, D. Y.-T., Rogers, T., & Pardo, A. (2015). Learning analytics – are we at risk of missing the point? Paper presented at the 32nd ASCILITE Conference, Perth, Australia.

Macfadyen, L. P., & Dawson, S. (2012). Numbers Are Not Enough. Why e-Learning Analytics Failed to Inform an Institutional Strategic Plan. Journal of Educational Technology & Society, 15(3), 149-163.
