There's some really interesting thinking in this article that belies its simplistic 'platformization of the university' framing. I would indeed skip the unconvincing opening sections and go straight to the 'making good design choices' section. Here we are presented with what amounts to two sets of competing metrics: the 'fast' algorithmic metrics and the 'slow' university metrics. What makes them different? "Platforms promise to replace trust with telemetry. But telemetry cannot tell you whether silence in a seminar is laziness, confusion or careful thinking; it can only tell you it happened. To interpret silence, someone must risk an intervention... risk is what makes a teacher different from an algorithm. Risk is the price of meaning." We've seen this argument before: the assertion that computers, unlike people, have nothing at stake in the interaction. But here, what's at stake is trust: does a computer care whether you trust it? And that's the thing... it might, if it were instructed carefully enough, in which case trust acquires a moral or ethical flavour.

