The thing about intangibles is that they're really hard to talk about. Intangibles are, says Tony Bates, "what we feel or think to be there, but can't quite put our finger on it (both literally and figuratively)." And, says Bates, "Teachers and instructors as well often feel 'intangibles' in many contexts. One in particular is assessing 'soft' or 'durable' skills such as creativity." And it is in detecting these intangibles that in-person teachers have their advantage. Or so it is claimed. "But the same applies also to online learning. For instance, I have participated in asynchronous, online discussions that have been as rich if not richer than most in-class discussions." To me, intangibles are the result of a process of subsymbolic recognition on the part of an observer - that's why they're a 'feel' that can't be 'put into words'. They're an acquired skill, born of practice, which is why new online instructors find the online experience so sparse. But after you've done it a while, you begin to feel them.
Identifying and characterizing students suspected of academic dishonesty in SPOCs for credit through learning analytics
Daniel Jaramillo-Morillo, José Ruipérez-Valiente, Mario F. Sarasty, Gustavo Ramírez-Gonzalez, International Journal of Educational Technology in Higher Education, 2020/11/04
What behaviours would we take as evidence that students are cheating? This article detects groups of students working together through the similarity of their answers and the proximity in time that these answers were submitted. It then applies analytics to their online behaviour to see if any other sorts of analytics can catch the unauthorized collaborations. They get good grades, but that doesn't set them apart. Nor do their interactions with the online course. Nor does the timing of their activities. Indeed, as the authors recognize at the end of the paper, "we have no hard proof (like video feed) that students are performing such academic dishonesty together." This should be taken, I think, as a cautionary tale. How many of our honesty algorithms are based on our presumptions about cheating behaviour, and how many are supported by hard evidence? And what are our algorithms learning as a result?
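The core detection idea the paper describes can be sketched simply: flag pairs of students whose answers are nearly identical and were submitted close together in time. The sketch below is a minimal illustration of that idea, not the authors' actual implementation; the function name, data layout, and thresholds are all assumptions for illustration.

```python
# Hypothetical sketch: flag student pairs whose shared answers mostly match
# AND whose submissions cluster together in time. Thresholds are illustrative.
from statistics import median

def suspicious_pairs(submissions, min_similarity=0.9, max_gap_seconds=120):
    """submissions: {student_id: {question_id: (answer, timestamp_seconds)}}.
    Returns (student_a, student_b, similarity, median_gap) for flagged pairs."""
    students = sorted(submissions)
    flagged = []
    for i, a in enumerate(students):
        for b in students[i + 1:]:
            shared = set(submissions[a]) & set(submissions[b])
            if not shared:
                continue
            # Fraction of shared questions answered identically.
            matches = sum(1 for q in shared
                          if submissions[a][q][0] == submissions[b][q][0])
            similarity = matches / len(shared)
            # Typical time gap between the two students' submissions.
            gap = median(abs(submissions[a][q][1] - submissions[b][q][1])
                         for q in shared)
            if similarity >= min_similarity and gap <= max_gap_seconds:
                flagged.append((a, b, similarity, gap))
    return flagged
```

Note that, as the article's conclusion warns, a pair flagged this way is only circumstantial: identical answers submitted close together are consistent with collaboration, but they are not proof of it.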
This post (15 page PDF) offers a discussion about user experience between six students and five staff who participated in a trial of a virtual environment developed for sustainable tourism education. The approach is described in this paper as experiential education (EE) and that's what the literature review surveys, along with a shorter discussion of virtual reality. The results are presented, naturally, as a 3D graph, with axes describing the experiential, instrumental and affective aspects of user experience (see illustration). The positive experiences included a sense of place, sensory appeal, natural movement, learning enrichment, and comprehensive vision, while on the negative side users experienced motion sickness and hardware issues.
One thing that's not sufficiently recognized is that people - and kids especially - are always learning, and they learn from everything. This means that they're learning a lot from the non-curricular aspects of education, including in this case AI-based exam proctoring. Like, for example, the AI that basically told a student to stop slouching; "Unsettled, she began to stare more robotically at her screen." Clive Thompson argues that it sets a scary civic precedent. "We are indoctrinating our youth to think that this is normal," says Lindsay Oliver, activism project manager at the Electronic Frontier Foundation. "Students trained to accept digital surveillance may well be less likely to rebel against spyware deployed by their bosses at work or by abusive partners." We don't know if that's what they're actually learning, exactly - they might be learning that society doesn't trust them, or any number of things. But that's the problem. Via Aaron Davis.
This article discusses the fine line higher education institutions walk between stability and innovation. This is where experimentation comes in, writes Dinant Roode. And in higher education, we see experimentation in four key areas: by students, when they sign up for an educational programme; by educators, when they select a pedagogy; by evaluators, when they measure different dimensions of learning; and by the institutions as a whole, as a series of large-scale experiments. The article really feels like the author ran out of steam at the end. There are some good ideas, just left hanging (even his list of four items ended at three; I had to infer the fourth from context). Anyhow, to me, this makes it look like everything in higher education is an experiment. This might explain why it's so difficult to standardize and to scale.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2020 Stephen Downes. Contact: email@example.com. This work is licensed under a Creative Commons License.