This article places smart personal assistants such as Amazon's Alexa or Google's Assistant into a lineage that includes concepts such as intelligent tutoring systems, tools that "provide assistance by answering questions in natural language, making recommendations, and performing actions." Now, true, the flurry of interest in personal assistants from a few years ago has subsided, but the concept is still intriguing. This article asks whether using these systems helps students develop their problem-solving skills and whether they influence how students learn. My immediate reaction would be "it's far too soon to get reliable empirical data." The tools just aren't ready. But this article gives us an early look with a tiny (n=45) study and suggests that "this new technology might be able to offer dynamic scaffolds in a more natural and sophisticated way."
Parler is a radical social network website sufficiently offensive to have been banned from most platforms and hosting services. In this post, David Weinberger argues that traditional ethical frameworks do not give us a good argument for deplatforming Parler. The suggestion is that if you take on the perspective of 'the other side', there isn't a good argument that would convince them that they are wrong. But (in my view) this sort of approach is what Robert Nozick called 'coercive philosophy': "arguments are powerful, and best when they are knockdown, arguments force you to a conclusion." This just feeds into the ethos of sites like Parler; it draws us into engagement and battle with the beast. No. We as ordinary citizens living in a free and open society can recognize Parler for what it is, and we deplatform the site because it is repugnant and offensive, and we have no obligation to convince them of anything.
True, a couple of comments make the point that this article is more rant than research. Fair enough. But I still think it's worth saying that we shouldn't think of lifelong learning as being the same as spending a lifetime taking courses. After a while, people learn how to learn on their own, and so we should expect that by 'lifelong learning' we mean more learning and fewer courses. Now it is true that a lot of people call themselves 'lifelong learners'. I am one. But I'm quite sure I won't be going back to university any time soon.
Alfie Kohn raises the question of what assessment signifies, exactly. Why, he asks, do we listen to misleading traffic reports or weather predictions? "Am I really so addicted to data that I prefer misleading information to none at all?" Maybe so, and this allows him to be more forgiving of teachers who applaud (or bemoan) standardized test scores even when they know the scores are largely meaningless. "But however understandable that impulse is," he writes, "we have a duty to resist it, at least when it can do real harm." Why? "First, because these tests measure what matters least about learning.... (and) Second, every time a study that relies on test scores as the primary dependent variable is published or cited, those tests gain further legitimacy."
If you need to stay current with these three terms, this article will be a useful guide. Data analytics is "the science of analysing data sets to find trends, answer questions, and draw conclusions." Artificial intelligence (AI) is "the ability to give computers the ability to replicate human intelligence." Meanwhile, in machine learning "computers are programmed to learn automatically." The article also covers how the terms overlap, how they differ, and some other key terms in the field. It also looks at starting salaries in each field and the different skills required.
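To make the machine learning definition concrete, here's a minimal sketch of my own (not from the article): rather than hard-coding a rule, the program infers one from example data, in this case fitting a line by ordinary least squares.

```python
def fit_line(xs, ys):
    """Learn slope a and intercept b for y = a*x + b by least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Example data generated by the hidden rule y = 2x + 1; the program is
# never told that rule, yet recovers it from the examples alone.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a, b)  # → 2.0 1.0
```

That's the whole idea in miniature; real machine learning systems use far more data and far richer models, but "programmed to learn automatically" means exactly this kind of inference from examples.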
This is a post from Joanne Jacobs uncritically restating some of the latest nonsense on online learning from Forbes. What we read from Natalie Wexler in Forbes is the same old paean to direct instruction and learning facts that a certain segment of the community has been promoting for years now. This time the argument is based on a recent book by British writer Daisy Christodoulou that we are told "makes clear, the education establishment—in the U.S., the U.K., and no doubt other places—doesn’t have a clue" because "ed tech is mostly just replicating existing ineffectual approaches to teaching, and sometimes making them worse." It then goes on to propose more existing ineffectual approaches to teaching, for example, having students memorize facts.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2021 Stephen Downes Contact: firstname.lastname@example.org This work is licensed under a Creative Commons License.