Here's the argument: "We as educators have to shift from teaching students in subjects, to teaching students in skills." Why? "It is the human brain (and heart) that has to get behind the simple steps of a solution that lead into bigger things and the mathematical concepts behind them. And that’s where an educator steps in, to have a conversation about skills and concepts." Honestly, that's not a very good argument at all. There are good reasons to focus on skills rather than subjects (skills are practical, subjects aren't, for example). This isn't one of them. Worse, it seems to me to be pandering to teachers (as in, 'sure we do technology, but we still need you, we really do').
This is a challenging proposition. It is not the assertion that you can only believe things you know to be true - that's too strong. But it is the proposition that you ought not be able to believe things you know to be false (things like: the moon landing was fake, the world is flat, and other more venal beliefs I won't repeat here). Moreover, if the belief is morally wrong, "we condemn not only the potential acts that spring from such beliefs, but the content of the belief itself, the act of believing it, and thus the believer." This contradicts the long-touted idea that people should be able to believe whatever they want. But if belief causes action, and some actions are reprehensible, then shouldn't the beliefs be reprehensible as well? And if we can't believe whatever we want, well, who then decides?
I'm not sure what 'quietly' means in this context, given that it's all over the media, but it's no surprise, given China's advanced artificial intelligence capability (as evidenced, for example, by its facial recognition systems). But according to the author, "parents were not informed, access to the system terminals was limited to authorised staff, test results were strictly classified, and in some classes even the pupils were unaware that their work had been read and scored by a machine."
Though such stories are an Audrey Watters hate read, there's no question big data and predictive algorithms are breaking into schools. The question is, what are the long-term effects of this? That's what this paper addresses. "Each shift in pedagogical decision-making has the potential for unintended consequences because of inaccurate or unrepresentative data, algorithmic bias or disparate impact, scientism replacing more holistic and contextualized personal evaluation, and the exclusion of noncomputable variables and nonquantifiable learning outcomes." It should go without saying: let's be careful out there.
Copyright 2018 Stephen Downes. Contact: firstname.lastname@example.org. This work is licensed under a Creative Commons License.