This article is careful to tell you how to think about this new technology (it's bad, creepy) before telling you what it is. For me, that's usually a warning sign: don't trust what follows. The column is based on a CNN story titled "This AI reads children's emotions as they learn." I think it would be more accurate to say it interprets children's emotions. Still, so what? Humans do that all the time; it helps us communicate. Peter Greene informs us that "AI doesn't grok emotions any more than it actually thinks." Again, so what? From where I sit, the main issue is that the AI's analysis is wasted on superficial functionality (specifically: content recommendation). Yes, emotions are complex, and yes, bias has been a problem for AIs, but these are eminently solvable problems (more solvable for an AI, I would say, than for a hardened teacher who persistently misreads a foreigner's expression as a scowl). The rest of the language about the AI 'reading your heart' is nonsense. Reprinted for some reason by NEPC, which is where I found it.
I had to check the study to be certain, but yes, this highly touted report, published Wednesday by the National Association of Student Personnel Administrators and funded by the Bill and Melinda Gates Foundation, was based on the opinions of 18 students. As Ryan Johnston reports, "the results were promising for institutions, the study showed, with students sharing that they trusted their universities to handle their data more than any other private company that they use, including social media companies like Facebook." Well, it showed no such thing. The idea that such a study could represent "students" in any way, shape or form is laughable, and yet that is the language used throughout. This sort of work does a disservice to the entire field. While I think it's great that those 18 students have opinions, and I encourage them to share them widely, it is disingenuous for organizations to disguise these carefully selected opinions as 'data' and pretend that they have conducted 'research'. And I wonder just how much of what we read comes from the students, and how much is the authors' own views being imposed on us.
I decided to link to this item even before reading it. That's how dire I think the need for this advice is. The suggestions themselves are, well, meh. One is called 'the muddiest point', and involves using Zoom chat to gather information on what remains unclear to students. Another is called 'Think-Pair-Share' and involves using the breakout room settings to pair students to discuss an item, then having them report back to the main group. The third, 'Peer Instruction', involves polling students with course-related questions, showing them the answers, then having them discuss their own answers in breakout rooms. None of this sounds particularly 'active' to me (at best, it's 'collaborative'), but it's better than a talking head.
This article wanders a bit, depends a lot on the reader to do their own interpretation of the images (word clouds are far less informative than some people think), and appears (as one commenter noted) to be missing a section (the one that would include the heading for the OU site). On the other hand, it encourages and describes a tour around a number of different institutional "domain of one's own" sites (and yes, that sounds like an oxymoron).
Using content to train AIs was probably not on the radar when the first CC licenses were developed. But today we have this new kind of use that isn't exactly reuse, isn't exactly copying, isn't exactly... anything. What we have in this article is Creative Commons dancing on a very fine line, one where they want to say that using content to train AI doesn't infringe on copyright, while at the same time wanting to say that this use "must be balanced with equally valid considerations to ensure sharing ultimately benefits the public." It really feels to me that they're punting this one. I get the feeling they really, really want to endorse the use of open content to train AI, and are responding to questions about ethical considerations by saying, essentially, that such questions are outside the scope of CC licenses.
Alastair Creelman notes that "on-site conferences are always exclusive events due to costs, travel restrictions, linguistic barriers and accessibility issues" and links to an article by Holly J. Niner and Sophia N. Wassermann, "Better for Whom? Leveling the Injustices of International Conferences by Moving Online." "The big question is whether or not a return to the on-site format is at all desirable," says Creelman, "and the authors focus on a factor they call the privilege of preferring an in-person option." As the authors argue, "On an individual level, those of us able to attend a conference no matter where it is held should be cognizant of the fact that the option to prefer an in-person conference is predicated on the ability to attend one." A point well made. But importantly: the same point could be made about in-person education, especially higher education, and especially international education.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2021 Stephen Downes. Contact: firstname.lastname@example.org. This work is licensed under a Creative Commons License.