[Home] [Top] [Archives] [About] [Options]

OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

What Would an AI Grading App Look Like?
Steven D. Krause, 2023/03/13


Icon

This is a long discussion about using something like chatGPT to do grading tasks, and is therefore a good demonstration of the need for people to understand that there are different types of AI that perform different tasks. I outline them here. In a nutshell, grading is a classification task - a recognition problem (which is what a grammar-checker is doing). Providing feedback, meanwhile, requires an explanation of that classification. Maybe generative AI could help here, but chatGPT wasn't designed for anything like this. There's no point in asking it to "offer advice on how to revise and improve the following text" if the AI has no real mechanism for recognizing and classifying good or bad content. People should stop treating the latest AI thing as though it were a tool for doing everything.

Web: [Direct Link] [This Post]


'Horribly Unethical': Startup Experimented on Suicidal Teens on Social Media With Chatbot
Chloe Xiang, Vice, 2023/03/13


Icon

The story: "Koko, a mental health nonprofit, found at-risk teens on platforms like Facebook and Tumblr, then tested an unproven intervention on them without obtaining informed consent. 'It's nuanced,' said the founder." I don't think it's particularly nuanced in this instance, and I think most people would agree the research was unethical. But it's not exactly like there are ethical protocols for developing AI-enabled products and services. Sure, there are guidelines for the ethical development of AI, but not really for when the AI should be applied in a health, educational or social setting. I'm not talking about whether the AI is ethically sourced, or creative, or bland, or whether it is accurate, but whether it is a good idea to use intervention X in application area Y - we don't have a research protocol for testing that, or at least, I haven't found one.

Web: [Direct Link] [This Post]


Developing and Validating a Scale for University Teacher's Caring Behavior in Online Teaching
MDPI, 2023/03/13


Icon

I can't imagine proponents of a 'pedagogy of care' being comfortable with a mechanism for measuring care in teaching, but at the same time, it seems there ought to be a mechanism for saying whether someone is a caring teacher or not. A conundrum I won't try to solve here. Anyhow, this article "concluded that teachers' caring behavior mainly includes three dimensions: Conscientiousness, support, and inclusiveness." Can these be measured? This article addresses the question in a Chinese cultural context; "teachers' caring behavior and students' perceptions of caring are different due to the differences between Chinese and Western cultures. In Chinese society, the concept and expectation of teacher care are more likely to be paternalistic." It's a smallish study (n=365 at most). The authors develop a scale, test the scale, then apply the scale. The main lesson (as I draw it) is that it's not a simple subject; care is complex, and whether it is perceived is impacted by a large number of factors.

Web: [Direct Link] [This Post]


Special Reports - Global Edition
University World News, 2023/03/13


Icon

This is a collection of articles from University World News on generative AI in education. A lot of it is what we've seen before, but I appreciate the global focus.

And much more.

Web: [Direct Link] [This Post]


Automation Can Make Professional Content Mediocre
Pernille Tranberg, Dataetisk Tænkehandletank, 2023/03/13


Icon

So the question I have on reading this article is this: what if AI produced better content, not mediocre content? And what if we didn't have this culture where people are valued only for creating 100% original work? Then it wouldn't matter whether we flagged an article as AI-generated. Just like, in an era of perfect auto-translation, it wouldn't matter whether it was "translated with the help of www.DeepL.com/Translator." Now I agree with the article that there's a problem with fake news, and with systems that use AI to churn out content optimized for search engines. But these are issues that existed long before AI was widely used to generate content. And that's the thing with a lot of the criticism of the ethics of AI content generation. It presupposes that the humans will be ethical. But everything from Fox News to current Twitter management to the BBC executive suite makes it clear that humans in media can be very, very unethical. AI simply makes it faster, easier and cheaper.

Web: [Direct Link] [This Post]


What schools want: Recruiting senior leaders in England
Belinda C. Hughes, et al., BERA Blog, 2023/03/13


Icon

This is a fascinating discussion of the expectations being placed on candidates for school leadership. The point of departure is a reference to a Guardian article describing a job ad warning candidates that "they would have to work 'ridiculously hard', be 'wedded' to their job and that 'we cannot carry anyone'." The problem with the advertisement, assert the authors, is that it said the quiet part out loud. Indeed, we see similar expectations elsewhere: for teachers, in health care, and even (these days) to work at Twitter. Now I can imagine having a real passion for one's work - my own career is a case in point. But I can't imagine anyone being passionately committed to doing what they're told, not rocking the boat, and to "'display candour' unless disagreeing with the school vision, which must be enacted." You can have passion, or you can have obedience, but you can't have both. And passion is, and always must be, voluntary - not part of the job, but something fun beyond the job.

Web: [Direct Link] [This Post]


Census GPT
2023/03/13


Icon

"What people are missing is the evidenced fact that generative AI suddenly solves lots of general problems," writes Donald Clark in a LinkedIn post. "Once tied up to databases, for example in censusgpt.com it also becomes a precision tool. In other words it can be a spear or general wave." Here I link to Census GPT to provide an illustration, but note that this is just one minor example. As I described in my talk in December, generative AI combined with trusted data redefines the role of educational institutions. That's why it doesn't really matter right now what chatGPT can or can't do. We're only at the very beginning. Here's the Census GPT code on GitHub.

Web: [Direct Link] [This Post]


Leveraging MOOCs for learners in economically disadvantaged regions
Long Ma, Chei Sian Lee, Education and Information Technologies, 2023/03/13


Icon

Learners in economically disadvantaged regions (EDR) "encounter numerous challenges when using MOOCs," according to this report. "Accessing content from MOOCs outside the classroom can be difficult due to inadequate infrastructure and network services." Also, learners "may also lack the required computing skills or may not be able to afford the computing devices (and) have poorer levels of proficiency in the English language compared to the more affluent area." So MOOCs are often blended with in-person learning, either asynchronously (for example, they are assigned parts of a MOOC outside classroom time) or in-class. This article studies the latter approach, called 'embedded MOOCs', using the Attention, Relevance, Confidence and Satisfaction (ARCS) model, and concludes "embedded MOOCs approach could effectively stimulate students' learning motivations" while highlighting "the importance of social support and offline interactions with instructors and peer students in addition to the online learning materials." The study was well-designed but small, involving only 154 business students at a university in China, and therefore would benefit from replication elsewhere.

Web: [Direct Link] [This Post]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you:

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2023 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.