
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

What are universities for? Canadian higher education is at a critical crossroads
Marc Spooner, The Conversation, 2023/01/31



While Marc Spooner argues we should "avoid pitting these conceptions of higher education against one another," it is difficult to reconcile competing visions of the university as either a mechanism for securing gainful employment or a contributor to a more enlightened and reflective society. Not everybody has the time, wealth and leisure to devote to the latter objective, desirable as it may be (though lowering tuition costs helps). And some governments - particularly those with a business and economic focus - see little value in universities generating anything other than employment. I think universities haven't helped here, at least not in North America. Rather than being stand-alone institutions you have to commit your entire life to if you want to join, they should be more and more integrated with the community, a part of everybody's lives rather than everything for only a few. Via Academic Matters.

Web: [Direct Link] [This Post]




What Happens When AI Doesn't Understand Students? An example for creative and equitable AI policy in education
Russell Shilling, Getting Smart, 2023/01/31



"Speech recognition technologies offer a specific example of where we can start crafting specific policy and solutions for developing effective and equitable education technologies to support teachers and improve student outcomes," writes Russell Shilling. There are many ways speech recognition can fail: people speak differently as they age, people from different cultures may pronounce or use words differently, or people may have speech impediments. Failure to recognize some speech types may be depicted as a form of bias, and measures should be taken to ensure AI is less biased, argues Shilling. He focuses on a four part solution focusing on funding, quality, scrutiny and evaluation. I'm sympathetic, but it feels like an old-world solution to a new-world problem. Automated speech recognition (ASR) should be adaptive, generating individual personal models for each user, rather than being based on one model that is all things to all people.

Web: [Direct Link] [This Post]




The practical guide to using AI to do stuff
Ethan Mollick, One Useful Thing, 2023/01/31



AI is here, so we may as well learn how to use it. Thus argues Ethan Mollick in this Substack post, and I can't really disagree. He offers a number of ideas, the best of which is to generate new business ideas. "Despite (or in fact, because of) all its constraints and weirdness, AI is perfect for idea generation... Will all these ideas be good or even sane? Of course not. But they can spark further thinking on your part." He then offers a list of 50 "brilliant ideas" for building a business around dental hygiene. And, you know, they're not bad.

Web: [Direct Link] [This Post]




The Voice Of ChatGPT Is Now On The Air
Lewin Day, Hackaday, 2023/01/31



I think we'll see more of these pop-up instances of AI everywhere. In this current example, someone connected chatGPT to a ham radio. "Radio amateurs can call in to ChatGPT with questions, and can receive actual spoken responses from the AI. We can imagine within the next month, AIs will be chatting it up all over the airwaves with similar setups." I'm not sure if this means the revival of things like voice assistants (does anyone still use Alexa?) but it would be interesting to see if we can have a conversation with a household appliance about the news of the day. Or maybe just listen to music.
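For the curious, here is a rough sketch of the plumbing such a project implies: transcribe the incoming audio, send the text to a language-model API, and speak the reply back out over the air. The libraries and model name below are my assumptions, not details from the Hackaday build; treat it as an illustration of the pipeline, not a recipe.

```python
# Sketch: audio in -> speech-to-text -> LLM API -> text-to-speech -> audio out.

import os
import requests
import speech_recognition as sr   # pip install SpeechRecognition
import pyttsx3                    # pip install pyttsx3

recognizer = sr.Recognizer()
tts = pyttsx3.init()

def ask_model(question: str) -> str:
    """Send the transcribed question to a text-completion endpoint."""
    resp = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "text-davinci-003", "prompt": question, "max_tokens": 200},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"].strip()

with sr.Microphone() as source:          # stands in for the radio's audio feed
    print("Listening for a call...")
    audio = recognizer.listen(source)

question = recognizer.recognize_google(audio)   # transcribe the caller
answer = ask_model(question)
tts.say(answer)                                  # speak the reply back out
tts.runAndWait()
```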

Web: [Direct Link] [This Post]




Introducing: ChatGPT Edu-Mega-Prompts
Philippa Hardman, 2023/01/31



I don't know whether this is true, but Philippa Hardman reports that "most AI technologies that have been built specifically for educators in the last few years and months imitate and threaten to spread the use of broken instructional practices (i.e. content quiz)." It's hard to substantiate a statistical claim like this. But more significantly, she offers a solution in the form of a chatGPT "Edu-Mega-Prompt". You don't have to follow it exactly, but it does seem reasonable that building in constraints (like the AI's role) and purpose (like the instructional strategy and context) would produce a better result from chatGPT. And of course you can revise the recommended learning strategy before implementing it.
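To illustrate the structure (the wording below is mine, not Hardman's), here is a hypothetical template in the spirit of an Edu-Mega-Prompt: give the model a role, a set of constraints, and an instructional purpose rather than a bare question.

```python
# A hypothetical prompt builder showing the role + constraints + purpose pattern.

def edu_mega_prompt(role: str, topic: str, learners: str, strategy: str, constraints: list[str]) -> str:
    constraint_text = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n"
        f"Design a learning activity on '{topic}' for {learners}.\n"
        f"Use the following instructional strategy: {strategy}.\n"
        f"Constraints:\n{constraint_text}\n"
        "Explain your reasoning, then present the activity step by step."
    )

prompt = edu_mega_prompt(
    role="an experienced instructional designer",
    topic="interpreting line graphs",
    learners="first-year adult learners studying online",
    strategy="worked examples followed by faded practice",
    constraints=["no multiple-choice quizzes", "the activity must fit in 30 minutes"],
)
print(prompt)  # paste into chatGPT, then revise the recommended strategy as needed
```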

Web: [Direct Link] [This Post]




Budgets, Control, Incentives, Rankings | HESA
Alex Usher, HESA, 2023/01/31



I have long maintained that rankings reflect the priorities of those issuing them more than those of the institutions being ranked. Rankings are, in other words, a tool for lobbyists. Sometimes this leads to undesirable results. This, in my view, is a case in point. Here's Alex Usher summarizing the thinking:

"Me: what are you going to do with the extra money?"
Them: "Invest it in research"
Me: "Why?"
Them: "To rise in the Rankings."
Me: "Why is that important?"
Them: "Helps attract more international students"
Me: "And why does that matter?"
Them: "More money!"
Me: "And what will you do with…
Them: "Research!"
It was a perfect circle.  An academic Ouroboros.

It reads like a Ponzi scheme to me. And while Usher agrees we should not rush out and adopt the Australian model, he does suggest Canadian institutions could learn from it. But my take is that the lesson essentially involves subordinating public institutions to the interests of those creating the rankings. And by inserting 'making money' as a primary institutional objective, we subvert the public good these institutions are supposed to provide.

Web: [Direct Link] [This Post]



We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you:

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2023 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.

