
OLDaily

Sorry about the blank email yesterday. Today's email contains yesterday's items as well as today's.
Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

Athabasca U, Alberta government reach agreement
Cailynn Klingbeil, University Affairs, 2023/02/01



The dispute between Athabasca University and the Alberta government has been resolved for now, with the university agreeing to increase its on-site staff count in the town of Athabasca by 30. But this is just one instance of a greater interest in government exercising more control over universities, and universities (sometimes) pushing back. It's not just the extreme case of Florida governor Ron DeSantis taking over a university. Less obviously, it's the Ontario government running value-for-money audits on four smaller universities. In an example of pushback, Alex Usher reports on faculty at Memorial University demanding via strike action to "fully participate in the university... institutional systems of peer review and decision-making processes". There are no easy answers here - universities certainly don't help their own case, but they do need to be publicly funded in order to be accessible to the wider community, and the excesses of elected officials show how important it is to have an independent system.

Web: [Direct Link] [This Post]


AI21 Labs' mission to make large language models get their facts right
Ben Dickson, TechTalks, 2023/02/01



I've mentioned this sort of process in recent talks, though the technology is still in its very early stages. The idea is to tailor the input data for an AI to ensure it gets its facts right. Here we read about one such technique, "retrieval augmented language modeling (RALM), which tries to train language models to fetch information from external sources... A RALM model, on the other hand, adds a 'knowledge retriever' to find the document that is most likely to contain information relevant to the prompt. It will then use the content of that document as part of its prompt to generate more reliable output." Longer-term, I think, models will consult larger bodies of structured data to add to and inform output generation.
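The quoted description of RALM can be sketched in a few lines: score each document against the prompt, retrieve the best match, and prepend it as context. This is a toy illustration only - the word-overlap scorer, the sample documents, and the prompt format are my own assumptions, not AI21's actual implementation, which uses learned neural retrievers.

```python
# Toy sketch of the retrieval-augmented idea described above.
# The scoring method, documents, and prompt format are illustrative
# assumptions, not AI21's actual implementation.

def score(query, doc):
    """Score a document by simple word overlap with the query."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, documents):
    """Return the document most likely to contain relevant information."""
    return max(documents, key=lambda d: score(query, d))

def build_prompt(query, documents):
    """Prepend the retrieved document so the model grounds its answer in it."""
    context = retrieve(query, documents)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

documents = [
    "The Athabasca River flows through northern Alberta.",
    "Retrieval augmented language models fetch external documents.",
]
prompt = build_prompt("How do retrieval augmented models work?", documents)
```

A production retriever would use dense embeddings rather than word overlap, but the shape of the pipeline - retrieve, then generate from retrieved context - is the same.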

Web: [Direct Link] [This Post]


AI, Technical Architecture and the Future of Education
Improvisation Blog, 2023/02/01



I quite like Mark Johnson's take on the impact of AI in learning - to be honest, he had me interested as soon as he said there's no time to be worrying about how to 'stop students from cheating'. And his take on AI itself is useful: "AI is a silly description. 'Artificial Anticipation' is much better. The technology is new. It is not a database; it consists of a document called a model (which is a file) that can be thought of as being like a 'sieve'. The configuration of the structure of the sieve is produced through a process called 'training'". What's interesting is that we can have 'conversations' with these 'documents', that these conversations can be in private, and that they can be about whatever we happen to introduce to the document.

Web: [Direct Link] [This Post]


Open science, closed doors: The perils and potential of open science for research in practice
Richard A. Guzzo, Benjamin Schneider, Haig R. Nalbantian, Industrial and Organizational Psychology, 2023/02/01



This is quite a good paper outlining four incompatibilities between the fundamental principles underlying open science practices and scientific progress through applied research. These incompatibilities are: limits on sharing of personal or proprietary data, inability to pre-register hypotheses to be tested in complex data sets (and hence, preference for theory-driven research), the demand for replication of specific cases, and evolving definitions of 'good science' (as "a study with clear hypotheses using standard, favored methodologies and having as much control as possible over the research process and outcomes"). The result, they argue, is a narrowing down of what can be published to trivial findings in areas of little importance, to the point that "textbook authors and journalists no longer rely on the research published in academic journals."

Web: [Direct Link] [This Post]


Working with Broken
Tony Hirst, OUseful.Info, the blog..., 2023/02/01



As Tony Hirst reports, "OpenAI announce the release of an AI generated text identification tool that they admit is broken ('not fully reliable', as they euphemistically describe it)." And he asks, "is this the new way of doing things? Giving up on the myth that things work properly, and instead accept that we have to work with tools that are known to be a bit broken? That we have to find ways of working with them that accommodate that?" I think the answer is "yes and no". No, we should not use "not fully reliable" tools for mission-critical systems like financial transactions or heavy machinery operations. But for a lot of tasks, 'not fully reliable' is good enough. It's a principle we've long since learned to apply with humans, because humans are often 'not fully reliable', and one we'll now need to learn to apply with machines. Image: Reddit.

Web: [Direct Link] [This Post]


Students Got $10K to Upgrade Their HS. It Drove a Citywide "Wave of Democracy"
Asher Lehrer-Small, The 74, 2023/02/01



I have always supported the idea of direct democracy. So it's interesting to see the idea implemented as an elective class at a Rhode Island school, where students were given the decision on how to allocate $10,000 of funding. The amount of money is trivial, but the creation of agency is real. "The interesting thing about participatory budgeting is that the deeper you get into it, the more you quickly realize, it is not about the money... [the process] is a tool that can really create and strengthen your civic infrastructure." Of course, though, the money does matter, and that's the weak point in what has been tried thus far, because finding the funds has been the hardest part of the project.

Web: [Direct Link] [This Post]


What are universities for? Canadian higher education is at a critical crossroads
Marc Spooner, The Conversation, 2023/02/01



While Marc Spooner argues we should "avoid pitting these conceptions of higher education against one another," it is difficult to reconcile competing visions of university as either a mechanism for earning gainful employment or as contributing to a more enlightened and reflective society. Not everybody has the time, wealth and leisure to devote to the latter objective, desirable as it may be (though lowering tuition costs helps). And some governments - particularly those with a business and economic focus - see little value in generating anything other than employment. I think universities haven't helped here, at least, not in North America. Far from being stand-alone institutions you have to commit your entire life to if you want to join, they should be more and more integrated with the community, being a part of everybody's lives rather than being everything for only a few. Via Academic Matters.

Web: [Direct Link] [This Post]


What Happens When AI Doesn't Understand Students? An example for creative and equitable AI policy in education
Russell Shilling, Getting Smart, 2023/02/01



"Speech recognition technologies offer a specific example of where we can start crafting specific policy and solutions for developing effective and equitable education technologies to support teachers and improve student outcomes," writes Russell Shilling. There are many ways speech recognition can fail: people speak differently as they age, people from different cultures may pronounce or use words differently, or people may have speech impediments. Failure to recognize some speech types may be depicted as a form of bias, and measures should be taken to ensure AI is less biased, argues Shilling. He proposes a four-part solution focused on funding, quality, scrutiny and evaluation. I'm sympathetic, but it feels like an old-world solution to a new-world problem. Automated speech recognition (ASR) should be adaptive, generating individual personal models for each user, rather than being based on one model that is all things to all people.

Web: [Direct Link] [This Post]


The practical guide to using AI to do stuff
Ethan Mollick, One Useful Thing, 2023/02/01



AI is here so we may as well learn how to use it. Thus argues Ethan Mollick in this Substack post, and I can't really disagree. He offers a number of ideas, the best of which is to generate new business ideas. "Despite (or in fact, because of) all its constraints and weirdness, AI is perfect for idea generation... Will all these ideas be good or even sane? Of course not. But they can spark further thinking on your part." He then offers a list of 50 "brilliant ideas" for building a business around dental hygiene. And, you know, they're not bad.

Web: [Direct Link] [This Post]


The Voice Of ChatGPT Is Now On The Air
Lewin Day, Hackaday, 2023/02/01



I think we'll see more of these pop-up instances of AI everywhere. In this example, someone connected chatGPT to a ham radio. "Radio amateurs can call in to ChatGPT with questions, and can receive actual spoken responses from the AI. We can imagine within the next month, AIs will be chatting it up all over the airwaves with similar setups." I'm not sure if this means the revival of things like voice assistants (does anyone still use Alexa?) but it would be interesting to see if we can have a conversation with a household appliance about the news of the day. Or maybe just listen to music.

Web: [Direct Link] [This Post]


Introducing: ChatGPT Edu-Mega-Prompts
Philippa Hardman, 2023/02/01



I don't know whether this is true, but Philippa Hardman reports that "most AI technologies that have been built specifically for educators in the last few years and months imitate and threaten to spread the use of broken instructional practices (i.e. content quiz)." It's hard to substantiate a statistical claim like this. But more significantly, she offers a solution in the form of a chatGPT "Edu-Mega-Prompt". You don't have to follow it exactly, but it does seem reasonable that building in constraints (like the AI's role) and purpose (like the instructional strategy and context) would produce a better result from chatGPT. And of course you can revise the recommended learning strategy before implementing it.
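The general idea - building role constraints, strategy and context into the prompt before stating the task - can be sketched as a simple template. The field names and wording below are my own hypothetical reconstruction of the pattern, not Hardman's actual Edu-Mega-Prompt.

```python
# Hypothetical sketch of a structured prompt builder in the spirit of
# Hardman's Edu-Mega-Prompt; the fields and wording are illustrative
# assumptions, not her actual template.

def edu_mega_prompt(role, strategy, context, task):
    """Assemble a prompt that constrains the AI's role and states the
    instructional strategy and context before giving the task."""
    return (
        f"You are {role}.\n"
        f"Use the following instructional strategy: {strategy}.\n"
        f"Context: {context}\n"
        f"Task: {task}"
    )

prompt = edu_mega_prompt(
    role="an experienced instructional designer",
    strategy="retrieval practice with spaced repetition",
    context="a first-year undergraduate statistics course",
    task="design a 20-minute activity reviewing sampling distributions",
)
```

The point is simply that each constraint narrows the space of likely completions, which is why a structured prompt like this tends to outperform a bare question.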

Web: [Direct Link] [This Post]


Learning analytics as data ecology: a tentative proposal
Paul Prinsloo, Mohammad Khalil, Sharon Slade, Journal of Computing in Higher Education, 2023/02/01



Learning analytics can be understood as a "data ecosystem with dynamic interdependencies and interrelationships," write the authors, but the question needs to be asked about "the extent to which learning analytics takes cognizance of the reality, the potential and the risks of being part of a broader data ecology." And, they conclude, it mostly doesn't. This conclusion is based on a definition of a data ecology, which the article offers, and a list of the roles or actors involved. These are compared against a set of 11 analytics frameworks drawn from the literature. "Most of the frameworks analysed here do acknowledge LA as part of institutional ecosystems, and to a lesser extent, as part of intra-institutional ecosystems. There is, however, a lack of understanding of LA as part of an increasingly commercial data ecology, directly impacting on students' privacy and their right to data sovereignty."

Web: [Direct Link] [This Post]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2023 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.