OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

This Time is Different. Part 1.
George Siemens, elearnspace, 2023/01/23


George Siemens (we assume, though the article isn't attributed to an author) offers three separate 'core assertions' that add up to the idea that while society is changing, as evidenced by the changing ways we use information, educational institutions have not been adapting, because they are embedded in their own systems and networks that resist change. "This time is different" refers to AI, which Siemens asserts is much more than just a fad. I wouldn't disagree, though if I were making the same point I would be less inclined to use the term 'information' and also less inclined to emphasize and credit managers and "leaders in universities, big tech, startups, and non-profits." But there's going to be pushback. Here's Tony Hirst: "I am starting to realise how many people are digitally excluded, don't have ready internet access, can't buy things online, and don't discover things online. And I would rather spend my time in their world than this digital one." I would respond, though, that these same people were excluded before digital technology; we just didn't see, hear, or care about them. Well, most of us.

Web: [Direct Link] [This Post]


With ChatGPT, We All Act as Well-Behaved Children of Tech Capitalism
Thomas Telving, DataEthics, 2023/01/23


Should we even be using ChatGPT? Thomas Telving argues that the tech giants "can't just go around launching products into the global market that disrupt something as vital and important to all of us as the education sector." But we need some mechanism to address issues of access, quality and equity in the sector. People have called for AI-generated text to be clearly marked. I thought an icon for 'human-authored text' might be a good idea. And we've already seen fallout from CNet's quiet use of AI to write articles. But such marking might never happen. People objected when ChatGPT was listed as a co-author on an academic paper. And we may already have passed the point of no return; according to a report from Reuters, media companies are already quietly integrating AI into their products. And media giants, like Amazon, already depend on it.

Web: [Direct Link] [This Post]


Can ChatGPT write decent course outlines?
Terry Freedman, ICT & Computing in Education, 2023/01/23


The ChatGPT stories seem to come in waves. Here's another wave, focused mostly on applications. Terry Freedman considers whether AI can write course outlines (looks like a 'yes'). Philippa Hardman gives us three major uses of AI in education: teaching assistance, research and report writing, and data analysis, for example, by correcting data used in manufacturing. Coursera's CEO has been using ChatGPT to "bang out work emails". Jackie Gerstein links to guides created by educators Torrey Trust, Andrew Herft, and Matt Miller for using ChatGPT in educational settings. MIT's Comparative Media Studies/Writing department offers "advice and responses from faculty on ChatGPT and A.I.-assisted writing." AI is the new UI, says Donald Clark. Or as Dave Cormier predicts, "Starting this year, we're going to be returned a mishmash of all the information that is available on the Internet...The algorithm is going to convert that information to knowledge for me." For example, here's Tim Stahmer's AI-written case against charter schools. Finally, we're seeing tools developed to detect AI-generated text, one developed by a college student and another called Giant Language Model Test Room (GLTR).
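Detectors like GLTR work by scoring each token of a text by the rank a language model assigns it: machine-generated text tends to be built almost entirely from highly predictable, low-rank tokens, while human writing uses more surprising ones. Here's a minimal sketch of that idea (my own toy illustration, not GLTR's code), with a tiny bigram count model standing in for the large language model GLTR actually uses:

```python
from collections import Counter, defaultdict

training = "the cat sat on the mat and the dog sat on the rug".split()
vocab = set(training)

# Count, for each token, what follows it and how often.
model = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    model[prev][nxt] += 1

def token_rank(prev, nxt):
    """Rank of `nxt` among the model's predicted continuations of `prev`
    (1 = most predicted); unseen continuations get the worst rank."""
    ranked = [tok for tok, _ in model[prev].most_common()]
    return ranked.index(nxt) + 1 if nxt in ranked else len(vocab)

def mean_rank(text):
    """Average token rank over a text; lower means more model-like."""
    tokens = text.split()
    ranks = [token_rank(p, n) for p, n in zip(tokens, tokens[1:])]
    return sum(ranks) / len(ranks)

print(mean_rank("the cat sat on the mat"))   # low score: predictable, model-like
print(mean_rank("the rug sat on the cat"))   # higher score: surprising, human-like
```

The real GLTR uses GPT-2's probabilities rather than bigram counts, but the scoring principle is the same, which is also why such detectors are fallible: a human writing predictable prose, or a model sampling adventurously, blurs the signal.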

Web: [Direct Link] [This Post]


On the Opportunities and Risks of Foundation Models
Rishi Bommasani, et al., arXiv, 2023/01/23


If you want to dive into the details of generative AI technology (and you do, you do) there are some academic papers to look at: Murray Shanahan explains large language models (11 page PDF) as "generative mathematical models of the statistical distribution of tokens in the vast public corpus of human-generated text." Simon Willison offers an illustrative example. The generative AI we're using today is based on foundation models that can be applied to a wide range of tasks; this looong report (214 page PDF) describes the advantages and risks. One risk is described in detail in another paper: "in the most severe case, known as 'detached hallucinations', the output is completely detached from the source, which not only reveals fundamental limitations of current models, but also risks misleading users and undermining trust." We can detect when we're hallucinating; the AI cannot.
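Shanahan's definition, a model of "the statistical distribution of tokens", can be made concrete with a toy example. The sketch below (my own illustration, not from any of the papers cited) estimates a next-token distribution from bigram counts over a tiny corpus; a large language model does essentially the same job with billions of learned parameters in place of a count table:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each other token.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token_distribution(prev):
    """Estimate P(next | prev) from raw bigram counts."""
    counts = bigrams[prev]
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

# After "the", this corpus makes "cat" twice as likely as "mat" or "fish".
print(next_token_distribution("the"))
```

Generation is then just repeated sampling from these distributions, which is also where hallucination comes from: the model emits whatever is statistically plausible, with no check against any source.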

Web: [Direct Link] [This Post]


What is generative AI?
McKinsey, 2023/01/23


Here are some more ChatGPT and generative AI overviews. We'll begin with this quick intro doc from McKinsey outlining generative AI. Here's a short overview in a PDF for execs. Remember, it's not just ChatGPT that can do this; here's Claude, "Anthropic's Rival to ChatGPT", which incorporates reinforcement learning from human feedback (RLHF). Here's a full taxonomy (22 page PDF) of the popular generative AI models, and a similar report produced by Antler (which focuses on companies and VC). It's fair to note ChatGPT's limitations and weaknesses, but future AI will be a lot better.

Web: [Direct Link] [This Post]


Twitter Bans Third-Party Apps Without Warning
Ryan Whitwam, ExtremeTech, 2023/01/23


Not that we needed it, but this story adds evidence of the danger of depending on a technology platform that someone else owns. Twitter began shutting off API access to clients like Twitterrific and Fenix late last week, and according to Engadget, "quietly updated its developer agreement on Jan 19 to clarify those supposedly long-standing rules" that were being enforced. I should point out that it hasn't disabled all API access, as some people have reported - I am still able to use the Twitter API to publish @OLDaily posts.

Web: [Direct Link] [This Post]


Six of the biggest myths about online learning
FutureLearn, 2023/01/23


I want to juxtapose two articles: this article, which I think of as super-introductory, addressing myths about online learning, and Ben Werdmuller's post on working from home, which makes a very clear case that "forcing in-person work is a sure-fire sign that leadership is stuck in their ways, unable to change, even in the face of evidence that it's detrimental to their businesses." And it results in absurdities - I would drive into the office only to sit in front of my computer, working and learning online. It underlines to me our failure as a society to advance the education of our leaders beyond the level of myth, superstition and fear. From what I've seen of executive education, providers are more likely to cater to their biases and prejudices than to challenge them. Which is unfortunate.

Web: [Direct Link] [This Post]


Faculty are Losing Interest in Adopting OER
David Wiley, improving learning, 2023/01/23


It's hard to reconcile the headline of this post with what I read in the report it cites, specifically, that "The disruptions of recent years have yielded a substantial increase in the use and creation of open educational resources (OER), textbooks, course modules, and video lectures." But that's because right after that we read that "faculty are less interested in creating and using them for their courses as incentives for integrating OER into instructional approaches have not changed since 2018." Wiley explains, "Some of the faculty who adopted and used those resources simply aren't interested in doing it again...the overwhelming majority of OER offer very little support for faculty. It is, objectively, more work for faculty when they switch from adopting a full courseware solution to using a free PDF." And institutional resources and support haven't made up for that difference.

Web: [Direct Link] [This Post]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2023 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.