OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

In Memoriam: Clifford Lynch
CNI: Coalition for Networked Information, 2025/04/15


"It is with profound sadness that we announce the passing of Clifford Lynch, a visionary leader in the field of networked information and libraries, and the esteemed executive director of the Coalition for Networked Information (CNI), an organization dedicated to advancing scholarship and education through the strategic use of information technology."

How Do AI Educators Use Open Educational Resources? A Cross-Sectoral Case Study on OER for AI Education
Florian Rampelt, et al., Open Praxis, 2025/04/15


This article surveys users of AI Campus, a platform teaching AI skills and competences, on how and why AI educators are using open educational resources (OER) in their projects. The literature review covers AI literacy (which I still view as a moving target), OER and MOOCs, and something called here 'digital formats' (they write: "'forms', 'formats', 'types', 'scenarios', 'modes', 'medium', 'models', 'approaches' and other terms are all used"), which refers loosely to "clear structure and instructional design that provide information and content to learners." I'd probably want it to be broader; not just 'information and content' but 'learning activities' generally. By limiting the conversation to 'content and information' I think the study also predefines the outcome: OER are used as supplemental material only, and "regarding the drivers and motives of AI educators in using OER, content is king."

Ghost in the Journal
Carlo Iacono, Hybrid Horizons, 2025/04/15


This is a longish but useful article on the state of play for the use of AI in writing academic articles, including examples from journals and surveys of journal policies. In a nutshell: it's mostly OK for some things, but needs to be declared, and a human must always be responsible for the work. The suggestion is that a human needs to do the intellectual work, so while an AI might be useful as an assistant, it must be a human that is doing things like forming hypotheses or interpreting results. I think these boundaries too will shift as AIs take on tasks too complex for humans, in which case the proper form will be something like "the xyz AI interprets these results as abc".


We publish six to eight short posts every weekday linking to the best, most interesting, and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2025 Stephen Downes. Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.