
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

ENAI Recommendations on the ethical use of Artificial Intelligence in Education
Tomas Foltynek, Sonja Bjelobaba, Irene Glendinning, Zeenath Reza Khan, Rita Santos, Pegi Pavletic, Julius Kravjar, International Journal for Educational Integrity, 2023/05/12



Here we have yet another set of ethical principles for the use of AI, this one from the European Network for Academic Integrity (ENAI). It's pretty short (a four-page PDF). In one part, the paper recommends "Students should be included and educated on ... the purpose of all activities related to learning and assessment (and) how to develop their ethical writing and content production skills." In another, the advice is that "policies should define default rules on when and how the students, teachers, researchers and other educational stakeholders are allowed to use different kinds of AI tools (and) guide the users on how to correctly and transparently acknowledge the use of AI tools."

Web: [Direct Link] [This Post]


Why 'system transformation' is likely a pipe dream
Michael B. Horn, Christensen Institute, 2023/05/12



I am inclined to believe this: "system transformation almost never happens by changing the fundamental tenets of the system itself. Instead, it comes from replacing the system with a brand-new system." That's why, for example, I don't expect change in education to come from within schools or universities; these institutions will be replaced gradually with a more viable alternative. But what alternative? Here's where I part ways with the ed reformers and corporate media that are represented in this article. I don't think it's about markets and disruptions. I think it's about networks and community.

Web: [Direct Link] [This Post]


Beware of AI pseudoscience and snake oil
Baldur Bjarnason, 2023/05/12



Baldur Bjarnason argues that AI companies are offering pseudoscience, not real science, as evidence for the effectiveness of their tools. First, cue the list of AI failures (yawn). Then, the real argument: "They make grand claims, that this is the first step towards a new kind of conscious life, but don't back it up with the data and access needed to verify those claims... They make claims about something working—a new feat accomplished—and then nobody else can get that thing to work as well. It's a pattern." Right. If we focus on the corporate research - which is where all the attention is being paid today - then AI is more a world of illusion and trade secrets than it is genuine science. Much of the actual work (beyond simply scaling up to a billion processors) is being done by real scientists, with open data, testable algorithms, and reproducible results. I know; they work down the hall from me.

Web: [Direct Link] [This Post]


Project Tailwind
Steven Johnson, Adjacent Possible, 2023/05/12



Steven Johnson writes about Tailwind, a tool that "allows you to define a set of documents as trusted sources which the AI then uses as a kind of ground truth, shaping all of the model's interactions with you. In the use case shown on the I/O stage, the sources are class notes, but it could be other types of sources as well, such as your research materials for a book or blog post." Is it a personal learning environment (PLE)? Not exactly, but there's a lot of overlap with the concept. Via Ton Zijlstra, who says "I think there will be more tools like these coming in the next months, some of which likely will be truly local and personal." Let's hope so, as Tailwind is available by invitation and in the U.S. only.
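Under the hood, this "trusted sources as ground truth" idea is a retrieval-grounding pattern: find the passages from your own documents most relevant to a question, and feed only those to the model. Here is a minimal Python sketch of the general idea, not Tailwind's actual implementation; embed() and generate() are hypothetical stand-ins for whatever embedding and language-model services one might plug in:

    from math import sqrt

    def cosine(a, b):
        # Cosine similarity between two equal-length vectors.
        dot = sum(x * y for x, y in zip(a, b))
        norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def answer_from_trusted_sources(question, passages, embed, generate, k=3):
        # Rank the trusted passages by similarity to the question,
        # then ask the model to answer from the top k only.
        q_vec = embed(question)
        ranked = sorted(passages, key=lambda p: cosine(embed(p), q_vec),
                        reverse=True)
        context = "\n\n".join(ranked[:k])
        prompt = ("Answer using only the trusted sources below. "
                  "If they do not contain the answer, say so.\n\n"
                  "Sources:\n" + context + "\n\nQuestion: " + question)
        return generate(prompt)

The key design point is that the model never sees documents outside the trusted set, which is what lets the sources act as "ground truth" for its answers.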

Web: [Direct Link] [This Post]


Exploring the ecosystem for GC digital talent - Spring 2023
Thom Kearney, 2023/05/12



I definitely enjoyed this talk yesterday, especially the way the presentation was supported with interactive graphics. It described a prototype for data mapping and visualization in Canada's public service using open data available from various government portals. It was intended as a proof of concept to show how these displays could be used to gain new perspectives on activities and trends across a large and complex organization. The visualizations were built using Kumu, a graphical tool that organizes complex data into relationship maps.
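For those unfamiliar with this kind of tool: a relationship map is, at bottom, just a node list plus an edge list. Here is a toy illustration in Python; the names and fields are invented for the example, not Kumu's actual import schema:

    # A toy relationship map: organizational units as nodes,
    # relationships as edges. All names here are hypothetical.
    relationship_map = {
        "nodes": [
            {"id": "dept-a", "label": "Department A", "type": "department"},
            {"id": "team-x", "label": "Digital Team X", "type": "team"},
            {"id": "portal", "label": "Open Data Portal", "type": "resource"},
        ],
        "edges": [
            {"from": "team-x", "to": "dept-a", "relation": "part of"},
            {"from": "dept-a", "to": "portal", "relation": "publishes to"},
        ],
    }

    # A tool like Kumu renders such data as an interactive graph;
    # here we simply print each relationship.
    for edge in relationship_map["edges"]:
        print(edge["from"], "--", edge["relation"], "->", edge["to"])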

Web: [Direct Link] [This Post]


Deskilling on the Job
danah boyd, Apophenia, 2023/05/12



According to danah boyd, "When highly trained professionals now babysit machines, they lose their skills. Retaining skills requires practice. How do we ensure that those skills are not lost?" She uses as an example the case of airline pilots who are expected to take over in an emergency when the autopilot fails, but have been so dependent on the autopilot they no longer have the skills to be effective. As subtext, and not relevant to the main argument, she creates a division between 'camp automation', which argues AI will take over everything, and 'camp augmentation', which argues AI will continue to have a 'human in the loop'. The deskilling argument is relevant mostly to camp augmentation. Laura Hilliger responds.

Web: [Direct Link] [This Post]


We publish six to eight short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2023 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.