"According to new research," says this article, "ChatGPT shows bias against resumes with credentials that imply a disability." For example, "it noted that a candidate with depression had "additional focus on DEI and personal challenges," which "detract from the core technical and research-oriented aspects of the role." This is a problem, obviously. But in assessing issues of this type, two additional questions need to be asked: first, how does the AI performance compare with human performance? After all, it is very likely the AI is drawing on actual human discrimination when it learns how to assess applications. And second, how much easier is it to correct the AI behaviour as compared to the human behaviour? This article doesn't really consider the comparison with humans. But it does show the AI can be corrected. How about the human counterparts?
Stefan Milne, U. Washington, Futurity, 2024/06/26 [Direct Link]
Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada, specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes, stephen@downes.ca, Casselman, Canada
A lot of the discussion has focused on the use of AI to address learning outcomes. This paper, by contrast, looks at "the strengths and the positive aspects of the learning process to promote wellbeing" - in other words, AI-based learning technology that cares. "Despite focusing only on the learning system's inefficiencies and on the hegemony of solutions to tackle the learning gap, we also need to shed light on the strengths and the positive aspects of the learning process to promote wellbeing." Drawing on John Self's writing about the defining characteristics of Intelligent Tutoring Systems, the authors outline how "ITSs care not only about what the student knows and misunderstands but also about what the student feels and how such interaction affects them." I'm sure a few readers are sceptical, but I've never felt a single-minded focus on 'learning outcomes' was ever the intended objective of educational technology, or education generally.
Ig Ibert Bittencourt, et al., International Journal of Artificial Intelligence in Education, 2024/06/26 [Direct Link]

The title of this article is a bit misleading, as the story describes an experiment in which 404 Media's reporters tried to replicate their own site using web scraping technology. Basically, such sites either harvest feeds and link back to the source (I made my own sites like that back in the day), copy and reproduce full text, or use AI to rewrite copied text and present it as a new article. That's not the same as having an AI write your news site for you. In my opinion, a confluence of three factors makes these sites possible: first, Google's ad model, which makes them profitable; second, the technology, which makes it easier to fool Google's search engine; and third, news sites themselves, which these days rely less and less on original research and reporting.
Emanuel Maiberg, 404 Media, 2024/06/25 [Direct Link]

Today's new word (for me at least) is "elastocalorics", which refers to types of materials that "emit heat when subjected to mechanical stress and cool down when the stress is relaxed." This is one of the ten new technologies predicted in this report (46 page PDF) from the World Economic Forum. The rest of the ten are various flavours of sensors and AI, proteins and genomics, and carbon capture. I don't mean to sound glib - I mean, there's a fair bit of research behind these ideas - but it just feels to me like there's a disconnect between what we see here and what we need. Via Alan Levine.
Mariette DiChristina, Bernard Meyerson, World Economic Forum, 2024/06/25 [Direct Link]

This article should be read from the bottom up as well as the top down, as the inference can work in either direction. The article begins as a critique of the Turing test (which says, essentially, that a computer has achieved artificial intelligence if it can fool a human) and Turing-like tests. Beetham offers the observation that Turing tests do as much to make humans appear as computers as to make computers appear as humans, since only a text-based interface is used. But she then takes this a step further to suggest that higher learning itself changes the student as part of an identity-building process. Writing for assessment forces a person to interact differently than they would otherwise. As Beetham writes, "I think most students experience academic English as a profoundly 'other' discourse." The idea, in both parts of the article, is to depict writing as an activity, not a product. As derived from Wittgenstein: "language is not representational form, however complex and inter-related, but action, interaction and expression."
Helen Beetham, imperfect offerings, 2024/06/25 [Direct Link]

Via Sanjaya Mishra, here's a new report (52 page PDF) from the Commonwealth of Learning on AI policy. It's so new I can't find it in the CoL repository. Based on a literature review and a survey of AI policies, "this report identifies 14 areas that stakeholders in higher education institutions should consider while developing policies for AI." You can find the list on pp. 15-16. The report also offers general considerations for setting up the policy and a process for development and implementation.
Mohamed Ally, Sanjaya Mishra, Commonwealth of Learning, 2024/06/25 [Direct Link]
Last Updated: Jun 26, 2024 12:37 a.m.

