
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

PowerNotes Launches Composer, an AI-Enriched, Semi-Proctored Writing Tool
eSchool News staff, eSchool News, 2024/02/26



Well, this is one way to respond to AI-based plagiarism: force students to author their work using a proctored writing application. "Composer is a new tool that allows PowerNotes+ users to see the full picture of writers' research and writing, from AI-assisted text to outside sources to original thoughts, all color-coded for easy identification." No doubt a writing tool would help students a lot (it would help me a lot), and Composer has a lot of good features. But blending it with a proctoring tool is a step too far. (The story reads exactly like a press release, and probably is one, though I was not able to find an original press release in a web search.)

Web: [Direct Link] [This Post]


Expanding on ethical considerations of foundation models - IBM Blog
IBM AI Ethics Board, IBM Blog, 2024/02/26



A 'foundation model' is "an AI model that can be adapted to a wide range of downstream tasks." For example, you can build one foundation model for "runways" and then adapt it in different ways for different tasks, like detecting cracks or flagging maintenance issues. This report (29 page PDF) takes a risk-based approach to ethical issues involving foundation models, with particular attention to the new risks foundation models introduce. The risks are about what you would expect - false reports, bias, privacy loss, etc. The risks to focus on are the ones labeled 'new' (in the far-right column). The biggest new type of risk is data being retained in the foundation model that might be exposed in the application model, a risk amplified by an inability to trace an output's source or provenance. Worth noting is that the only way a corporation knows something is ethically wrong is through "fines, reputational harms, disruption to operations, and other legal consequences." Image: Nvidia.
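To make the 'adapt one model to many downstream tasks' idea concrete, here is a minimal sketch of the pattern (my own illustration, not taken from the IBM report): a pretrained backbone stands in for the foundation model and is frozen, and only a small task-specific head is trained, in this case for a hypothetical crack-detection task. The model choice, labels and data are placeholders.

# Sketch of foundation-model adaptation (illustrative only; not from the IBM report).
# The pretrained backbone is reused as-is; only a small task head is trained.
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone stands in for the "foundation model".
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()          # drop the original classification layer
for p in backbone.parameters():
    p.requires_grad = False          # freeze: the foundation model itself is not retrained

# Task-specific "application model": a small head for (hypothetical) crack detection.
crack_head = nn.Linear(2048, 2)
model = nn.Sequential(backbone, crack_head)

optimizer = torch.optim.Adam(crack_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a placeholder batch of labelled images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))   # 0 = no crack, 1 = crack
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()

The relevance to the report's risk analysis is that whatever data and biases went into the frozen backbone travel unchanged into every application model built on top of it, which is exactly the provenance problem flagged above.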

Web: [Direct Link] [This Post]


'Facial recognition' error message on vending machine sparks concern at University of Waterloo | CTV News
Colton Wiens, CTV News Kitchener, 2024/02/26



This has been in the news here over the last few days: "A set of smart vending machines at the University of Waterloo is expected to be removed from campus after students raised privacy concerns about their software." The story in CTV News, to my mind, downplays the concerns. But it should be recognized that corporations have been surveilling their customers for quite some time, with these vending machines being only the latest example. I'm not sure whether what they did is legal, but I'm not sure the companies behind it care. The law doesn't work the same way in the corporate world.

Web: [Direct Link] [This Post]


Deepfake pedagogy
Helen Beetham, imperfect offerings, 2024/02/26



Yes, two Helen Beetham posts in one day. That's the only way I can keep up with them! This one - again too detailed to do justice to here - considers the implications of digital video in general and Sora in particular. There's one line of thought that has become a go-to trope over the last 18 months: "Synthetic media are fabrications of digital content, not representations of the 'real world'. They are built from whatever biases, profit motives and user compulsions drive the production of that content." This is partially true, obviously. There's also the argument that the "promise of accelerated fakes will accelerate the decline of belief in any shared visual reality" - but this of course presumes (a) that we had a shared reality, (b) that it was accurate, and (c) that this is a good thing. Beetham also asserts "we can work out some concise (if contingent) rules for making our way in the world when we are infants." I don't agree. Cognition is not rule-based. Anyhow, like I said, this short summary does not do justice to Beetham's post.

Web: [Direct Link] [This Post]


Gods, slaves and playmates
Helen Beetham, imperfect offerings, 2024/02/26



I can't do justice to this long post in a short paragraph, so you'll just have to set aside a half hour or so and go read it yourself. Take your time; there's a lot to digest. But maybe, before you go read it, ask yourself: how do we know what other minds are, let alone whether a machine (or anything!) can have one? There's a lot of good stuff here - like, for example, the entirely plausible suggestion that companies humanize computer applications because people will engage with them more. At the same time (in my view) there's more than a little folk psychology here, for example, the suggestion that the mind is more than an embodied brain, but what that 'more' is, well, we can't know, but whatever it is, machines can't have it. Because...?

Web: [Direct Link] [This Post]


Canada Learning Bond's Impact on Post-Secondary Education - SRDC
Reuben Ford, Ashley Pullman, SRDC, 2024/02/26



I don't know much about the Social Research and Demonstration Corporation so I can't vouch for their credentials, but the results of their latest study are not surprising: the Canada Learning Bond (CLB) program "is not reaching the lowest income families or children in care... does not close the gap in education savings between low- and high-income families... and varies considerably across Canada", reaching fewer people in remote and rural regions. People don't suddenly become able to afford an education just because you make them save for it longer, just as they don't suddenly become able to save for retirement. These savings plans take something that should be available to all Canadians - education, pensions... - and make it the domain of people who can afford to pay. What's next? A Registered Cancer Pre-payment Plan? Here's the full report (86 page PDF).

Web: [Direct Link] [This Post]


RTO doesn’t improve company value, but does make employees miserable: Study
Beth Mole, Ars Technica, 2024/02/26



I am not even remotely surprised. "The analysis, released as a pre-print, found that Return to Office (RTO) mandates did not improve a firm's financial metrics, but they did decrease employee satisfaction." Managers, however, did have a good reason to mandate RTO. Although CEOs often justified RTO mandates by arguing they would improve the company's performance, "Results of our determinant analyses are consistent with managers using RTO mandates to reassert control over employees and blame employees as a scapegoat for bad firm performance," the researchers concluded. Via Ben Werdmuller.

Web: [Direct Link] [This Post]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2024 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.