
OLDaily

Nuance is coming to a paywall near you
Mark Stenberg, Medialyte, Substack, 2022/05/04



As readers know, I attempt to link directly to source articles, and not to any resource that puts up a subscription wall (aka a spamwall) or a paywall between my link and the resource in question. This is getting harder and harder with the proliferation of different types of paywalls. This article is a pretty good guide to the many models in use today, with examples of each, up to and including the dynamic paywall used by Rolling Stone, which employs an algorithm to put up the paywall if it thinks you'd be willing to pay.
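
For those wondering what 'dynamic' means in practice, here is a minimal sketch of the general idea: compute a rough willingness-to-pay score from engagement signals and raise the paywall only above some threshold. The signals, weights and threshold below are my own illustrative assumptions, not anything Rolling Stone has disclosed about its actual system.

    from dataclasses import dataclass

    @dataclass
    class Visitor:
        articles_read_this_month: int   # engagement signal
        is_returning: bool              # has visited before
        referred_by_search: bool        # arrived from a search engine

    def willingness_to_pay(v: Visitor) -> float:
        """Toy propensity score in [0, 1] built from simple engagement signals."""
        score = 0.1
        score += min(v.articles_read_this_month, 10) * 0.05  # heavier readers score higher
        if v.is_returning:
            score += 0.2
        if v.referred_by_search:
            score -= 0.1  # drive-by visitors are less likely to subscribe
        return max(0.0, min(1.0, score))

    def show_paywall(v: Visitor, threshold: float = 0.5) -> bool:
        """Raise the paywall only for visitors the model thinks might pay."""
        return willingness_to_pay(v) >= threshold

    if __name__ == "__main__":
        casual = Visitor(articles_read_this_month=1, is_returning=False, referred_by_search=True)
        loyal = Visitor(articles_read_this_month=8, is_returning=True, referred_by_search=False)
        print(show_paywall(casual))  # False: let the casual reader in, build the habit
        print(show_paywall(loyal))   # True: likely subscriber, ask them to pay

In real deployments the score would come from a trained model rather than hand-set weights, but the decision logic is the same: the wall goes up for exactly the readers judged most likely to pay for it.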

Web: [Direct Link] [This Post]


Un-Annotated
Audrey Watters, Hack Education, 2022/05/04



I do not disagree with Audrey Watters's reasoning as she bans services like Genius and hypothes.is from adding annotations and commentary to her website. She writes, "This isn't simply about trolls and bigots threatening me (although yes, that is a huge part of it); it's also about extracting value from my work and shifting it to another company which then gets to control (and even monetize) the conversation." I'd do the same, but my own work does not attract nearly enough attention of this sort to make it worth the effort.

Web: [Direct Link] [This Post]


AI research is a dumpster fire and Google’s holding the matches
Tristan Greene, The Next Web, 2022/05/04



The subtitle is "scientific endeavor is no match for corporate greed" and the article describes Google's firing of another lead AI researcher and the proliferation of questionable papers in the literature seeking to monetize one or another aspect of the technology. Here's the issue, though. I don't think there's evidence that shows that either corporate greed or scientific opportunism is unethical by today's standards. They are essentially 'business as usual'. Now sure, I am among those desiring that a different ethic prevail. But I have never felt a part of the majority when it comes to ethics.

Web: [Direct Link] [This Post]


Focus on the Process: Formulating AI Ethics Principles More Responsibly
Ravit Dotan, The Gradient, 2022/05/04



Most of what's contained in this article is consistent with my own findings in the realm of AI ethics, and most especially, consistent with the idea that finding a single universal set of principles of AI ethics is probably unattainable. If there is such an appearance of consensus today, it is probably an illusion created by a marked imbalance in the origins of those creating the principles - and here we refer not only to culture or national origin, but also to factors such as employment and industry. "Governments mention privacy and security more than other types of institutions, but mention accountability less. Corporations mention transparency and collaboration more, but mention privacy and security less. Academia, non-profits, and non-government organizations mention humanity and accountability more, but mention fairness less." Etc.

Web: [Direct Link] [This Post]


Going Rogue: Teachers designing their own conferences as a transgressive act
Philippa Nicoll Antipas, Conference Inference, 2022/05/04



I'm not a fan of the black text on a dark salmon background, but I like the idea of redesigning conferences. Here, Philippa Nicoll Antipas describes an approach called Plan D, "a game-like collective activity whereby teachers are supported to go rogue and design their own professional learning and development needs." This in many ways resembles an unconference or a barcamp.

Web: [Direct Link] [This Post]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2022 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.