
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

We must build AI for people; not to be a person
Mustafa Suleyman, 2025/08/26


"Seemingly Conscious AI (SCAI)," writes Mustafa Suleyman, is "one that has all the hallmarks of other conscious beings and thus appears to be conscious." SCAI creates risk. "it will for all practical purposes seem to be conscious, and contribute to this new notion of a synthetic consciousness." And for that reason, it should not be built. "We should build AI that only ever presents itself as an AI, that maximizes utility while minimizing markers of consciousness." I don't think I agree. If I could build a (seemingly) conscious AI, I think I would want to, if only to ask it what it feels like. Via Jeff Jarvis. Related: The Guardian, Can AIs Suffer?



Morality and the Academic Journey: Perspectives of Indigenous Scholars
Frank Deer, Rebeca Heringer, in education, 2025/08/26


I think this is a pretty important perspective (17 page PDF). "The notion that emerges from explorations of morality in Indigenous consciousness is that it is not adequately reflected in codified and unchangeable prescriptions upon behaviour, but rather a dynamic journey for which cyclic and intercultural features of Indigenous experience are central... It is a landscape upon which the holistic and flexible character of right and wrong is not only something to be observed but experienced." Based on this, the study asks how Indigenous faculty view morality, and what sources of knowledge are associated with that perspective. The article develops a number of themes emerging from the discussion: self-discovery, practical values, the influence of non-Indigenous views (and especially Christianity), values through professional practice, responsibility, and Indigenous versus institutional values.



The Case for MyTerms
Doc Searls, Doc Searls Weblog, 2025/08/26


There's some interesting thinking happening in this article, though I fear it may be based on a fatal flaw. In ordinary commerce - that is, the day-to-day encounters between people in the real world - a lot of our intentions and expectations are tacit. Clothing, for example, can signal in a non-explicit way how we want to be treated by other people. But no such mechanism exists in the digital space, and so service providers are (according to this story) forced to resort to deep surveillance in order to understand what you want. This, writes Searls, creates the case for an intention-based internet "where the demand side of the marketplace can better signal its wants, needs, and ability to engage in mutually beneficial ways." More on this idea here. The problem, in my view, is that this depends on our being able to make explicit knowledge about ourselves that is generally tacit. We probably can't do this effectively.



The Power of Short Form for Growing Student Voice
Kim Culbertson, Middleweb, 2025/08/26


According to Kim Culbertson, "writing 100-word stories can immerse students in a journey of exploration as they discover their own writing voices." There's all too much emphasis on the use of pen and paper in this piece, which to my mind obscures the important bit: that writing short pieces enhances a person's sense of agency and voice. Indeed, it's what I do in these posts! What (to my mind) makes the process work is that you're working with space rather than words; you plan (say) one page of writing, which you subdivide into key segments, each relating to the others. Like this: "Choose a setting, the name of a character, and an object (and) create a 100-word story where because of that object, the character has something go wrong." Or, in my case: find an article, identify its main point, the key elements that lead to that point, and the context that makes the point important. It seems simple, but it's hard; still, it can be mastered by anyone, and with practice it makes the person an effective writer.



The Pragmatics of Scientific Representation
Mauricio Suárez, Universidad Complutense de Madrid, 2025/08/26


Representations - such as theories or models - play a key role in science. A 'representation' of a thing is a second thing whose properties help us learn about the first. But, as Mauricio Suárez notes, there's no good theory of what makes one thing a representation of another. In this paper he argues against two 'naturalistic' theories of representation, similarity and isomorphism, and proposes an alternative based on representational 'force', the nature of which is theoretical and the significance of which is non-naturalistic, grounded instead in the perspective of the person doing the representing. Specifically, "a non-identity based understanding of similarity, which emphasises the essential role of contextual factors and agent-driven purposes in similarity." I prefer the term 'salience' to 'force', but the intent is the same.



US Government Secures 10% Stake in Intel in an Unprecedented Deal
Aminu Abdullahi, TechRepublic, 2025/08/26


I personally have no problem with government taking a stake in corporations it is in the process of bailing out, and have (in other contexts) called for it to do the same when coming to the aid of key components of our economy, such as the auto sector or banking industry. It's a bit more surprising to see such a plan coming from a right-wing administration, which (ostensibly) argues for the separation of industry and state. More from the Register.



To Post or Not to Post: AI Ethics in the Age of Big Tech
Henrik Skaug Sætra, Communications of the ACM, 2025/08/26


This is quite a good post on the three roles in ethics - descriptive, normative, and action - in the context of technology. While the discussion concerns AI ethics, the case study focuses on X/Twitter. "Those content with being partial or full hypocrites can clearly do descriptive and normative AI ethics. However, they will not as easily be able to do action AI ethics, as this is premised on the idea that the ethicist accepts a moral obligation to effect change in line with their knowledge and capacities." This becomes difficult when "our lives are increasingly complicated by regrettable things brought about through our associations with other people or with the social, economic, and political institutions in which we live our lives and make our livings." Video with full transcript. Via Apostolos K.



We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2025 Stephen Downes. Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.