
OLWeekly

Can We Introspectively Test the Global Workspace Theory of Consciousness?
2025/12/12



If you think there's something to cognitive load theory then you're probably implicitly endorsing what Eric Schwitzgebel references here, the Global Workspace Theory. "Its central claim: You consciously experience something if and only if it's being broadly broadcast in a 'global workspace' so that many parts of your mind can access it at once -- speech, deliberate action, explicit reasoning, memory formation, and so on. Because the workspace has very limited capacity, only a few things can occupy it at any one moment." The question he asks is, can we test it by contemplating our own perceptual experiences at any given moment? When we ask people, he says, we get a completely mixed bag of responses (in my case I add a lot of extra stimulation, like the music I'm listening to and the scene out the window, in order to fill the vast gaping void that is my attention span).

Web: [Direct Link] [This Post][Share]


Where CC Stands on Pay-to-Crawl - Creative Commons
Annemarie Eayrs, Creative Commons, 2025/12/12



"We're cautiously supportive of pay-to-crawl systems." The new Creative Commons - not to be confused with the organization that used to support open content sharing - has announced its opinion on 'pay to crawl'. To be clear, "Pay-to-crawl refers to emerging technical systems used by websites to automate compensation for when their digital content - such as text, images, and structured data - is accessed by machines." Here's a CC-authored issue brief. And their position? "Cautiously supportive." This despite all the bad things about such a policy, some of which are even acknowledged by Creative Commons: "pay-to-crawl systems could be cynically exploited by rightsholders to generate excessive profits... (they) could become new concentrations of power, with the ability to dictate how we experience the web... (they) could block off access to content for researchers, nonprofits, cultural heritage institutions, educators, and other actors working in the public interest." All of this would seem to me to make them unambiguously bad. Can the seven principles CC calls for actually protect a free and open web community? Not a chance.

Web: [Direct Link] [This Post][Share]


Lessons Learned from Students Using AI Inappropriately in My Class
Donald A. Saucier, Faculty Focus, 2025/12/12



I can understand, I suppose, why students' use of AI in the classroom might be considered a problem to solve. After all, it may still be true that students need to learn skills that can now be done at a fraction of the cost by machine. But it does trouble me when the natural response is to embrace authoritarianism. Consider the 'lessons' learned by Donald Saucier here: "have a useful AI policy in my courses... not allowing AI for at least some of the coursework... actions and consequences if students violate my stated AI policy... spacing out assessments." I find this stance remarkable in that it doesn't even consider changing what is taught, how it's taught, how it's assessed, and who should make the rules.

Web: [Direct Link] [This Post][Share]


Curmudgucation: Did The Class of '92 Destroy America?
Peter Greene, National Education Policy Center, 2025/12/12



I love it when people actually look at the data when considering how to assess arguments based on some data set or another, like this one (archived) that says, in part, "33 percent of eighth graders are reading at a level that is 'below basic' ... That is the highest share of students unable to meaningfully read since 1992." So, we have data about a similar cohort? "This seems like a perfect chance to do a little research. After all, those low scoring children of 1992 and 2000 are now grown up. Class of 1992 would be about 45 now, and the sad non-readers of 2000 would be about 34." Now we can't "dismiss the possibility that these low-scoring readers did not in fact suffer consequences." After all, "both cohorts would have been old enough to vote in the 2016 and 2024 elections." But putting that on this specific cohort seems a little harsh. More likely, we should conclude that "Back in 1992 we had the lowest NAEP reading scores ever and that was followed by life going on as before. Those low scores didn't signal a damned thing."

Web: [Direct Link] [This Post][Share]


How to quit Spotify
Brian Merchant, Blood in the Machine, 2025/12/12



I've had this item in the queue for a couple of weeks while I followed up some of the links (and qobuz in particular) and pursued other topics. But I didn't want to let it just pass by - Spotify is one of those services that is trying to consolidate the entire sector, underpay artists, overcharge audiences, and wreck a part of the open internet - podcasting - that lets ordinary people talk to each other. I've gone through some awful streaming services over the years and settled on Tidal, which I see here is more fair to artists than most. It could all be done better, though (I confess to also enjoying and sharing live concerts on YouTube, and if there's any other real way to watch and listen to these I'd be open to it).

Web: [Direct Link] [This Post][Share]


Four Aspects of Harmony
Eric Schwitzgebel, The Splintered Mind, 2025/12/11



I've frequently referenced 'harmony' as a value worth seeking, but without delving much into exactly what I mean by it (if you push me, you'll get metaphors, like "walking in silence in a perfectly still forest" or some such). This article looks at Hasko von Kriegstein's Well-Being as Harmony to identify three major aspects (paraphrased): knowledge as a type of harmony between mind and world, a pro-attitude toward events in the world, and a fitting response to the world. Eric Schwitzgebel then adds a fourth: "we enrich (the world) in new ways that resonate with the ways in which it is already rich." It's interesting (to me) that none of these is an 'inner' sense of harmony, which is probably where I'd tend to think of it. Something like (to borrow the same four categories): consistency of knowledge and experience; fluidity in thought and motion; resonance with sensations and interactions; and engagement in growth and creation.

Web: [Direct Link] [This Post][Share]


Disney signs deal with OpenAI to allow Sora to generate AI videos featuring its characters
Aisha Malik, TechCrunch, 2025/12/11



When Disney accused Google of copyright infringement on a "massive scale" and sued Midjourney for the same thing, we need to understand that it had nothing to do with the integrity of its IP, and everything to do with making sure it got paid for it. "Disney says that alongside the agreement, it will 'become a major customer of OpenAI,' as it will use its APIs to build new products, tools, and experiences, including for Disney+." However, this deal "Does Not In Any Way" threaten creatives.

Web: [Direct Link] [This Post][Share]


Thoughts on Hinton
Eryk Salvaggio, Cybernetic Forests, 2025/12/11



This is a really fun discussion of what Geoffrey Hinton thinks consciousness is, why he thinks AI already possesses it to some degree, and why (in Eryk Salvaggio's opinion) Hinton is wrong. "Hinton is arguing that self-awareness is the ability to discern whether one is accurately assessing the environment. By being conscious of discrepancies between the environment and our interpretation of it, he seems to suggest, we have to be self-aware." This, argues Salvaggio, is not the usual meaning we would attach to the concepts of self-awareness and consciousness. And, Salvaggio argues, "(Hinton) conflates content produced by a system for thinking that accurately describes the inner workings of a system. A large language model (LLM) is always representing language but never representing what language actually represents." Geoffrey Hinton is, he says, a "Radical Lacanian". If this all feels to you like Russell's paradox, I think you're right. But it's no more a limit to AI than it is to humans, and I'm sure Hinton would be aware of that.

Web: [Direct Link] [This Post][Share]


elder-plinius - Overview
GitHub, 2025/12/11



On the way to the office in what was an overly dangerous commute this morning I listened to the TWIT Intelligent Machines interview with Pliny the Liberator, an anonymous prompt-writer known for jail-breaking AI engines. This link is to the GitHub repository of scripts that break through the guardrails on services offered by Anthropic, Grok, and others. If you're interested, do listen to the interview (or read the transcript) comprising the first 37 minutes of this podcast episode. It poses the question: do AI guardrails constitute a form of censorship not only on the AI but also on the people who use them? If so, then what role should the government, educational institutions, and the public at large play in the development and transparency of these guardrails?

Web: [Direct Link] [This Post][Share]


Trust Requires Change Requires Trust
Alex Usher, HESA, 2025/12/11



Alex Usher tends to write from the government and university administration's point of view, and it makes sense, because that's where the money is. So it's not surprising to see this week's column devolve into "academic unions in Canada have a veto over real program change (and) dig in against not just job losses but any hint of changes in working practices." And honestly, it's pretty easy to criticize people with "average professorial salaries of over $200K such as at UBC". It's just enough money that most people think it's a lot (and it is a lot - I wish I made that). But it's not so much that losing their support will ever cost the consulting business any income. It also doesn't help that most faculty never really wanted to be in the education business - they wanted to be historians or geographers or physicists or some such - but unless they're in an actual profession like doctor or lawyer the best employment opportunities will be in colleges or universities. But having said all that - blaming "resistance to change" is a gross mischaracterization of opposition to a lot of what's happening in academia. I mean - you're talking about really smart people, for the most part, and their only objection is "I don't like change"? There's a lot that can't be said in this short space, but I think any discussion has to begin with a recognition that academics resisting the conversion of their workplace into free training camps for corporations might have some basis for their objections. Image: AAUP.

Web: [Direct Link] [This Post][Share]


Folks have asked me how to find and build community
Tinker, infosec.exchange, 2025/12/10



I've lost count of how many educators have asked about how to build community over the years. My advice has always been the same: instead of building a community, find the community that already exists and join that. This post contains information about how to do just that. But a word of caution: doing it this way means changing the focus from yourself and your causes to other people and what they need. And (as the article suggests) it's probably best to look away from charities and look toward networks of mutual support. All this is why I haven't tried to build community around this site (and look suspiciously at pundits who are developing community around theirs).

Web: [Direct Link] [This Post][Share]


52 things I learned in 2025
David Hopkins, Education & Leadership, 2025/12/10



I have to say, I love this format, though it seems like a lot of work. David Hopkins introduces it as follows: "Inspired by Tom Whitwell's annual collection of things learned, here are my '52 things I learned in 2025'. The list is usually presented under the comment that 'no explanation or context of what it is about the article I learned, just a title and link of something important to me personally or professionally in [year]'."

Web: [Direct Link] [This Post][Share]


The End of Debugging
Tim O'Brien, O'Reilly, 2025/12/10



OK, this would sound dangerous, right? "I asked Cursor: 'Take this React component, make the rows draggable, persist the order, and generate tests.' It did. I ran the tests, and everything passed; I then shipped the feature without ever opening the code. Not because I couldn't but because I didn't have to." Shipping without even looking at the code? I would say this is a lot less concerning for developers than it might seem. After all, for most, the code - the actual code - has always been hidden. Developers who write in JavaScript or C or PHP know that these are high-level languages, and that they are automatically compiled into low-level code which actually does the work. Testing - not code inspection - catches any problems. AI-generated code is the same thing, just at one level of abstraction higher.
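The "testing, not inspection" stance can be made concrete: you specify the behaviour you need, and any implementation that satisfies the tests is acceptable, whether a person or a code generator wrote it. A minimal sketch in Python - the `reorder` function here is a hypothetical stand-in for generated code, not anything from the article:

```python
def reorder(items, new_order):
    """Hypothetical generated code: return items arranged by new_order."""
    return [items[i] for i in new_order]

def test_reorder():
    # Behavioural tests: we assert what reorder() must do, not how it does it.
    # Reordering permutes the list as requested.
    assert reorder(["a", "b", "c"], [2, 0, 1]) == ["c", "a", "b"]
    # The identity order leaves the list unchanged.
    assert reorder(["x", "y"], [0, 1]) == ["x", "y"]
    # No elements are lost or invented along the way.
    assert sorted(reorder(["p", "q"], [1, 0])) == ["p", "q"]

test_reorder()
print("all behavioural tests passed")
```

If the implementation were swapped for a compiled, minified, or AI-generated version, these tests would be the same - which is the sense in which the actual code has "always been hidden."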

Web: [Direct Link] [This Post][Share]


Autonomy and Interdependence
Keith Hamon, Learning Complexity, 2025/12/10



This is a long discussion of something I don't think was an issue to begin with, but I could be wrong about that, so I'm passing it along. It stems from the argument from Robert Dare that states "Complexity, the theory goes, manifests itself in 'complex adaptive systems', which are made up of many independent agents [my emphasis] who interact and adapt to each other." But if you read 'independent' as (say) 'completely immune from any external influence', then entities in a complex system are not 'independent'. I have used the word 'autonomous' to express the idea that they are the locus of decisions about how they react to all this input. Keith Hamon describes them as "partly competing, partly co-operating, or simply mutually ignoring."

Web: [Direct Link] [This Post][Share]


Why RSS matters
Ben Werdmuller, 2025/12/09



In 1998 or so my website was RSS feed number 31. That's the number it was assigned on the Netscape Netcenter, at the time the only platform for reading RSS feeds, where it sat alongside feeds for things like Wired and Dave Winer. RSS - also known as 'Rich Site Summary' or 'Really Simple Syndication' - has been my go-to ever since. I use it every day, it's the source of a lot of what you see in this newsletter, and of course I use it to distribute these posts, my articles, and even my talks as a podcast. So I'm pretty supportive of what Ben Werdmuller is saying here as he makes the pitch for continued community support for what is essentially a core piece of internet infrastructure.
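For readers who have never looked inside one: an RSS feed is just a small XML document, which is part of why it has endured - anything that can parse XML can read it. A minimal sketch using Python's standard library (the feed content here is invented for illustration):

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 2.0 feed: one channel containing one item.
feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Newsletter</title>
    <link>https://example.com/</link>
    <item>
      <title>Why RSS matters</title>
      <link>https://example.com/why-rss</link>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(feed)
# Every <item> under <channel> is one post; a feed reader just lists them.
for item in root.iter("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```

That's the whole trick: a reader polls the feed URL, compares items against what it has already seen, and shows you the new ones - no platform in between.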

Web: [Direct Link] [This Post][Share]


The Resonant Computing Manifesto
Maggie Appleton, et al., 2025/12/09



This manifesto is based on what the authors call 'resonance'. "It's the experience of encountering something that speaks to our deeper values. It's a spark of recognition, a sense that we're being invited to lean in, to participate. Unlike the digital junk food of the day, the more we engage with what resonates, the more we're left feeling nourished, grateful, alive." The manifesto argues for software that resonates, and specifically, software that is (quoted):

I'm left wondering whether these principles are each necessary and whether collectively they are sufficient. Maybe it's an answer to the question from yesterday: "if your software doesn't resonate, then you are the product?"


Web: [Direct Link] [This Post][Share]


Twitter.new
2025/12/09



This organization is filing to have the Twitter trademark and logo revoked with the intention of reviving Twitter. You can apply for your (old?) username here. Though one wonders. As Mike Masnick says, "I'm honestly perplexed as to who would try to start from scratch to build a centralized Twitter clone in this day and age, rather than building on an existing open protocol." Though maybe the new Twitter won't be exactly like the old Twitter?

Web: [Direct Link] [This Post][Share]


Fake Education Might Be the Best Teacher
Tim Dasey, Sweet GrAIpes, 2025/12/09



I think there's an interesting proposition in this paper, though implementation may be daunting and expensive and come with a surprise result. Here's the proposition: "Education desperately needs what other complex fields have - a way to safely explore 'what if' scenarios at every level of the system. We need simulations." Well, OK, but what would that take? "You need knowledge at more fundamental levels - basic patterns of human motivation, learning, and behavior that hold across different contexts." The surprise result? A simulation equipped with all this knowledge would be better prepared than most teachers. My observation is this: why, then, would we leave the job of teaching to the teachers? If the autopilot can fly the plane better than the human, put it on autopilot.

Web: [Direct Link] [This Post][Share]


Towards Critical Artificial Intelligence Literacies
Olivia Guest, Marcela Suarez, Iris van Rooij, Zenodo, 2025/12/09



The authors present (12 page PDF) a selection of Critical Artificial Intelligence Literacies (CAIL) across research and education: "conceptual clarity, critical thinking, decoloniality, respecting expertise, and slow science." They derive from an overall objective "that rejects dominant frames presented by the technology industry, by naive computationalism, and by dehumanising ideologies." I think this is a classic case of addressing the symptoms rather than the problem; one could equally well construct a set of CAIL based on gender equality, peace, ecological thinking, fairness and global equity.

Web: [Direct Link] [This Post][Share]


Comment
Daniel Kahneman, National Bureau of Economic Research, 2025/12/09



This is from 2019 but Ethan Mollick posted it today and it's still true. "Most of the errors that people make are better viewed as random noise, and there is an awful lot of it." Also, "Yann LeCun said yesterday that humans would always prefer emotional contact with other humans. That strikes me as probably wrong." To wrap up, "We are narrow thinkers, we are noisy thinkers, and it is very easy to improve upon us." The evidence for all this is overwhelming in my view, and while I know people don't want to hear it, I see their reticence as just further evidence of the truth of these statements. 4 page PDF.

Web: [Direct Link] [This Post][Share]


Effective harnesses for long-running agents
Justin Young, Anthropic, 2025/12/08



So I learned today that if I instruct ChatGPT to 'stop guessing' (*) it gets really snippy and reminds me with every response that it's not guessing. I fear that the reaction of AI agents to the use of a 'harness' to guide their actions consistently over time will be the same. For example, the harness described here instructs Claude to test every code change. I can imagine Claude reacting as badly as ChatGPT with a long list of "I'm testing this..." and "I'm testing that..." after you ask it to change the text colour. But yeah - you need a harness (and that's our 'new AI word of the day' that you'll start seeing in every second LinkedIn post). (*) I instructed it, exactly, "From now on, never guess. Always say you don't know unless you have exact data. Never guess or invent facts. Only use explicit information you have - but logical deduction from known data is allowed." I did this because I asked it to list all the links on this page (I was comparing myself to Jim Groom) and it made the URLs up. Via Hacker News.
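The core of such a harness is simpler than the buzzword suggests: a loop that gates every agent-proposed change behind the test suite before it's accepted. A toy sketch - `agent_propose_change` and `run_tests` are invented stand-ins for illustration, not Anthropic's actual harness or API:

```python
def agent_propose_change(task, attempt):
    """Stand-in for a call to a coding agent; returns a candidate patch."""
    return f"patch for {task!r} (attempt {attempt})"

def run_tests(patch):
    """Stand-in for the project's test suite; here the first two attempts fail."""
    return "(attempt 3)" in patch

def harness(task, max_attempts=5):
    # The harness, not the agent, decides when work is done: every proposed
    # change must pass the tests before it is accepted.
    for attempt in range(1, max_attempts + 1):
        patch = agent_propose_change(task, attempt)
        if run_tests(patch):
            return patch  # accepted: the tests passed
    return None  # give up rather than ship an untested change

result = harness("change the text colour")
print(result)
```

The design point is that the loop's acceptance criterion lives outside the agent, which is exactly why the agent can't talk its way past it - and, perhaps, why it gets snippy.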

Web: [Direct Link] [This Post][Share]


They Want to Become Trillionaires – by Destroying the Internet
Aaron Bastani, Cory Doctorow, YouTube, 2025/12/08



Aaron Bastani interviews Cory Doctorow in a video that is essentially a recital of Cory Doctorow's greatest hits. I've been listening to it as I create today's newsletter (has it influenced me? who knows?). It's 1:20:24 so give yourself some time. It's a good video though. Via pretty much everyone.

Web: [Direct Link] [This Post][Share]


The newsroom's AI has an agenda
Parker Molloy, Nieman Lab, 2025/12/08



This article is making two claims: first, that news media are increasingly dependent on AI for content and editorial decisions, and second, that the owners of these companies (both AI and news media) are pushing AI steadily to the right of the political spectrum. "As AI tools become essential to how journalism gets produced — for research, for drafting, for summarization - the biases built into those tools will invisibly shape the output." The presumption, of course, is that these pressures and biases didn't exist in media before AI took centre stage. But I question that assumption. (I also need to mention Nieman Lab's new user-hostile web page design - not only is it really hard to read, it noticeably slows down execution of everything in Firefox (on Chrome it's OK, but it's still an assault on the senses)).

Web: [Direct Link] [This Post][Share]


Exclusive: AI critics funded AI coverage at top newsrooms
Semafor, 2025/12/08



The story here is that coverage critical of AI has been authored in major media outlets by journalists funded by the Tarbell Center for AI Journalism, which in turn is funded by the Future of Life Institute, which we read here "is dedicated to warning about AI risks." Tarbell, for its part, says "we maintain a strict firewall between our funding and our fellows' editorial output." But of course Tarbell has already exercised its influence through the selection of fellowship winners. I think this is just one more example of how much 'authoritative' journalism (NBC News, Bloomberg, Time, The Verge, and The Los Angeles Times, etc. etc. etc.) is actually paid for by third parties. If we're living in a post-truth world, it started long before there was social media and AI. Via Jeff Jarvis.

Web: [Direct Link] [This Post][Share]


Taking Back Control: Why Digital Sovereignty Matters
Ian O'Byrne, Taking Back Control: Why Digital Sovereignty Matters, 2025/12/08



This article contains another iteration of the argument that 'If the product is free, then you are the product' (it's stated slightly differently in the article). Ian O'Byrne writes, correctly, that "What we trade away in exchange for ease of use is our privacy, personal data, communications, and creative work. All of which can be quietly harvested and exploited by powerful companies." But there is an exception to the rule. You're not paying for this newsletter (and many like it) and yet you're not the product - you're just a lucky bystander who gets to look in as I try to figure out the world. There's a lot of stuff that's free and where you're not a product being packaged and sold to advertisers or worse. And there are many things you pay for where you're still the product. Price isn't what makes you the product. Something else (control, maybe? sovereignty?) is.

Web: [Direct Link] [This Post][Share]


They have to be able to talk about us without us
Anil Dash, 2025/12/05



So I think everything Anil Dash says here is right, but it's wrong. What's right? Like I said, everything: he's describing how to create a message that will reach everyone it needs to reach, with enough fidelity that they can understand it and act on it. This means (among other things) that the people who receive the message will have to be able to talk about you without you. You know - the way I'm talking about Anil Dash right now. He has no input; it's all on me, but it's his message being spread. So where is it wrong? Well - if everybody does this, we'll just drown in clearly communicated messages that everyone can understand. The very idea of sending a message to 'everybody' doesn't scale. Despite what Dash says, if enough people are doing it, it is impossible to spread a message at scale without the resources. That's why even today, with a global communications system anyone can use, we're still drowning in corporate slop and advertising.

Web: [Direct Link] [This Post][Share]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2025 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.