Stephen Downes

Knowledge, Learning, Community

'Transparency' is one of those 'ethical AI' virtues that sounds good in the abstract but becomes harder to define (and to reach consensus on) the closer you look at it. Here the European Commission offers a first draft (32 page PDF), though what we have is not so much an ethical code as the beginning of a legal framework. Still, it's progress. So, what is transparency? Here's one take: "marking and detection of AI-generated and manipulated content." This raises questions of technical feasibility (especially for smaller enterprises), agreement on open standards and specifications, and trust and cooperation along the value chain. Additionally, such marking needs to be detectable by the people and systems that access the content. This requires "understandable and accessible disclosure of verification and detection results," whatever that means, and "literacy for AI content provenance and verification." So - is it part of AI ethics to require (in some sense) AI literacy training? How can we have "transparency" otherwise? There's also language on measurement and markings, leading to the question of what sort of, or how much, AI assistance counts as 'AI manipulation' or 'deepfakes'. See also: Deepfakes leveled up in 2025.



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2026
Last Updated: Jan 05, 2026 2:24 p.m.

Creative Commons License.