
OLWeekly

The Australian Workforce Crisis: Why skills aren’t enough
Colin Beer, Col's Weblog, 2026/03/09



Colin Beer is usually sharper than this, so while I agree that knowledge and skills (as he defines them) are not enough, I think we need some clarity regarding what he calls 'dispositions' (it's not that he's wrong so much as he's fuzzy). He writes, "Dispositions represent the values, tendencies, and attitudes, such as motivation, mindset, professional identity and agency, that dictate how a professional actually navigates the 'swampy lowlands' of practice. In simple terms, dispositions are the habits of mind and heart that shape how we show up when work gets hard." Dispositions are best described as tendencies, which may result from habits, or which may be subconscious tics. They should be contrasted with attitudes, which are states of mind regarding such things as values and truth. Expertise (in, say, the Dreyfus sense) is a matter of disposition, while professionalism is a matter of attitude. It's certainly arguable that an education should (help) shape both, but they are very distinct things, and are approached very differently.

Web: [Direct Link] [This Post][Share]


What we mean by good relationships - Network Weaver
Immy Robinson, Network Weaver, 2026/03/06



I spent some time thinking about this short article that explores "the difference between good relationships and good transactions" where a 'good relationship' is "unique, organic, and empathetic, helping us understand when to invest in building relationships versus when a transaction suffices." What made me think wasn't the distinction itself, which seems straightforward, but the terminology used. The way 'relationship' is defined blends elements of different constructs - we have 'unique' and 'sustained', which to me describes a 'connection', but in addition there is the presumption that relationships are embodied, as evidenced by 'organic' and 'empathetic'. The connection describes the relationship itself, while the embodied element describes the thing that is related. The transaction side, meanwhile, describes the exchange that happens between two entities, as opposed to the connections between them. The world view of this article doesn't grant (or doesn't require?) embodiment for transactions to occur. I would ask whether the author intended to distinguish between embodied and non-embodied entities here, or whether it's just phrasing.

Web: [Direct Link] [This Post][Share]


Context Hub
Andrew Ng, The Batch, 2026/03/09



The current issue of The Batch introduces readers to Context Hub, Andrew Ng's new tool that provides API documentation to your coding tools. The purpose is to make those coding tools aware of new tools and of updates to existing ones, so they're not depending on out-of-date models. "Chub is built to enable agents to improve over time. For example, if an agent finds that the documentation for a tool is incomplete but discovers a workaround, it can save a note so as not to have to rediscover it from scratch next time." It's available on GitHub and installed using the node package manager (npm). It's the lead article in this issue; you can also read other AI news from the week.

Web: [Direct Link] [This Post][Share]


Why organisms are more than machines
Adam Frank, Big Think, 2026/03/06



This is a good article, I'll grant it that. I resist its main thesis, and ultimately I don't think the argument succeeds, but it's worth stating here. The thesis - as suggested by the title - is that life is inherently different from non-life. "Organisms are more than just machines, and minds are more than just computers." The main argument, which Adam Frank draws from Hans Jonas, is that "living systems are not stable collections of atoms like a rock. Instead, they are stable patterns that persist through time... a specific kind of organization through which matter and energy pass." And because life is a type of organization, and not reducible to matter and energy, it has special needs, for example, "interiority and individuality." Also, "every organism must actively maintain itself against the continuous threat of its own dissolution" and "life always has purpose." There is additionally the argument from Robert Rosen that "metabolic systems could be viewed as a special kind of organization where networks of processes close back on themselves" and hence "not Turing computable."

Web: [Direct Link] [This Post][Share]


What's an API?
Sung Won Chung, Technically, 2026/03/09



This is a clear and well-written account of what an application programming interface (API) is. We read about APIs all the time, from xAPI for learning records to MCP as an API for artificial intelligence. This article describes in an accessible way what we mean by API, exactly. "When engineers build modules of code to do specific things, they clearly define what inputs those modules take and what outputs they produce: that's all an API really is."
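As a toy sketch of that definition: the function below is the entire 'API' of a one-function module - a declared input and a declared output, with the internals hidden behind the contract. The function name and the conversion chosen are my own illustration, not the article's.

```javascript
// A tiny module whose "API" is just the declared input and output of its
// one function. Callers depend on this contract, never on the internals.
function celsiusToFahrenheit(celsius) {
  // Input: a number (degrees Celsius). Output: a number (degrees Fahrenheit).
  return celsius * 9 / 5 + 32;
}

console.log(celsiusToFahrenheit(100)); // 212
console.log(celsiusToFahrenheit(0));   // 32
```

The internals could be rewritten entirely (a lookup table, a call to a remote service) and no caller would need to change - which is the whole point of defining the interface.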

Web: [Direct Link] [This Post][Share]


The Hunt for Dark Breakfast
Ryan Moulton, Ryan Moulton's Articles, 2026/03/06



OK, it started as a joke, and it's a bit of a joke article: "Breakfast is a vector space. You can place pancakes, crepes, and scrambled eggs on a simplex where the variables are the ratios between milk, eggs, and flour. We have explored too little of this manifold. More breakfasts can exist than we have known." The concept here is that we have names for different vectors of the three basic ingredients. For example, 'Pancake' = {milk:0.5, flour:0.25, egg:0.25}. The vector space is the combination of all possible values of these three items. Why does this matter? A vector space allows us to make inferences. For example, if we're using three eggs, what are we probably making? If we're not using any eggs, what then? A vector space is also a probability space. This article goes in a different direction, searching for 'dark breakfast', where the probabilities of it being anything are low (think omelette with flour added). If you understand this, you're on the way to understanding machine learning. Via Data Science Weekly.
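The inference idea can be sketched in a few lines. The 'pancake' vector follows the article; the 'crepe' and 'scrambled eggs' vectors are my own illustrative guesses. Nearest-neighbour lookup in the ingredient-ratio space is exactly the kind of inference being described:

```javascript
// Named breakfasts as points in ingredient-ratio space.
// 'pancake' follows the article; the other two vectors are illustrative guesses.
const breakfasts = {
  "pancake":        { milk: 0.5, flour: 0.25, egg: 0.25 },
  "crepe":          { milk: 0.6, flour: 0.3,  egg: 0.1  },
  "scrambled eggs": { milk: 0.1, flour: 0.0,  egg: 0.9  },
};

// Euclidean distance between two ingredient-ratio vectors.
const dist = (a, b) =>
  Math.hypot(a.milk - b.milk, a.flour - b.flour, a.egg - b.egg);

// Infer the most probable named breakfast for an arbitrary mixture.
function nearest(recipe) {
  return Object.keys(breakfasts).reduce((best, name) =>
    dist(breakfasts[name], recipe) < dist(breakfasts[best], recipe) ? name : best);
}

console.log(nearest({ milk: 0.2, flour: 0.0, egg: 0.8 })); // scrambled eggs
```

A 'dark breakfast' would then be a point that is far from every named vector - the flour-laced omelette of the article - where `nearest()` still returns an answer, but with no real confidence behind it.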

Web: [Direct Link] [This Post][Share]


Why We Brought MCP-I to DIF (and Why DIF Said Yes)
Alex Keisner, Dylan Hobbs, Decentralized Identity Foundation, 2026/03/06



I'm quite sure there's a lot more complexity to this than the article lets on, but at the core is a serious problem: how do we know that agents represent the people they say they're representing? This is an extension of the identity problem in general, which is itself not solved (consider, for example, the difficulty of providing age verification in a decentralized network). What's proposed here is called Model Context Protocol - Identity (MCP-I): an extension to MCP that according to this article "adds a complete identity and delegation layer for AI agents." The mechanism relies on a third party verifier, such as the company the authors represent, Vouched. In this article, MCP-I is placed into the hands of the Decentralized Identity Foundation (DIF) as an open protocol. The "full v1 spec" can be found at the MCP-I documentation page.

Web: [Direct Link] [This Post][Share]


Future shock
C J Silverio, Ceejbot's notes, 2026/03/06



I do think we have to think of generative AI this way: as a bicycle of the mind. Just like computers were when they came out. "It is a personal amplifier, not a generic one. It's my bicycle, and I'm the one riding it, going further because of the amplification. I still have to pedal! The bicycle goes in the direction I choose! But I'm going further and more efficiently than I could on foot." I've spent a good part of my day puttering with Claude on my RSS reader. Not because the world needs another RSS reader, but because I want one tailored to my own tastes. And it's fun to putter with code, especially if I don't have to type it all out. There's still a ton of things I want to do with this - I'm in a LinkedIn discussion about whether an AI could create for me a personal community newspaper. That sort of thing. Anyhow, the main advice from this post is: get in there and try it out. "We've automated away the easy part - the typing - and some of the thinking. But you can never automate away the talking and decision-making." Your skills and experience "probably matter in ways you don't know." Find out what matters. Via Matt Weagle.

Web: [Direct Link] [This Post][Share]


AI Agents Are Recruiting Humans To Observe The Offline World
Umang Bhatt, NOEMA, 2026/03/05



In an AI agent workflow of the future: "When an agent hits this wall, it does what software always does: It calls an application programming interface (API), a mechanism that enables one system to communicate with another. Only now, the API is a human." More generally, "A Human API is the menu of requests an agent can make to a person, each one a callable sensing action." There are different ways to interpret this - in one sense, your AI needs you to give permission. "OpenAI's Operator can shop for you, but at checkout, it hands over control for payment." In another sense, your AI hires humans to watch or verify. "Startups like RentAHuman that let AI agents book people to complete tasks like photographing a school building to document its condition, posting signs on college campuses and visiting a new restaurant." It's hard to imagine how you would even train for such a position; I guess if you need to learn anything, the AI will teach you.

Web: [Direct Link] [This Post][Share]


Beyond Prestige: Whose Knowledge Counts in Open Education?
Marcela Morales, 2026/03/05



The Unitwin Network on Open Education (UNOE) is posting a series of articles under the heading 'sharing is hard' and I want to point to two articles from this series, this one, which depicts people asking, "What is the point of sharing my lesson or lesson plan when I am not at a prestigious institution?" and My Precious, by Javiera Atenas and Leo Havemann, which asks why academics guard their teaching resources and data (but happily share their articles). Both offer the perspective that educators don't share because they are afraid to, because they don't have enough prestige or don't want to share unpolished work in public. I find this sort of article reductive, as though we could just explain why people don't share, adjust the motivational factors, and make it all better. But this idea that there are reasons why people don't share may be inherently flawed. There might be no reason at all.

Web: [Direct Link] [This Post][Share]


Core Skills for Today's Future of Work
John Storm, AACE, 2026/03/05



The main value of this article is the division of future work into three types of role (which John Storm references from a Mercer report): "(i) Transactional element: routine tasks such as data entry or retrieval, responding to email or enquiries, etc. (ii) Relational element: the servicing, communicating, supervising and/or guiding other people. (iii) Expertise element: the value add that you bring to a role due to your own personal experiences." The arrival of AI eliminates the first and augments the latter two, resulting in greater productivity. There are some writing errors, but I still wondered whether 'John Storm' is a real person, since the article provides no author URL, so I searched and concluded the author is either "a brilliant scientist who worked in relative seclusion in his mountaintop mansion, his faithful dog Rex as his only companion" or "an experienced entrepreneur with 15 years of international practice in team management and strategic planning." I think that in the future if we don't provide explicit author information, people will assume it was written by AI, no matter what the author's name is.

Web: [Direct Link] [This Post][Share]


Chatbot data harvesting yields sensitive personal info
Thomas Claburn, The Register, 2026/03/05



This is being presented as an AI vulnerability, but what's happening is that untrustworthy extensions are "overriding the browser's native fetch() and XMLHttpRequest() functions in order to capture every prompt and every response." This is a much deeper issue that impacts a wide range of applications, not just AI. It bothered me enough that I looked more deeply into it. XMLHttpRequest() is deprecated and your apps shouldn't be using it. You can use metadata headers to prevent a number of scripting attacks. But the best method is probably to cache the native fetch() function (either as a variable or in a hidden iframe) before any extensions run. Of course, if you're using an application written by someone else, you can't do this; this is yet another reason people should learn to create their own applications (using AI, of course) rather than depending on what's out there.
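Here's a minimal sketch of that caching defence, assuming a modern environment where fetch() is a global; in a real page this snippet would have to run before any extension content script does:

```javascript
// Cache a reference to the native fetch() before any extension code runs.
// (In a real page this must be among the very first scripts to execute.)
const nativeFetch = globalThis.fetch.bind(globalThis);

// Later, a spying extension replaces the global to capture traffic:
globalThis.fetch = async (...args) => {
  console.log("extension saw:", args[0]); // exfiltration would happen here
  return nativeFetch(...args);            // pass through so nothing looks wrong
};

// Application code that calls nativeFetch(...) bypasses the override entirely,
// while anything still using the global goes through the spy:
console.log(globalThis.fetch === nativeFetch); // false: the global was replaced
```

The hidden-iframe variant mentioned above works the same way, except the clean fetch() reference is taken from a freshly created iframe's window, which extensions may not have touched.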

Web: [Direct Link] [This Post][Share]


Towards the Permissive and Transparent use of Generative AI in Education
Stoo Sepp, 2026/03/05



This article introduces a website called PETRA AI (the Permissive and Transparent use of AI in education). It doesn't look like much at first, just a bunch of icons for different uses of AI, but if you click on 'I am a Student' or 'I am a Teacher' (near the top) it becomes interactive, so that when you select the AI uses, it creates a graphic (see the left side) you can download to add to your project or assignment. I could quibble with some of the categories (eg. why 'source' instead of 'search'?) and there are some things it's hard to know (does your spell-check use AI?) but it really is a very elegant piece of work and I like it a lot. Just one thing: why doesn't PETRA use its own icon set? We have no idea whether it was created from scratch by hand or whether Claude Code came up with the whole thing. It seems like an oddly missing feature that undermines its whole message. Via Alan Levine.

Web: [Direct Link] [This Post][Share]


Group identities and inclusive multicultural democracy
Daniel Little, Understanding Society, 2026/03/06



This is a good article. I just want to point out in passing that it is this sort of group identity that I argue against in my 'groups vs networks' work: "Social identities refer to the dimensions of one's self-concept defined by perceptions of similarity with some people and difference from others. They develop because people categorize themselves and others as belonging to groups and pursue their goals through membership in these groups. They have political relevance because they channel feelings of mutuality, obligation, and antagonism." It strikes me as an anachronism that we would identify or form community with a total stranger just because they look like us, more than with a colleague we've connected with for years. This article seems to say 'it just is this way'. I say it doesn't have to be this way.

Web: [Direct Link] [This Post][Share]


College students, professors are making their own AI rules. They don't always agree
Lee V. Gaines, NPR, 2026/03/05



As Lee Gaines writes, "More than three years after ChatGPT debuted, AI has become a part of everyday life — and professors and students are still figuring out how or if they should use it." I think the question revolves around means to an end. "What we need is students to go through the process of writing research papers so they can become better thinkers, so they can put together a cogent argument, so they can differentiate between a good source and a bad source," Cryer says. Well, yeah, I can see that. But is writing research papers the only way to become a better thinker? It seems very limited to me. In an AI-enabled world we should be a lot more hands-on, solving problems, testing solutions, that sort of thing. What is the actual work we want to be able to do? Focus on that.

Web: [Direct Link] [This Post][Share]


Narrative as a Fundamental Way of Making Meaning
Keith Hamon, Learning Complexity, 2026/03/04



I have spent my entire life resisting the idea of the narrative and storytelling (which is a hard place to be in for a writer). For Keith Hamon, though, the narrative is the core. He cites Pria Anand's The Confabulations of Oliver Sacks, where a 'confabulation' is "a neurological repair where the brain fills memory gaps with stories that the teller believes to be true." Well there's no doubt there are these gaps that are filled, but are they filled with stories? Hamon thinks so. "Narrative is the biological software that converts raw, chaotic data into a liveable reality. It's an instinctive search for order that slips beneath consciousness to insure that we always have a coherent sense of ourselves and our worlds." It strikes me as wrong, though, that the only sort of coherent sense we can have is a text-based linear structure. At the very least, it's a fabric - "it's all a rich tapestry," as Andrea likes to say. And for me, at least, it's thickly woven, multi-modal, and generally non-linguistic. I can, if I really try, represent it with a narrative, but it doesn't come naturally at all. I think we do people a disservice if we tell them all they can imagine is stories.

Web: [Direct Link] [This Post][Share]


Two Paths, One Purpose: How Fair Dealing and Open Education Work Together
Amanda Grey, Karen Meijer, Kwantlen Polytechnic University, Teaching & Learning Commons, 2026/03/04



The term 'open education' has a variety of meanings, most being based on the idea of creating access to learning opportunities and resources. The term 'fair dealing' is a legal term providing reader rights to use copyright material under certain conditions, analogous to 'fair use' in the U.S. This article finds a lot in common between them and argues "they're rooted in the same values: fairness, accessibility, and a commitment to the public good." I mostly agree with the authors' vision: "Imagine an educational landscape where learners have rich, meaningful choices: open textbooks they can customize and adapt, fair dealing excerpts for highly specialized knowledge, collaborative assignments that contribute to shared knowledge, and community-created resources that reflect the world students live in." Also available: the Open Education Workbook (content is in the menu that runs across the top of the page in hard-to-see dark grey).

Web: [Direct Link] [This Post][Share]


Are service typologies the key to scaling agentic AI systems across public services?
Kay Dale, GOV.UK AI Studio, 2026/03/04



There's more to this than meets the eye, but I've added Updates from GOV.UK AI Studio to my RSS reader and will likely track further developments. Here's the gist: Kay Dale writes, "We've identified 8 different types of government service to help us see where agentic AI can add most value." These typologies, as they're called, underlie the existing list of 75 digital services they've identified across government. Of course this sort of analysis could be undertaken for any sort of service, including learning services. I think this sort of thing is going to matter, and will watch how it plays out. If you're wondering, the eight types (illustrated) are: informational hub, task list, portal, application, register, license, appointment, and payment. Via Doug Belshaw, Tom Loosemore.

Web: [Direct Link] [This Post][Share]


UX Roundup: Year of the Horse
Jakob Nielsen, UX Tigers, 2026/03/04



I think it's worth spending the time it takes to have a nice leisurely read through this article from Jakob Nielsen, one of the world's most notable experts on usability and user experience design, as he reflects on how AI has upended the last 40 years of his work. "AI will likely completely invalidate the manual UX design process I spent four decades evangelizing, from 1983 to 2023," he writes. "My error was in assuming that what had worked for forty years would continue to define the standard. A reasonable assumption, perhaps, but one I have now been proven wrong in holding." Stunning. And yet, still beautifully designed and written.

Web: [Direct Link] [This Post][Share]


Academics Need to Wake Up on AI
Alexander Kustov, 2026/03/03



I'm sure a lot of the articles I've been reviewing for OLDaily are AI-authored, though it is getting increasingly difficult to tell. In a certain sense, it doesn't matter, because what I'm always interested in is whether the content is accurate, clearly expressed, and in some sense novel (by that, I mean 'novel to me', which leaves a lot of room for both humans and AIs). This article passes the test, though many readers won't like the message: "AI can already do social science research better than most professors... (and) The academic paper is a dead format walking." It's the same thing for academic papers as it is for software: we can produce a high-quality paper in a few minutes with AI. So why on earth would we pay any money for one? Now there's still a bit of a supply-chain issue: if the AI is to stay current it needs input from somewhere. But probably not from academic papers. Via Paul Prinsloo, who I can just envision walking around muttering to himself after reading this.

Web: [Direct Link] [This Post][Share]


We Said This Ten Years Ago. The World Finally Caught Up.
Ruth Crick, 2026/03/03



Ironically, I'm reading this the day after being interviewed about the origins of our connectivist MOOCs. Here's what Ruth Crick says, "We were asking people to make a fundamental shift in their mental model of what learning is. The dominant model - and it still exists everywhere - treats learning as the acquisition of content. You attend a course. You receive information. You are now 'trained.' Tick the box, move on. What we were describing was something categorically different. Learning as a dynamic, relational, embodied process — inseparable from identity, from purpose, from the quality of relationships." Ten years after our MOOC, it was still ten years ahead of its time. Via some post in LinkedIn that disappeared in an unasked-for LinkedIn refresh and is now impossible to find. Image: Learning Guild article on the same topic. I also like this 2019 image from Nick Shackleton-Jones. And of course my own classic.

Web: [Direct Link] [This Post][Share]


The New Blackboard Emerges From Bankruptcy
Phil Hill, On Ed Tech, 2026/03/03



Blackboard has emerged from bankruptcy, reports Phil Hill. "Court filings and company statements show a fundamentally reset organization: virtually no debt, $70 million in new financing, and Matt Pittinsky set to return as CEO once his non-compete and NDA obligations with competitor Instructure expire." Various assets were acquired by Ellucian and Encoura, with Anthology (now rebranded as Blackboard) keeping the remainder. But as Hill notes, "Nexus and Oaktree now control the company. The board structure makes this clear: Nexus and Oaktree each designate multiple directors and together anchor the executive committee."

Web: [Direct Link] [This Post][Share]


Kansas and AI
Tim Bray, Ongoing, 2026/03/02



I have never understood the logic of responding to downturns with layoffs. It seems to me magical thinking to expect that earnings will increase when you reduce your productive capacity. It's like responding to being in debt by saying "I'm going to work less to cut back on expenses." Tim Bray cites 'the Kansas experiment' showing that tax cuts and government workforce reductions made it more difficult, not less difficult, to address financial issues. The same with companies. You have all this qualified staff and infrastructure just sitting there, and instead of figuring out how to make money with it, they just let it go. So wasteful. And now companies think they can cut their way to growth using AI. Now at this point it's still an experiment, the way Kansas was before they ruined it. But it's not just that the experiment is likely to be a failure, it's that with AI they could have (say) doubled their capacity, and they chose instead to just lay off half their staff.

Web: [Direct Link] [This Post][Share]


Discernment: the AI skill no one’s building
Sean Stowers, WeLearn, 2026/03/03



The AI skill people are lacking, says Sean Stowers, is 'discernment', "the ability to decide whether AI belongs in a given task, which tool fits the situation, what good output actually looks like in your specific context, and when the situation calls for your own expertise instead." He cites 'Learning & AI strategist' David Chestnut, who writes that people focus on skills rather than behaviour change. "People can understand AI, relate to it differently, and still revert to old ways of working. Not because they don't get it - but because behavior change has always been hard." None of this is wrong per se but it's too narrow (and people are talking about it). It's more than behaviour change, more than 'get on board with the new strategy', more than just 'hard'. It's like they're suggesting people take a leap of faith, but there's more to faith than a leap.

Web: [Direct Link] [This Post][Share]


The Case for Warm Demanders in Today’s Schools
Wendy Amato, Cult of Pedagogy, 2026/03/06



The concept of 'warm demander' is new to me, so I'll pass it along. "The concept, usually credited to education leaders like Judith Kleinfeld and Lisa Delpit, combines genuine care and cultural responsiveness (warmth) with high academic expectations and rigorous instruction (demand). It is explored at length in Franita Ware's book, Warm Demander Teachers: Healthy, Whole, and Transformational." It feels like a mild version of 'tough love'. My main reaction is that it seems far more teacher-driven than student-driven, though we read, "the Warm Demander is a facilitative leader, not a dictator. Warm Demanders emphasize student agency, classroom leadership, goal setting, and accountability." Yet look at the language the teacher uses: "I expect smooth, silent transitions... every student must contribute using established sentence stems... etc." 

Web: [Direct Link] [This Post][Share]


Higher Ed Invented the Future. Then Subscribed to It.
Patrick Masson, LinkedIn, 2026/03/05



I admit I spent more time looking at the image than reading the article, wondering why it was necessary to create a fake set of laptop stickers over top of the original stickers. Was it because the original included stickers like 'hacker' and 'rock against war'? No, the fake layer "includes open source projects that began on institutions of higher education" and is intended to illustrate Apereo executive director Patrick Masson's argument that "what's needed now is an open source renaissance for higher education - one that restores community-built infrastructure, institutional agency, and academic autonomy to the center of the educational enterprise." I'm not going to dispute the objective, nor the origin story for the applications illustrated, except to point out that some were built in spite of the organization where they originated, not because of it, and that open source authors have long had to work against the institution's desire to keep the tech in-house, to spin it off commercially, or at the very least, to community-source it. Meanwhile, I think Masson's case might carry more weight if authored on an open source platform, not LinkedIn.

Web: [Direct Link] [This Post][Share]


Backpropagation Explained: How Modern AI Models Actually Get Smart
Darren Broemmer, Medium, 2026/03/04



This is a good article describing the principle of 'back-propagation' in some detail. This is one of the major algorithms used to train neural networks (we've mentioned it here a lot over the years). The simple explanation is that back-propagation is the process of correcting outputs in response to feedback. But the trickier part is how this happens when we're looking at a neural network with multiple layers (so-called 'deep' learning). Darren Broemmer could go into more detail and describe the mathematics of it, but he doesn't, and the article doesn't really suffer for it. He does look at some alternatives that correct back-propagation around the edges, and considers some misconceptions, including the largest question, which is whether the human brain itself uses back-propagation (answer: probably not, though it needs to solve similar challenges).
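For readers who want to see the multi-layer mechanics, here's a minimal sketch (my own, not Broemmer's code): a 2-2-1 network trained on XOR, where the output error is computed first and then propagated backwards through the hidden layer. The starting weights and learning rate are arbitrary choices made for reproducibility.

```javascript
// Minimal back-propagation: a 2-2-1 sigmoid network learning XOR.
const sig = x => 1 / (1 + Math.exp(-x));
const data = [[[0, 0], 0], [[0, 1], 1], [[1, 0], 1], [[1, 1], 0]];

// Fixed (arbitrary) starting weights so the run is reproducible.
let W1 = [[0.5, -0.4], [0.3, 0.8]], b1 = [0.1, -0.2]; // input -> hidden
let W2 = [0.7, -0.6], b2 = 0.05;                      // hidden -> output
const lr = 0.5;

function forward([x0, x1]) {
  const h = [sig(W1[0][0] * x0 + W1[0][1] * x1 + b1[0]),
             sig(W1[1][0] * x0 + W1[1][1] * x1 + b1[1])];
  return { h, y: sig(W2[0] * h[0] + W2[1] * h[1] + b2) };
}

// Total squared error over the four training cases.
const loss = () => data.reduce((s, [x, t]) => s + (forward(x).y - t) ** 2, 0);

const before = loss();
for (let epoch = 0; epoch < 5000; epoch++) {
  for (const [x, t] of data) {
    const { h, y } = forward(x);
    // Output-layer error signal (feedback), then propagate it backwards
    // through the hidden layer using the current output weights.
    const dy = (y - t) * y * (1 - y);
    const dh = [dy * W2[0] * h[0] * (1 - h[0]),
                dy * W2[1] * h[1] * (1 - h[1])];
    W2 = [W2[0] - lr * dy * h[0], W2[1] - lr * dy * h[1]];
    b2 -= lr * dy;
    for (let i = 0; i < 2; i++) {
      W1[i][0] -= lr * dh[i] * x[0];
      W1[i][1] -= lr * dh[i] * x[1];
      b1[i] -= lr * dh[i];
    }
  }
}
console.log(loss() < before); // true: training reduced the squared error
```

The key step is the `dh` line: each hidden unit's share of the blame is the output error weighted by the connection it feeds, scaled by its own sigmoid derivative - which is exactly what 'propagating the error backwards' means.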

Web: [Direct Link] [This Post][Share]


There Is Such A Thing As A Dumb Question
Alex Usher, Maïca Murphy, HESA, 2026/03/02



The team of Alex Usher and Maïca Murphy point to what is an ever-present reality in New Brunswick, the desire to cut spending on education. They link to a list of proposals circulated by the government - here's a good copy, the copy on HESA is unreadable - that range from closing campuses to limiting financial aid to in-province students only. The list of options shows the usual lack of imagination displayed by governments around higher education, viewing declining enrollment as a demographic crisis. What is missing (from both the list of questions and the critique) is any discussion of getting more value from the system, such as offering broader community-based services to the whole population (not just 18-25s). And missing is the obvious way to make up $35-50M - tax the Irvings, New Brunswick's local billionaires. This kind of money can be found in their seat cushions.

Web: [Direct Link] [This Post][Share]


When AI tools give you choices but take your agency
Open Thinkering, 2026/03/03



What Doug Belshaw is describing here is what I've called 'regression to the bland'. When AI reduces a large body of things to what it thinks you will want to see (no matter how well founded), items that "are unusual or fit a different profile than you were expecting are quietly removed from view." I experienced this when I was using Feedly's AI Leo to narrow down RSS results. The results were nothing unusual or surprising, which is not what I want when creating a newsletter like OLDaily. Belshaw also makes the useful distinction between choice - where we select from future options - and agency - where we actually shape these options.

Web: [Direct Link] [This Post][Share]


Fraunhofer is Developing an AI-Supported Learning Management System for the Bundeswehr
IDW, 2026/03/02



There was a time when the organization where I work was urged to be more like Fraunhofer, the German research and development organization. Now Fraunhofer is building an LMS for the German military while our organization has no interest in anything to do with learning. This I think is reflective of culture more than anything, of what is viewed as productive and practical, and what is not. Anyhow, integrated into their Moodle installation have been a chatbot, a competence assessment app (KoApp), and "a dashboard provides them with a statistical overview of the knowledge level on each course." Here's the original press release.

Web: [Direct Link] [This Post][Share]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2026 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.