Stephen Downes

Knowledge, Learning, Community
Reframing Togetherness: Advances in artificial intelligence and the intersection of open learning


Unedited transcript from Google recorder.

(First five or ten minutes lost to the ether)

[Speaker 1]
I record all of my talks: video recording, audio recording, the slides, etc. Not because I'm a huge egotist, but because it seems to me that if I'm going to go through all that work to prepare a talk, it'd be nice if people got to see it after. I don't know why more people don't do that. In fact, to me, it's irrational. Seasoned academics spend a year preparing their work.

[Speaker 2]
See?

[Speaker 1]
Well, they go to a conference, they present to a room of two, four, six, eight, nine people, and then the content vanishes into the ether. Makes no sense to me. So I record.

[Speaker 2]
All right.

[Speaker 1]
Okay, so: descriptive AI. There are all kinds of ways we use AI today to figure out what happened: systems analysis, checking for institutional compliance. All of these things are already in your systems; your university software is permeated with it, trust me. What kind of thing happened? Audio and video transcription. The audio recorder I use, the Google one, will also produce a transcription, meaning I don't have to type out my whole talk. Spam detection: Akismet does AI spam detection. Face detection kind of works, kind of doesn't anymore. Supporting special needs: I was at a talk earlier this morning on accessibility, all the stuff that needs to be done for accessibility. Really important, vital stuff, and hugely expensive, especially if we do it centrally and try to make it work for everyone. Stuff that AI can help with. Even minimally, the transcription: for people who are hard of hearing, being able to see the transcription of my voice is life-changing. Automated grading, again, a whole talk there. It works in tests; it's shown to be more consistent than human grading. Because human graders, when you grade, you probably don't use the same standards with the first paper on the top of the pile and paper number 114 at the bottom of the pile. Your grading method is different. At least mine was, not because I wanted it to be, but because I got tired. Resource planning, learning design, user testing, identifying students at risk of failing. All of this is being used as well. I mention these because the

[Speaker 1]
discussion, especially online, especially in our community, about AI over the last couple of years has focused exclusively on generative AI and why it's evil. There's just one application, and mostly what people are talking about are misapplications of that one application, and I'll talk a bit about that.

I'd love one day to just be able to debate a lot of the AI skeptics, and we'll just go through the issues one by one and see where it lands. And I'm not paid by AI companies or anything; I have zero stock in AI companies. I'm an anti-capitalist, anti-corporate non-shill. Trust me. All right. What is artificial intelligence?

[Speaker 1]
All of that stuff works on the same basic principles. Now, I've left out a bunch of slides that talk about neural networks: input processing, connections, connection weights, plasticity of the network, biases and sensitivities of individual nodes. I've left all of that out. All of that is known. It's all simple, basic mathematics. Quite literally, and if you read those papers you see that they're just adding stuff up. And if they get really fancy, they'll multiply some stuff, and then they'll add it up. That's it. That's all this is. AI is basic, simple mathematics, and it does basic, simple things. That's why it's so funny when people talk about AI as if it could go away. It's not going to go away, because now that people grasp how it works, anybody can create their own AI. If you have buckets of money, you can create really big, really powerful AI, but anybody can create AI. I could create it on my computer; we could sit down, I could write out the software, you could see this: oh yeah, add this, add this. Um, I have a few games.
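
To make the "just adding stuff up" point concrete, here is a minimal sketch of a single neural-network node in Python. Everything in it (the function name, the hand-picked weights) is illustrative, not from any real AI library: inputs are multiplied by connection weights, summed with a bias, and compared to a threshold.

```python
def neuron(inputs, weights, bias):
    """One node: multiply inputs by connection weights, add them up."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # the node fires (1) or stays quiet (0)

# Hand-picked weights: this node fires only when both inputs are on,
# so multiply-and-add is already enough to compute logical AND.
and_weights = [1.0, 1.0]
and_bias = -1.5

print(neuron([1, 1], and_weights, and_bias))  # 1
print(neuron([1, 0], and_weights, and_bias))  # 0
```

Training a network is then just the process of nudging `weights` and `bias` until outputs match examples; nothing beyond arithmetic is involved.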

[Speaker 1]
The completion game is a fun one. This is the one where you autocomplete, right? So, you know: "ham and...?" "Eggs!" Everybody says eggs. I did this with a Mexican audience the other day; they had not come up with eggs. It's kind of fun. I did "Justin..."

See, it's interesting, right? Justin Timberlake, Justin Bieber, Justin Trudeau, not so much anymore. All I got from Mexico was Justin Bieber. That was it. And from one person I didn't get anything. That's the neat thing: it's completion, but it's not automatic. It's not one rule for completion, right? When you do autocompletion, it depends on what you've already learned, what your background is, what your experiences are. Recognition is simply presenting a phenomenon to our neural network. Depending on how the neurons in the network are connected, when you present a phenomenon, certain neurons will light up. The signal will come out the other end, and that signal depends on how those neurons are connected.

[Speaker 1]
And the way those neurons are connected is, like I said, basic math: addition and multiplication. And so you get Justin Bieber, Justin Trudeau, or whatever; I don't know. It's funny, because I've heard critics of AI say, really, you shouldn't make too much of it; it's nothing but completion, nothing but pattern recognition. Yeah. That's how the human brain works. Remember the repositioning of humans, right? We aren't these great cognitive things in black boxes that really can't be described. We are pattern matchers. Our brain is a pattern-matching machine; that's all it does.

Chomsky would disagree. Jerry Fodor would disagree. A bunch of others would disagree, but the evidence is overwhelmingly on the side of pattern matching. Educators know all this. Sesame Street: which of these things doesn't belong? Pattern matching, right? What number is this? How do we recognize numbers? And that's, you know, the highest reasoning, right: mathematics, algebra, logic. Well, here's the original image. It's normalized, sliced, diced, segmented. Here's our feature detection: we'll just call the features x1, x2, etc. The model, which is the previously existing state of connections, pops out the digit. We recognize this as a three. That's how it works. And then later on, we recognize 3 x 3 = 9; we recognize each of those symbols. And the interesting thing about what ChatGPT did is that it allowed us to recognize bigger things. The famous paper "Attention Is All You Need": it recognizes bigger things. It pays attention to more things now. With deference to the experts, I'm grossly oversimplifying, but that's basically it. So far, so good?
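
The digit-recognition pipeline described above (feature values x1, x2, ... passed through a previously learned set of connection weights, out pops the digit) can be sketched like this. The feature values and weights are invented for illustration; a real model learns them from examples.

```python
def recognize(features, model):
    """Score each digit as a weighted sum of its features; return the best."""
    scores = {digit: sum(x * w for x, w in zip(features, weights))
              for digit, weights in model.items()}
    return max(scores, key=scores.get)

# Pretend features extracted from a normalized, segmented image of a "3"
features = [0.9, 0.1, 0.8]

# Pretend previously learned connection weights for two candidate digits
model = {3: [1.0, -0.5, 1.0], 8: [0.2, 1.0, 0.2]}

print(recognize(features, model))  # 3
```

The "model" here really is nothing but a stored state of connections; recognition is multiply, add, and pick the biggest number.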

You can throw things at me if you disagree. It's okay. So? Um.

Paying attention to bigger things, in our world of AI, is known as context. That's what they call it. It's a stupid name, but it's context.

[Speaker 1]
There really isn't a good analogy for it in cognitive psychology. The pseudoscience of short-term memory, long-term memory, processing, encoding, etc. doesn't really fit neatly, because they made it up. But context is the stuff that you can put into the content that you're presenting to your neural network, along with any specific request. So, one of the things they started to do almost immediately is create what are called system prompts. You know how ChatGPT works, right? You type something in, a question: what is the capital of France? That's a prompt. Before GPT ever sees your prompt, it sees the system prompt, because it can handle long inputs. And the system prompt is a whole bunch of instructions telling it how to behave when you ask your question. It's still only fancy autocomplete.

[Speaker 1]
That's what's really neat about this, right? When you have a large enough neural network, you can give a really complex input, and it can give you really complex output. Still, nothing more complicated than Justin Bieber; it's just adding all of this stuff up. The mechanics are exactly the same. So we have system prompts.

[Speaker 1]
Remember when it was Google's Gemini, or Google's image generator or something, that gave us, what was it, I think it was Nazis who were Black or something? I'm not sure what it was; it was something really inappropriate. Asians as well, maybe. It's well known they tended not to be. But what they were doing, quite laudably, was trying to get broader representation in their visual images; it just did not fit for all of them, despite the other context. So people made a big deal out of it. The U.S. right hated it. Well, that was system prompting that created that effect. More the system prompting than the data that went in; the system prompting overruled the data that went in. Think about that; you need to think about how to understand what's happening here, and I want to reframe you again. Okay. All the training that you do on a large neural network. You remember how they say, oh, it's stealing all the content and putting it into the neural net? You've heard that, right? All that training, okay, is actually completely ignoring the content. This is what kills me, right? It doesn't matter what the content was. We kind of like to get good content because the output will be grammatically more accurate, but ChatGPT does not care what it says. All ChatGPT is interested in is word order. That's all it cares about. That's why we need the system prompts, to give it stuff it actually does care about. But in creating the network, all it cares about is word order.

[Speaker 1]
And, again, I'm oversimplifying when I say all it cares about is word order. But I'm not making a category mistake here. I'm not saying it really cares about the meaning of the words; it does not care at all about the meaning of the words. It's just word order, which can be interpreted in multiple ways, right: caring about word order, processing it in different ways, subjecting it to different algorithms, different systems of recognition. But, you know, you feed it War and Peace. It doesn't know that this was a book about the Napoleonic Wars. It doesn't know that. What it does know is that the words "and then" appeared in sequence 485 times, and, more significantly, many more times than the sequence "and but" occurred. So, if you typed in "and" as a prompt, you would get "and then" as an output, not "and but", because nobody writes "and but". That's why it's strange to me, literally strange, to hear people say ChatGPT copied all this content. It did not. It extracted data about the content while ignoring the content. That's why there's no copyright issue. The lawsuits and the rest of it will be political; it's very profitable for some people to say, well, there really is a copyright issue. But strictly speaking, it did not copy the content. All right.
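
The word-order point can be demonstrated with a toy bigram counter: training tallies which words follow which, while ignoring entirely what the text is about. This is a deliberately tiny stand-in for what large models do at scale.

```python
from collections import Counter

def bigram_counts(text):
    """Tally every pair of adjacent words: pure word order, no meaning."""
    words = text.lower().split()
    return Counter(zip(words, words[1:]))

def complete(word, counts):
    """Autocomplete: the word most often seen right after `word`."""
    followers = Counter({b: n for (a, b), n in counts.items() if a == word})
    return followers.most_common(1)[0][0]

# A toy 'corpus'; the counter neither knows nor cares what it is about
corpus = ("and then the army marched and then the army rested "
          "and then the war ended")
counts = bigram_counts(corpus)

print(complete("and", counts))  # then
```

Nothing of the corpus survives except statistics about sequence, which is the sense in which the content itself is ignored.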

[Speaker 1]
Moving on. Advances in artificial intelligence: RAG, MCP, A2A. How many of you have heard of these? Okay. The first one is RAG. It's short for, and it's a terrible acronym, retrieval-augmented generation. So, remember the context window, where our prompt goes, and the system prompt. Well, we can actually make that really big. And so what people do is take entire documents and put them in that context window. You can test it out for yourself in ChatGPT: you upload a file. Or you can set up a shell application that accesses a database, puts that in the prompt window, before the system prompt, before the user prompt, and that becomes part of the context window. Retrieval-augmented generation: we are generating content just like before, but now it's being augmented by the retrieved resource. What's neat about retrieval-augmented generation is that our AI now has a source of truth. It did not have a source of truth before that, because we fed it War and Peace but it didn't analyze War and Peace for the content. All it had was word order. It learned the grammar and the syntax, not the meaning, right? That's what it took from War and Peace. Now we're giving it meaning. Now we're giving it a set of facts. So, how closely does it align to these facts?
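
A minimal sketch of the retrieval-augmented flow just described: find the stored document most relevant to the question and place it in the context ahead of the prompt. The keyword-overlap retrieval here is a stand-in I made up for illustration; real systems typically use vector embeddings.

```python
def retrieve(question, documents):
    """Naive retrieval: the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question, documents):
    """Put the retrieved 'source of truth' into the context ahead of the prompt."""
    return f"Use this source:\n{retrieve(question, documents)}\n\nQuestion: {question}"

docs = [
    "The capital of France is Paris.",
    "War and Peace is a novel about the Napoleonic Wars.",
]
prompt = build_prompt("What is the capital of France?", docs)
print("Paris" in prompt)  # True
```

Generation then proceeds exactly as before; only the input has been augmented, which is why the answer can now align with the supplied facts.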

[Speaker 1]
Pretty closely, if they're good facts. The best evidence for that is how well GPT and similar programs do at automated software generation. Developers swear by it. Now pretty much everyone who writes software, I mean everyone who writes software, uses ChatGPT or something more advanced.

[Speaker 1]
Because what happened is, they used as their resource the manual for the software language, a whole bunch of facts about software. And then you ask it: how do I do such and such, or what is the meaning of such and such error, or please look at this code and find the bug. I've done all of those. It compares what you put in your prompt with the resource and comes out with an answer. Actually pretty good. My biggest problem with ChatGPT for writing software isn't the accuracy; it's how much output it will give me. They cap it at a certain amount, about 1,200 words equivalent, at least in my experience. If I paid them more, would I get more? Who knows. So that's the first of them. Interesting. Useful.

Number two is called model context protocol.

I've given you enough information that you could probably figure this out for yourself, but I'll waste some time and tell you anyways. Oh, I keep bumping it. No, there we go. Change the frame. All right. So, model context protocol. What we're going to do is give our AI system a way of putting more stuff into the context. The context, remember, is where the prompt goes, where the system prompt goes, or where the documents in RAG go. Now what we've got is an AI host, and we'll write what's called an MCP server and connect it to a database. So, what happens is, you have to tell the AI that this exists. That's actually the hard part.

The server will receive a request in a certain format from the AI. Now, the AI knows what the format is. Why? Because we've given it the manual. We've actually described our MCP server in exquisite detail to the AI, so the AI knows how to ask a question of the server. The server connects to a data source: files, websites, databases, travel agencies, you name it, any online service you can think of. The data source returns the data. The server returns the response to the AI. The AI does whatever it wants to do with that. We have just connected ChatGPT to all of human knowledge. Well, all of human knowledge that's stored somewhere.
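
The request/response loop just described can be sketched as follows. The method and field names are invented for illustration; the real Model Context Protocol defines its own JSON-RPC message formats. This toy keeps only the shape of the idea: the AI is given a description of the server, sends a request in the agreed format, and the server consults a data source and returns a response.

```python
# A made-up data source standing in for files, websites, or databases
DATA_SOURCE = {"capital/france": "Paris", "capital/canada": "Ottawa"}

def describe_server():
    """The 'manual' given to the AI so it knows how to format a request."""
    return {"tool": "lookup", "params": ["key"], "returns": "string"}

def handle_request(request):
    """The server: route a well-formed request to the data source."""
    if request.get("tool") != "lookup":
        return {"error": "unknown tool"}
    value = DATA_SOURCE.get(request.get("params", {}).get("key"))
    return {"result": value} if value else {"error": "not found"}

# The AI, having read describe_server(), sends a request in the agreed format
response = handle_request({"tool": "lookup", "params": {"key": "capital/france"}})
print(response)  # {'result': 'Paris'}
```

Whatever comes back is dropped into the AI's context, at which point generation works on it like any other text.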

Over the last four or five months, there's been a land rush in writing MCP servers. There are now, I don't know, I'd have to estimate, 40,000 of them.

[Speaker 1]
So it's a bit of a madness, right? But that's okay. Personally, I think this is a transitory step. Because why should you have to write a server when you have an application that can write servers? That doesn't make sense to me. So eventually we'll get to the point where all we have to do is expose all the information about the API to the AI, and we'll be able to connect.

[Speaker 1]
But right now, the server does the middle work, because it's still a bit tricky.

So, do AIs hallucinate? Of course they do. Humans hallucinate all the time. We dream pretty much every night; we make up stuff in our heads. If somebody asks us where we were last night, and we don't want to admit where we were last night, we lie. And we lie confidently. If you asked me a question about AI and I was of a certain ethical persuasion, which I'm not, I would give you a response that sounded authoritative but was pure gibberish.

[Speaker 1]
All right, lots of presenters do that. I won't. I'll just say "I don't know", because there are lots of things I don't know. So, that's what AI does. Why? Because it's the same kind of thing. Humans and AIs are the same kind of thing. Humans hallucinate; why wouldn't you expect AIs to?

[Speaker 3]
Um, I kind of don't like the hallucination phrase, probably because it implies false experience, and GPT doesn't have experiences in the sense that a human has experiences. What do you think about that?

[Speaker 1]
That's that reframing I was just talking about, right?

I wrote a paper once simply called "Consciousness", where I explain my view on this at more length. But what is an experience? Strictly speaking, it's the activation of a certain set of cells, depending on the experience, in the visual cortex or the auditory cortex. Not at the outer edges of it; we don't detect that. But far enough in, so that the loop-back neurons can detect that and feed it back in. Does that make sense?

[Speaker 3]
Well, I guess that's a bit more sort of below the surface than I was thinking. I was thinking more phenomenologically.

[Speaker 1]
Yeah, yeah,

[Speaker 3]
It's just not having an experience in that sense.

[Speaker 1]
Well, what is an experience, over and above what I just said? That's my point.

[Speaker 3]
Um?

[Speaker 1]
Seriously. Think about it, right?

By analogy: suppose you were asking, what is fire? And I said, fire is the plasma state of the oxidation of carbon atoms. And suppose you said, no, I didn't want the physical description, I want to know what fire really is.

[Speaker 1]
Well, fire just is the plasma state of the oxidation of carbon atoms, nothing more than that. That's my explanation, and that's what experience is. I'm a reasonable person; I'm open to alternative stories, provided that they make sense. There is no alternative story. People talk about phenomenology, which, you know, we can go all Husserl and the rest of it and talk about the phenomenal experience. But that's still like asking, what is fire, right? What is a phenomenal experience? It is the activation of neural cells in the visual cortex, a couple of layers in. That's what it is. And you say, well, what about the experience of that? Those cells are connected to an additional layer that records the existence of those other neural activations, and so on, layer by layer by layer. And some of these layers loop back. I forget the exact term, but it's a known design. Convolutional neural networks? I might be wrong on that, because I'm not good with proper names. But anyhow, that's the explanation. Okay, so we come back: can a computer have a conscious experience? Well, right now, no. But that's only because we've got 50 billion neurons and they've got one billion, right? They have the conscious experience of something that is less than a worm. So, yeah, okay. But we don't deny that a worm could have conscious experience. It's just that we don't think it's anything remotely like ours. And how could it be?

[Speaker 1]
Right, they're just working with so much less brain. Elephants, octopi, things like that, are much more likely to, right? Still not working with the same brain material, but they're getting close. My cat, definitely. Oh geez, my cat does all of the stuff that somebody with conscious experience does.

[Speaker 1]
She knows when it's food time. She tries to deceive me.

[Speaker 1]
And pretend that she's not paying attention: all the intentional states. She doesn't have a single word of vocabulary, not an English word anyways. But, you know. So: non-humans can have conscious experience; non-living beings can have conscious experience. So we come back to: you don't like the vocabulary? Cool, we can use a different vocabulary. We can go, as they say, eliminativist on that side. But then you wouldn't like me going eliminativist with humans. So I figure I can use the same logic when I'm talking about machines. Intentional states, damn it.

Intentional stance, right? We're taking an intentional stance, because that's how we humans make sense of things that are really complex. We anthropomorphize all the time. Nobody complains when I say, you know, the weather wants to kill me, when it's minus 40. The weather is not actually an intentional being, but nobody complains when I say that, because everybody knows what I mean. So it's the same sort of argument.

It's a good point, though, because it gets raised a ton, doesn't it? Absolutely. How am I doing on time?

[Speaker 4]
We're at almost five after three, so ten more minutes.

[Speaker 1]
Ten more minutes, and 80 more slides. Yeah, I know. I'll be late for my own talk; wouldn't be the first time. All right, I didn't mention A2A. A2A is the agent-to-agent protocol. Some of those MCP servers can act as proxies on behalf of an AI. When they do that, it's called an agent. When an agent talks to another agent, they have their own protocol: the agent-to-agent protocol. So these AIs, people talk about them in the language of there being only one AI, when in fact there are dozens, hundreds, thousands. And they will talk to each other using the A2A protocol. It's really interesting. Maybe you've seen this?
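
A toy illustration of the agent-to-agent hand-off: one agent receives a task outside its skills and delegates it to a peer that can handle it. All names and message fields here are invented; the real A2A protocol defines its own schemas for describing and addressing agents.

```python
class Agent:
    """A toy agent: it has skills, and peers it can delegate to."""

    def __init__(self, name, skills, peers=None):
        self.name, self.skills, self.peers = name, skills, peers or []

    def handle(self, task):
        if task["skill"] in self.skills:          # can do it itself
            return {"from": self.name, "status": "done", "task": task["skill"]}
        for peer in self.peers:                   # otherwise, hand it off
            if task["skill"] in peer.skills:
                return peer.handle(task)
        return {"from": self.name, "status": "failed", "task": task["skill"]}

booking = Agent("booking-agent", {"book-flight"})
travel = Agent("travel-agent", {"plan-trip"}, peers=[booking])

# The travel agent can't book flights itself, so the task is delegated
print(travel.handle({"skill": "book-flight"})["from"])  # booking-agent
```

The point of a shared protocol is exactly this hand-off: no single AI needs to do everything, because tasks flow between many specialized ones.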

Oh, come on, why is my mouse not working?

Touch screen for the win. Take that, Mac people. Oh, geez.

[Speaker 4]
German.

[Speaker 1]
Yeah, that's all generated by AI, from scratch, from a text-based description.

[Speaker 1]
As the keynote speaker said, we are moving from text-based AI into multimodal AI. Veo is a pretty good example of that.

[Speaker 1]
People haven't thought this through, certainly not the AI critics. A few years from now, nobody's going to care that AI read the entire corpus of human textual output.

[Speaker 1]
Nobody will care. All of these copyright suits will be moot, because AIs, or their proxies, will be out there with cameras, like little Google Street View cars, collecting experience. They'll be collecting images from the streets. They'll see what weather does. They'll look at animals. They'll go to zoos and observe behaviors. They will infer in the same way that humans infer.

[Speaker 1]
As David Hume would say, through the association of events, where one follows another: principles like cause and effect, principles like scientific explanation. They will come up with these on their own, the same way they were able to deduce the rules and strategies of games like chess and Go by seeing only examples of the games and never being told the rules. That's going to happen. It's in the process of happening. There's lots of stuff out there. It's funny, because every time one of these things comes on TV, my wife says, ah, they're just trying to get your data, they're just trying to get your data. The little thing where you put your fingers on to measure your heartbeat: it's a system for determining what normal heartbeats look like. Of course it is. They charge you ninety dollars for it, plus a subscription, probably. That's what kills me. Ride with GPS heat maps: where do people go cycling? We're giving AIs data. We're giving them non-text-based data, right? This data will eventually be a truth layer. Bad name, but I'll live with it. Right now we have the language layer, which is text generation or video generation. Next will be the truth layer, where it uses all of these inferences that it got from actual data in the real world to create better, more realistic responses. And what's interesting is that it sees things, literally, not just as a euphemism. Photons go through the detectors and activate neural cells, and it sees things. And it sees things we don't see. It sees things in ways that we can't see. It sees many, many more things than humans see. When humans see things, the first few layers of our visual cortex actually filter out most of what we see.

[Speaker 1]
They can do that too, but they'll filter it out in different ways, because they're more adaptable. So they'll be able to come up with causal principles that we never even considered.

[Speaker 2]
That's a bit speculative, though.

[Speaker 1]
The multimodal input is not speculative; it is happening.

[Speaker 4]
Sorry, man.

[Speaker 1]
So, when we think about AI in education, we have to be thinking about more than advances in online instruction, more than adaptive learning, more than open content, more than automated assessment.

[Speaker 1]
I have a whole spiel, and you got some of it earlier, about how the whole concept of open educational resources is not going to make sense anymore. Because why would we need to create this content ahead of time, in the hope that a teacher will reuse it to teach a class, when we won't have classes, and when we can create better, more authoritative content, once we get all this experience built in, automatically, for each individual person, at their point of need? Makes no sense, right? So all the discussions about licensing, which are terribly frustrating, won't matter anymore. What will really matter is access to data, obviously. And part of what I say, in the context of this discussion, is that if we want open learning, if we want open AI, not the company but the concept, then we have to have open data and open content.

[Speaker 1]
Everybody in academia, and especially in the open education community, who is arguing against the use of content by AI is arguing against open learning, because they're undermining the possibility of open AI. The concept, not the company.

I don't know how to say it more plainly than that. Right now, AI is the domain of the big evil corporations. And I'm saying that non-sarcastically: they're big, they're evil, they're corporations. You know, they're literally not allowed to follow the principle "don't be evil"; that goes against their fiduciary duty, right? But AI is mathematics. There can be open AI. With enough processing power, there can be big open AI. There's nothing complex about this. The only complex thing right now is amassing the resources. But if anything like Moore's Law holds, and it kind of still holds, we're seeing the limits, but it kind of still holds, then people like you and I can have our own individual AI systems. But if we can't have open AI, we can't have our own individual AI systems. We will always be paying an AI tax to Google, Microsoft, Meta, Anthropic, and the rest. So?

[Speaker 1]
I mean, I know they're well intentioned. I know that they want to respect the rights of content creators, but they are undermining open education. And anyhow, education won't be about content creation anyways. We already know this; we're just having a hard time accepting it. But we know it. You can see the thread in this conference. You can see the thread in the CNIE conference I just went to. AI can do the professor's work of preparing the content, delivering the content, answering the questions that come up. It can't form a nice personal relationship with the student, not yet anyways. Although, I don't know; some of the voice-based assistants are pretty nice to talk to. I'm not kidding. The students can use AI, and are using AI, to create the content that they hand in for marking. Which is okay, because the professor is using AI to grade that content. And then various other AI systems are analyzing all of these interactions and trying to make predictions about whether the student will pass or fail. And meanwhile, the company that might employ the student is ignoring this entire process and looking at their social media to see what knowledge they actually demonstrate on a day-to-day basis. And they're using the millions, hundreds of millions, of bytes available on Instagram, Twitter, Facebook, whatever. And people will contribute, because that's how you get a job. In slogan form: the credential of the future will be a job offer.

Okay. I know I'm almost out of time.

How do we understand ourselves in this world? We are interactive, and we're going to see this as more and more important. We are interactive. The only way to compete at any level with machines is to have a society that is smarter than the machine: humans interacting with each other.

[Speaker 1]
And it also means interacting with the AI. There's a term out there already in wide use, you can look it up: human-AI teaming, where humans work in collaboration with AIs to do whatever. It already exists. There's a paper I found a while back.

It promoted the concept of AI in the loop, right? Everybody's so concerned, and we can talk about this for ages, about human in the loop. But really, we should be thinking about AI in the loop. Why? Because who is it that decides what needs to be done? My comments about agentic AI notwithstanding: it's humans. And usually the decisions about what needs to be done are the results of interactions with each other: social processes, political processes. Bringing AI into that allows us to draw from it. So it's not about "can we control AI", as though it were something operating independently of ourselves. It's about how we can make use of AI to further the objectives we already have. And again, remember: we'll have AI on this. We'll have something as powerful as ChatGPT on this, and it'll cost us pennies to use.

[Speaker 4]
I know this is a mess.

[Speaker 1]
If you follow that presentation, you can see how I built it. Think about how we can use AI to do the kinds of things that we want to do. Now, for various reasons not discussed, there might not be institutions, but suppose there are. We would probably want to support staff diversity at the institution, the whole process of how we do that. You look at the whole workflow: recruitment, connecting, enrolling, adopting, assessing strengths, engaging them, assisting them, metrics for employment, something that I can't see down there, recommending compensation. AI fits in. Now, the six lines here in this presentation are six lines of what I call critical literacies: the types of pattern matching we can use AI with to augment our thinking. A totally different discussion, but it's basically syntax; semantics; pragmatics in the sense of use; pragmatics in the sense of context; cognition, that is, induction, deduction, abduction, explanation, definition; and then change. Five or six basic types of pattern. Apply that matrix to the different systems, and we have all of these mechanisms we can use AI with to support staff diversity in the institution. Actual, real staff diversity, not some rule or set of principles that somebody's put up on a wall that's ignored by everybody from then on. And I take that one personally, because when I was in graduate school in the 1980s, they were talking about equity in hiring. And here we are in 2025, and they're still working on equity in hiring. They still haven't gotten it down yet. That just kills me, right? Do it right. Get real equity in hiring. Sorry. How do we understand learning? This, again, is another presentation. Most AI people are talking about personalized instruction, a ridiculous sham that you should not believe in.

[Speaker 1]
Critics of using AI to personalize instruction are exactly right, not because AI is a flimflam, but because personalized instruction is a flimflam.

It was, um, do I have it here? Oh, I don't have it here; it's in a later slide. When we look at how people actually learn, they generally learn in the form of personal learning. It's based on what they're trying to do, what objective they're trying to meet, and then the learning is defined by that. So we start with a desired state, we practice, we get results, we get feedback, and the cycle goes on. We're not trying to reach some sort of ideal state of contentful awareness.

Am I done?

[Speaker 4]
Done.

[Speaker 1]
When other people walk in, that's a sure sign. I wish. So, yeah, you get the idea, right? Long story short: pull together all this information about communities and think about them not as magical cognition but as networks of interacting people, forming a society that creates intelligence in the same way that a human brain does and ChatGPT does. If we reframe our understanding of ourselves that way, we have some kind of way of looking at the future. That's all I've got. The next talk that I'm doing is the mechanics: the stuff that I've built trying to lead us to do that. Thank you, everyone!

[Speaker 1]
Now, stop recording!

Oh, I wonder when it stopped?

[Speaker 5]
Michael and I are trading. Michael's come down here, and then... Okay, am I in this same room? You're in the same room. Oh, that's really handy. That is.

 

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2025
Last Updated: Aug 28, 2025 9:15 p.m.

Creative Commons License.