Stephen Downes

Knowledge, Learning, Community
AI Applications: A Taxonomy


Automated transcription from an audio recording by Google Gemini
 
 


Introduction

Hi everyone, I'm Stephen Downes. I work with the National Research Council of Canada. So, I'm in my office in Ottawa this morning, where it's raining hard. But before I do anything else... Go Jays!

Very important to get that in. Um, so, yeah. Um, I'm presenting on a taxonomy of AI... a taxonomy for AI applications. I actually developed this a few years ago, um, and I've used it since. Um, I did it in the context of a course I did, um, called "Ethics, Analytics, and the Duty of Care."

Um, and you can actually find that course at the URL that I just put into the chat area. And this comes from that course.

Um, so let me share the slides. So I'm going to do about 20 minutes on this. Uh, in the course, I did a full week. So, uh, I did offer to do the week-long version, but uh, for some reason, that wasn't wanted. All right, I'm just kidding.

So you should be seeing the screen now. Uh, I can see the chat. So if you have any questions or comments as we go along, as I present this, please do jump in. I warmly welcome any sort of questions, comments, queries, huge objections, etc. It's not an issue for me.

 

Terminological Note: AI vs. Analytics

 

So, just a terminological note: the course, as I said, was called "Ethics, Analytics, and the Duty of Care." I use the word analytics interchangeably with the term artificial intelligence, or AI, throughout. There are subtle distinctions in meaning that aren't really that important, but that's what I do. So anytime I say analytics, think AI; anytime I say AI, think analytics, and you'll be fine.


 

Why Create AI?

 

So, um, where I begin is why people are creating analytics in the first place, why people are creating AI. And there's a whole bunch of reasons. I mean, we didn't develop all of this technology just to have a computer create cat pictures for us, as fun as that is.

Um, you know, it's a response to economic and competitive pressures. It gives us greater agility, good practice in modern enterprise management, intelligent personalized services. I just finished doing a talk right before this one on using artificial intelligence in federated learning networks, which was fun. Visualizing patterns, trends, etc.

Uh, people sometimes talk of AI as technology in search of an application, less so now than when I first put this presentation together. But it's quite the opposite: there are tons of reasons why we want to use AI. And the reason why I focused on analytics was that a lot of the ethics of AI is risk-based and fear-based. I'm sure you'll talk about that more. I wanted to focus on the other side as well: why are we doing this stuff in the first place? There's a ton of benefits that we get from AI, and that's what this taxonomy is intended to convey.

 

Traditional Distinctions in AI

 

Some of the more traditional distinctions among types of analytics, um, you'll probably read about these or have read about these: supervised and unsupervised learning, semi-supervised learning, reinforcement learning. These perform some of the major tasks, like clustering, regression, and association, which are associated especially with earlier machine learning but are still performed by modern AI systems.

Um, so again, modern analytics has a foundation in machine learning, but more recently it's dependent on neural networks. And the major applications, I draw them out here: classification, regression, clustering, feature extraction, rule learning, prediction, and these days we could also add generation and more: making cat pictures.

But that doesn't really get at the benefits of AI or analytics.

Classification, uh, it's just, you know, we can do yes/no, we can do multi-class, etc. Regression is basically finding a line in data. Clustering is grouping data points into distinct areas. Feature extraction is pulling out of the data a set of unique, relevant, or salient features. Again, these are interesting (especially feature extraction; I could go on all day about feature extraction), but they don't tell us why we're doing it.
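
To make those four tasks concrete, here is a minimal sketch in Python using scikit-learn on synthetic data. This is illustrative only; the particular estimators are just common examples of each task, not anything prescribed in the talk.

```python
# Illustrative only: the four classic analytics tasks on synthetic data.
import numpy as np
from sklearn.datasets import make_classification, make_blobs
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Classification: yes/no (or multi-class) labels for each data point.
classifier = LogisticRegression(max_iter=1000).fit(X, y)

# Regression: "finding a line in data" -- fit a linear trend to noisy points.
rng = np.random.default_rng(0)
xs = rng.uniform(0, 10, size=(100, 1))
trend = LinearRegression().fit(xs, 3 * xs.ravel() + rng.normal(size=100))

# Clustering: group unlabelled data points into distinct areas.
points, _ = make_blobs(n_samples=150, centers=3, random_state=0)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(points)

# Feature extraction: pull a small set of salient features out of the data.
features = PCA(n_components=2).fit_transform(X)

print(classifier.score(X, y), trend.coef_, np.bincount(clusters), features.shape)
```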

So, again, people don't talk this way so much anymore, but historically there have been two major approaches. The older approach is machine learning, which is more algorithmic, more statistics-based. You know, when people talk about AI as being a stochastic parrot, they're kind of thinking more of machine learning than they are of the other type. The newer approach is known as deep learning, and it's also known as neural-network learning. And the distinction between the two really is that in a neural network, or deep learning, there is (you can see it in the little diagram there; I'm pointing at it, but you can't really see) an intermediate layer of data values, or neurons, or entities, between the input and the output. That's what makes it "deep," right? Uh, but again, who cares?
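
Here is a bare-bones illustration of that intermediate layer, as a NumPy forward pass; the layer sizes and random weights are arbitrary, purely to show the structure.

```python
# Illustrative only: a forward pass with one hidden ("intermediate") layer.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)           # input layer: 4 values

W1 = rng.normal(size=(8, 4))     # weights into the hidden layer
W2 = rng.normal(size=(2, 8))     # weights from the hidden layer to the output

hidden = np.maximum(0, W1 @ x)   # 8 hidden "neurons" (ReLU activation)
output = W2 @ hidden             # output layer: 2 values

# That hidden layer between input and output is what makes the network
# "deep"; a plain statistical model maps input to output directly.
print(hidden.shape, output.shape)
```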


 

A Taxonomy of AI Applications

 

So, I came up with this taxonomy. Um, this is the clip-it, save-it, and take-it-home version. Uh, the URL there is at the lower right-hand corner. Um, and, uh, the slides are all available on the link that you saw on the first page. I'll just zip back to give you that, because I forgot to do that. Hang on.

Uh, all my slides, and the audio of this presentation as well, and video if there is any, will be available at this URL here: www.downes.ca/presentation/597. Yeah, I know I'm getting old. I've been doing this for too long. 597.

All right. Feature extraction. Love feature extraction. Okay, deep learning. All right. So here's the taxonomy. That's where you can download it from.

So, I actually drew on the literature. At the time I was creating this course, there was an existing taxonomy that captured the first four of these: descriptive, diagnostic, predictive, and prescriptive. And you see those four all through the literature.

Um, so, when I created this presentation, that'd be about four years ago now, I added two categories, because it was very obvious at the time that the four that were listed just weren't going to do the job. So I've got six major categories: descriptive, diagnostic, predictive, prescriptive, generative, and deontic.

And then I gave them each a title, or a caption, just to give you a sense of what each is about.

  • So, Descriptive is answering the question, "What happened?"

  • Diagnostic is answering the question, "What kind of thing happened?" You see the distinction, right? You see where categorization is going to play a role there.

  • Predictive is answering the question, "What will happen?" Super useful. There was one analytics project that my colleagues worked on (not me) to predict part failure on airplanes, you know, passenger jets. Really useful. We want to know about these things ahead of time, not after they happen.

  • Prescriptive: Um, "How do we make it happen?" Um, you know, so, learning recommendations, placement matching, hiring—this is the use of AI in these areas.

  • Generative, which is the first of the new categories I added—now all over the place, right, with large language models, um, which are almost exclusively deep learning or neural-net based—"Make something new."

  • And then finally, and this is the type of AI application that people aren't talking about yet, but will, because it's going to come: Deontic, which answers the question, "What should happen?" And here we have AI telling us things like: What are community standards? What things are bad? How do you define fairness? AI-generated changes to the law, AI content moderation. And I was listening to an episode of a podcast called "Intelligent Machines" about a company called Hume AI (H-U-M-E, A-I), for easing distress. So it's AI that relates to, and is based on, first and foremost, human emotions. Really interesting stuff.
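
For the clip-it-and-save-it version, the whole taxonomy fits in a few lines of code; here is a minimal sketch (the Python encoding is mine, but the categories and their questions are from the slide):

```python
# The six categories of the taxonomy, each paired with the question it answers.
from enum import Enum

class AIApplication(Enum):
    DESCRIPTIVE = "What happened?"
    DIAGNOSTIC = "What kind of thing happened?"
    PREDICTIVE = "What will happen?"
    PRESCRIPTIVE = "How do we make it happen?"
    GENERATIVE = "Make something new."
    DEONTIC = "What should happen?"

for category in AIApplication:
    print(f"{category.name.title():<13} {category.value}")
```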

 

Breakdown of Categories

 

So, I've got about four minutes left, give or take. But there's a ton more stuff if you're interested in following this up further.

So, descriptive analytics. As I say: description, detection, reporting, mechanisms to pull data from multiple sources (which is what a personal learning environment does), filter it, combine it, aggregate it, mine it, etc. So this ethics.mooc.ca file here is a presentation that actually covers this whole area. And, whoops, that's not it. Oh, yeah, no, that is it. Sorry. Is it? Where is it? I'm lost. Uh, oh yeah, it is. Okay. It just goes through, so you can follow along, and there are links describing each of these different applications.

Diagnostic analytics. Again, I have a link from the ethics course, all about diagnostic analytics. This is really the use of pattern detection to find patterns and trends. It's really important in the sense that AI is able to detect patterns in data that we don't have a name for and that we might not have considered in the past. You know, that's one of the major distinctions between supervised and unsupervised learning: in supervised learning, we're asking AI to fit the categories we've already predefined, but AI, of course, is capable... Oh, I'm sorry. Well, that link will be available. It's just not available yet. I'm sorry, Danielle.

The next category: predictive analytics. And again, here's how it works, right? It's basically induction in action. You have historical data, you run it through the algorithms, and that creates a model. Those models are the things being used by ChatGPT and Claude and all these other systems. And then you take the model, you give it new data, in the form of prompts or, more recently, in things like RAG and the Model Context Protocol, data from other sources such as papers or web services, and then the model gives you your outcome, which in predictive analytics is a prediction, but in generative analytics is, you know, a cat picture. Again, I have the link here to all of these.
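
That pipeline, historical data in, model out, new data in, outcome out, can be sketched in a few lines. This is a toy illustration only: a linear model stands in for the neural network, and the maintenance numbers are invented.

```python
# Illustrative only: the induction pipeline behind predictive analytics.
import numpy as np
from sklearn.linear_model import LinearRegression

# 1. Historical data: invented numbers for hours of part use vs. wear.
hours = np.array([[100], [500], [1000], [2000], [4000]])
wear = np.array([0.1, 0.6, 1.1, 2.3, 4.4])

# 2. Run it through the algorithm; that creates the model.
model = LinearRegression().fit(hours, wear)

# 3. Give the model new data (for an LLM this would be a prompt, or
#    context retrieved via RAG or the Model Context Protocol)...
new_data = np.array([[3000]])

# 4. ...and the model gives you your outcome: here, a prediction.
print(model.predict(new_data))
```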

Prescriptive analytics recommends solutions. You know: "Work on this deal, not that one. Sell this, not that. Do this next." And again, we have the link with the different information... Oh, not that link. Silly. Uh, come back. This link. Oh, the links merged. That's what I get for doing it at the last minute. Anyhow, if you click this link, this actual link, you'll get that slideshow.

And then finally, or not finally, but generative analytics. And again, it's done the same thing, it's merged the links. PowerPoint can be really silly sometimes. But again, the slide show is behind this link.

And then finally, deontic. So, there's all kinds of areas where AI is going to tell us what should be done. And again, this one probably... yeah, that one worked properly. This is probably the most interesting and controversial application of AI, but we're going to need it. I, I think there's really no question that we're going to need it.

Uh, you know, right now we have judges determining what community standards are. Is that the best, fairest, most equitable way of doing that? Probably not; judges are kind of in a privileged position. Influencing human behavior: people are going to use AI to do it, so we might as well talk about it. Identifying the bad: similarly, they're going to use AI to do it.

Um, and riding redistribution. I did see one proposal, and this of course is a big issue in the U.S. these days, of using AI to come up with fair and equitable distributions of ridings for federal and provincial elections, or in the U.S., state and federal elections, so that you get rid of gerrymandering. And changing the law, of course, is important.

So that's the quick overview of the taxonomy that I've developed. I hope you found that interesting and useful. This link to the presentation will be even more useful when it works. And I'm happy to take any comments or questions.


 

Q&A

 

(Question 1: Elaboration on Deontic Ethics)

Yeah. Um, so here's the question, right? The question is, "What should be the case?" And we don't really have a good mechanism for getting at that. People who've studied ethics certainly know this, right? There are all kinds of different ways of identifying what's right, what's good, what's fair, etc.

So, there are different approaches to this. One approach could be just to look at a huge pile of actual data concerning human interactions—every conversation on social media, for example—and look for indications of what's good and what's not good, and come up with an overall assessment of what society, or what some segment of society, believes is good and right. And then use that analysis, put it into a model, and that model can be applied to ethical questions, new and old, to come up with a solution that reflects what society thinks. No human can do that. And you can't really get at it using imprecise measurements like polls or surveys or votes. But AI can analyze all of that data.
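
As a toy sketch of that first approach (mine a pile of interactions for signals of approval, distill a model, apply it to new questions), with made-up examples standing in for the social media data:

```python
# Illustrative only: distilling "what people approve of" from text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for a huge pile of social media conversations,
# labelled 1 where the crowd approved and 0 where it disapproved.
posts = [
    "returned a lost wallet with all the money still inside",
    "helped a neighbour shovel their driveway after the storm",
    "took back a gift they had already given to someone",
    "lied to a friend just to avoid a small inconvenience",
]
approved = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, approved)

# Apply the distilled model to a new ethical question.
print(model.predict_proba(["kept a wallet they found on the street"])[0])
```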

Another way of doing it is to use AI explicitly as an ethics research tool. And that's what's happening kind of covertly on Reddit right now, in a subreddit called (and pardon the language) "Am I the Asshole?" In this subreddit, situations are put to people. Somebody will say, "I did such and such," right? "I gave somebody flowers, then I took them back. Am I the asshole?" And then the people in the subreddit will respond. And what I've noticed is that there's very rarely broad disagreement. Sometimes there is, but very rarely. And I've noticed that, for the most part, they're in accord with my own ethical intuitions as well.

And I've noticed more recently (and I've stopped following the subreddit, actually, because it's so distasteful) that they're using computer-generated ethical situations in order to elicit responses from humans on these ethical questions. So they're sort of using AI as a research tool to find out what this Reddit audience (and we can talk about representation there) thinks is ethical and non-ethical.

Uh, a third way is to pre-define what ought to be right and good. And that's actually the mechanism being used today in a lot of cases: you come up with, say, a set of principles or rules or values (in my ethics course, I talk about all of these things), and basically put that in as predefined content. You put that into the context window of an artificial intelligence using something like Retrieval-Augmented Generation. And then you use that content basis with a normal language model, like ChatGPT or whatever, and apply that to ethical questions.
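
Here is a minimal sketch of that third approach, with hypothetical principles and a deliberately crude retrieval step standing in for a real RAG pipeline:

```python
# Illustrative only: injecting predefined values into the context window.
PRINCIPLES = [
    "Do no harm.",
    "Treat people fairly and without favouritism.",
    "Respect individual autonomy and consent.",
]

def retrieve(question: str) -> list[str]:
    """Toy retriever: a real RAG system would rank the principles by
    embedding similarity to the question. Here we just return them all."""
    return PRINCIPLES

def build_prompt(question: str) -> str:
    context = "\n".join(f"- {p}" for p in retrieve(question))
    return (f"Apply the following principles:\n{context}\n\n"
            f"Question: {question}\nAnswer:")

# The assembled prompt would then go to an ordinary language model.
print(build_prompt("Is it fair to grade student work with AI?"))
```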

And you can also use that to guide the AI itself to act and respond ethically, according to how you have defined it, which is why we have differing AIs with different value sets. So, ChatGPT represents one set of values; Grok AI represents a very different set of values. And all of this is done implicitly, behind the scenes. But you could do it more explicitly if you made your training set public.

So then we can just run through the sorts of ethical calculations, right? If we're utilitarian-based, we could ask, "Well, what is the good that is produced by this?" My whole course was kind of a utilitarian approach, right? I'm looking at the benefits as opposed to the risks. If you're doing a completely risk-based approach, you just analyze the risks and then ask, "Is this an acceptable level of risk?" Ignore the good, like lawyers do, and just look at the risk. (Yeah, that was a dig at lawyers.) And then ask, "Have we violated this risk protocol, no matter what?" Or, you know, the categorical imperative: "What would happen if everybody followed this rule?" We can hypothesize, but we don't really know, right? You know, "What if everybody walked on the street instead of the sidewalk?" People can say, "Well, it would be madness." But would it really? Use AI to model this and play it out. And so on. I could go on about that as well.
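
For the "model it and play it out" suggestion, here is a deliberately crude simulation of the sidewalk example; every number in it is invented, and a real model would need actual traffic data.

```python
# Illustrative only: universalize a rule and "play it out" in a toy model.
import random

def conflict_rate(pct_on_street: float, car_chance: float = 0.5,
                  steps: int = 10_000) -> float:
    """Crude model: a conflict happens when a pedestrian and a car are in
    the street at the same moment. Every number here is invented."""
    conflicts = 0
    for _ in range(steps):
        walker_in_street = random.random() < pct_on_street
        car_present = random.random() < car_chance
        conflicts += walker_in_street and car_present
    return conflicts / steps

random.seed(0)
print("few walk in the street:", conflict_rate(0.05))
print("all walk in the street:", conflict_rate(1.00))
```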

So, does that develop the concept better of deontic ethics?

(Question 2: On Model Cards)

My first reaction is: are the model cards giving us actionable information? I mean, it's nice to know these things, especially the data sets. But nobody's going to read the data sets. It's just too much, right?

Uh, so, you know, we can get a sense. Like, if the AI used, for example, the Common Crawl, then we can compare it with other AIs that use the Common Crawl. Or, for example, Elon Musk just came out with a thing called Grokipedia, which is an AI-authored version of Wikipedia that actually copies a lot of Wikipedia. Having a data card for the AI that created Grokipedia would tell us what we can otherwise only learn by inference after the fact: that Grokipedia depends a lot more on social media than Wikipedia does. Wikipedia doesn't depend on it at all; Grokipedia is basically based on that, plus a few other sources.

So it sort of helps you, but I think the cards give you a little information, enough to allow you to make an inference, but probably not enough to make a correct inference. And, you know, being able to draw conclusions about the AI system could, I think, mislead people into thinking they understand what the AI is actually based on, when that's not necessarily the case. That's my view on that.

(Question 3: AI as a Judge)

So, let's draw three possible scenarios.

One scenario: using AI as a judge in a simple analytics task. I just mentioned Grokipedia and Wikipedia. The way I tested that is, I took a page, specifically the page for Berlin, on the two systems. I took the lists of references: slightly different numbers, but in the high 200s to low 300s for each of the two pages. And I asked ChatGPT to compare the two lists. ChatGPT did that and drew conclusions about the sources, specifically that Grokipedia used social media and Wikipedia doesn't. Well, that's a judgment; I'm using it as a judge to identify the difference. And if I also knew that social media is a bad thing to base an encyclopedia on, I could reliably use it as a judge in this case. No problem, right? So that's a fairly simple application. There are no real consequences to it.
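
That same judgment can be sketched without an LLM at all; here is a hypothetical version that just counts social-media domains in each reference list (the URLs shown are placeholders, not the actual references):

```python
# Illustrative only: comparing the source profiles of two reference lists.
from collections import Counter
from urllib.parse import urlparse

SOCIAL = {"x.com", "twitter.com", "reddit.com", "facebook.com"}

def domains(references: list[str]) -> Counter:
    """Count references per domain. The lists below are placeholders;
    the real ones would be scraped from the two Berlin articles."""
    return Counter(urlparse(u).netloc.removeprefix("www.") for u in references)

wikipedia_refs = ["https://www.berlin.de/en/",
                  "https://www.britannica.com/place/Berlin"]
grokipedia_refs = ["https://x.com/some_post",
                   "https://www.berlin.de/en/"]

for name, refs in (("Wikipedia", wikipedia_refs),
                   ("Grokipedia", grokipedia_refs)):
    counts = domains(refs)
    social = sum(n for d, n in counts.items() if d in SOCIAL)
    print(f"{name}: {social} of {sum(counts.values())} references are social media")
```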

What about judging student work? AI-based evaluation of student work: sometimes yes, sometimes no. It depends, right? For example, using AI to determine whether the student used AI to create the work? Yeah, very mixed results; not reliable. Using AI to determine whether the software the student wrote would actually work and produce the expected result? Perfect; works 100% of the time. Using AI to determine whether a philosophy student has offered a valid argument for the existence of God? Probably, if the test is validity; probably not if the test is soundness. You know, because there's more semantics there. So, mixed, right?

What about large language models as a judge in a court case? This brings in the two sides, right? Right now, our court systems are almost dysfunctional; the backlog is years. Some of the cases are serious; many of the cases are not as serious. I guess if it's in a criminal court, it's serious. What if we used AI as a judge in some cases to eliminate that backlog, give people a quick answer, and then allow a right of appeal to humans? That would certainly speed up the process. It still preserves a human in the loop, and it provides access to justice that a lot of people might not otherwise have. Because, you know, the fancy lawyer isn't going to sway the AI the way they could sway a human judge. So again, mixed results.

I think ultimately we need to use it judiciously in areas where it has strength, with the possibility of human oversight. Once we do that, and once we see the results coming in, I think we'll begin to rely on it pretty quickly. You know, take automated balls and strikes in baseball. (Go Jays!)

They're using it in the minor leagues now. The system works really well, and they've come to rely on it. In that case, they still have human umpires, but in the close cases where you can't really depend on the human, you can use the AI to make the evaluation. And people do rely on it.

 

Final Commentary: The Plurality of Ethics

 

I just want to make one comment, and this relates to some of the comments that Leslie made earlier about how we do, in fact, decide what is right and wrong. And she made the comment that, um, you know, there are many different views on what's right and what's wrong.

Um, and that's my conclusion in my course as well. And it's the empirically obvious conclusion to draw. There are some people who say, "Oh no, we all have the same values," especially, you know, with the lists of values for AI: "we all have the same values." But I did an analysis of that, and long story short: we don't, not even close.

Um, and so, we're moving to a world, therefore, in which there is no single ethical standard. And yet, we still need to manage not to kill ourselves, and we still need to manage, as individuals, to decide what is ethically right and what is ethically wrong.

Um, and that's the real challenge, right? It's not, "How do we make AI ethical?" It's, "How do we use a tool, this tool or any tool, in a world where there is a plurality of ethics?" Maybe I'll write that one, on the plurality of ethics. That'd be a great book title. Oh, I'll let somebody else do it; I'm old.

But that's the question, right? And, you know, I've had lots of talks, and talks with lots of people, about AI ethics, and they all fall back on this list of things, right? Justice, fairness. I'm a philosopher by trade; I work in digital technology, but my education is in philosophy. I can tell you for a fact: you can't define any of these terms to the satisfaction of everyone. Sorry, I just wanted to get that in there.

Ultimately, just one final aside: if we're going to come up with any sort of ethics at all, it's going to be based, for each individual, on what seems or feels to be right or wrong. A "sense of ethics," if you will. And here I'm drawing from sources as varied as Carol Gilligan and bell hooks on the one hand, and David Hume and John Stuart Mill on the other, who basically come to the same conclusion.

Um, so, you know, AI is a technology that is not based on rules and principles; it's based on neural networks. And so are humans. And so the application of rules and principles to try to determine what AI should do will be fundamentally misplaced. Okay, end of aside.

(Winding Down)

Yeah... I do sort of worry. You know, I mean, not to criticize them ahead of time, but engineers doing epistemology... it's kind of a rough mix. And you need, I think, a broader perspective.


Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2025
Last Updated: Nov 14, 2025 3:15 p.m.

Creative Commons License.