Stephen's Web ~ Having Reasons

Stephen Downes

Knowledge, Learning, Community

Jul 06, 2010

Originally posted on Half an Hour, July 6, 2010.

Semantics is the study of meaning, truth, purpose or goal in communication. It can be thought of loosely as an examination of what elements in communication 'stand for'.

Because human communication is so wonderfully varied and expressive, a study of semantics can very quickly become complex and obscure.

This is especially the case when we allow that meanings can be based not only in what the speaker intended, but what the listener understood, what the analyst finds, what the reasonable person expects, and what the words suggest.

In formal logic, semantics is the study of the conditions under which a proposition can be true. This can be based on states of affairs in the world, on the meanings of the terms (such as we find in a truth table), or on a model or representation of the world or some part of it.
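The truth-table idea can be made concrete with a small sketch in Python (the proposition and names here are my own, purely for illustration): the "meaning" of a proposition, on this view, is the set of truth conditions under which it comes out true.

```python
from itertools import product

def implies(a, b):
    # material implication: false only when a is true and b is false
    return (not a) or b

def proposition(p, q, r):
    # an example proposition: (P and Q) -> R
    return implies(p and q, r)

# Enumerate every assignment of truth values - the possible
# "states of affairs" - and record where the proposition holds.
truth_table = {
    (p, q, r): proposition(p, q, r)
    for p, q, r in product([True, False], repeat=3)
}

# The proposition fails only where P and Q are true but R is false.
false_rows = [row for row, value in truth_table.items() if not value]
print(false_rows)  # [(True, True, False)]
```

Enumerating the table is exactly the "conditions under which a proposition can be true" made mechanical.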

In computer science, there are well-established methods of constructing models. These models form the basis for representations of data on which operations will be performed, and from which views will be generated.
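As a rough sketch of that model-operations-views pipeline (the names here are mine, not drawn from any particular framework):

```python
from dataclasses import dataclass

@dataclass
class Reading:                 # the model: a representation of part of the world
    city: str
    temperature_c: float

def to_fahrenheit(reading: Reading) -> float:   # an operation on the data
    return reading.temperature_c * 9 / 5 + 32

def render(reading: Reading) -> str:            # a view generated from the model
    return f"{reading.city}: {to_fahrenheit(reading):.1f} F"

print(render(Reading("Casselman", 20.0)))  # Casselman: 68.0 F
```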

David Chandler explains why this study is important. "The study of signs is the study of the construction and maintenance of reality. To decline such a study is to leave to others the control of the world of meanings."

When you allow other people to define what the words mean and to state what makes them true, you are surrendering to them significant ground in a conversation or argument. These constitute what Lakoff calls a "frame".

"Every word is defined relative to a conceptual framework. If you have something like 'revolt,' that implies a population that is being ruled unfairly, or assumes it is being ruled unfairly, and that they are throwing off their rulers, which would be considered a good thing. That's a frame."

It's easy and tempting to leave the task of defining meanings and truth conditions to others. Everyone tires of playing "semantical games" at some time or another. Yet understanding the tools and techniques of semantics gives a person tools to more deeply understand the world and to more clearly express himself or herself.

Let me offer one simple example to make this point.

We often hear people express propositions as probabilities. Sometimes these are very precisely expressed, as in the form "there is a 40 percent probability of rain." Other times they are vague. "He probably eats lettuce for lunch." And other times, probabilities are expressed as 'odds'. "He has a one in three chance of winning."
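These formats are interchangeable expressions of the same kind of claim. A tiny illustrative converter (the function name is mine):

```python
# 'a one in three chance' and 'a 40 percent probability' are the same
# sort of statement in different clothes
def odds_to_probability(favourable: int, total: int) -> float:
    """A 'one in three chance' becomes the probability 1/3."""
    return favourable / total

print(f"{odds_to_probability(2, 5):.0%}")   # 40% - 'a 40 percent probability'
print(odds_to_probability(1, 3))            # a 'one in three chance'
```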

The calculation of probability can be daunting. Probability can become complex in a hurry. Understanding probability can require understanding a probability calculus. And there is an endless supply of related concepts, such as Bayes' Theorem and prior probability.
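Bayes' Theorem, for instance, relates a prior probability to a posterior one: P(H|E) = P(E|H) x P(H) / P(E). A worked sketch, with made-up numbers:

```python
# Bayes' Theorem: update a prior belief in a hypothesis H given
# evidence E. All the figures below are invented for illustration.
def bayes(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Prior belief in rain: 40%. A dark sky appears 90% of the time when
# it rains, and 30% of the time when it does not.
posterior = bayes(0.40, 0.90, 0.30)
print(round(posterior, 3))  # 0.667
```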

But when we consider the semantics of probability, we are asking the question, "on what are all of these calculations based?" Because there's no simple answer to the question, "what makes a statement about probabilities true?" There is no such thing in the world that corresponds to a "40 percent chance" - it's either raining, or it's not raining.

A semantics of probability depends on an interpretation of probability theory. And there are some major interpretations you can choose from, including:

1. The logical interpretation of probability. Described most fully in Rudolf Carnap's Logical Foundations of Probability, the idea at its heart is quite simple. Create 'state descriptions' consisting of all possible states of affairs in the world. These state descriptions are conjunctions of atomic sentences or their negations. The probability that one of these state sentences is 'true' is the percentage of state descriptions in which it is asserted. What is the probability that a dice roll will be 'three'? There are six possible states, and 'three' occurs in one of them, therefore the probability is 1 in 6, or 16.6 percent.
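A Carnap-style sketch in Python (the atoms and names are my own, for illustration): with two atomic sentences A and B, the state descriptions are the four possible assignments of true/false to each atom, and the probability of a sentence is the fraction of state descriptions in which it holds.

```python
from itertools import product

# the state descriptions: every assignment of truth values to the atoms
atoms = ["A", "B"]
state_descriptions = [dict(zip(atoms, values))
                      for values in product([True, False], repeat=len(atoms))]

def logical_probability(sentence):
    """Fraction of state descriptions in which the sentence holds."""
    holds = [s for s in state_descriptions if sentence(s)]
    return len(holds) / len(state_descriptions)

print(logical_probability(lambda s: s["A"]))            # 0.5
print(logical_probability(lambda s: s["A"] or s["B"]))  # 0.75
```

The die works the same way: six state descriptions, with 'three' holding in exactly one, giving 1/6.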

2. The frequentist interpretation of probability. Articulated by Hans Reichenbach, the idea is that all frequencies are subsets of larger frequencies. "Reichenbach attempts to provide a foundation for probability claims in terms of properties of sequences." This is the basis for inductive inference. What we have seen in the world in the past is part of a larger picture that will continue into the future. If you roll the dice enough times and observe the results, you will discover (with fair dice) that the number 'three' appears 16.6 percent of the time. This is good grounds for expecting the dice to roll 'three' at that same percentage in the future.
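A frequentist sketch (a simulation standing in for the long sequence of observed rolls): the probability of 'three' is estimated as its relative frequency over a long run.

```python
import random

# simulate a long sequence of fair-die rolls and count 'three'
random.seed(42)  # fixed seed so the run is repeatable
rolls = [random.randint(1, 6) for _ in range(100_000)]
frequency = rolls.count(3) / len(rolls)
print(f"{frequency:.3f}")  # close to 1/6, i.e. about 0.167
```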

3. The subjectivist interpretation of probability. Articulated by Frank Ramsey, "The subjectivist theory analyses probability in terms of degrees of belief. A crude version would simply identify the statement that something is probable with the statement that the speaker is more inclined to believe it than to disbelieve it." What is the probability that the dice will roll 'three'? Well, what would we bet on it? Observers of these dice, and of dice in general, would bet one dollar to win six. Thus, the probability is 16.6 percent.
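A subjectivist sketch: read a degree of belief off the bet an agent will accept. Staking one dollar for a total return of six implies a degree of belief of 1/6 (the helper name is mine, for illustration).

```python
# degree of belief implied by an accepted bet: stake / total return
def belief_from_stake(stake: float, total_return: float) -> float:
    return stake / total_return

print(f"{belief_from_stake(1, 6):.1%}")  # 16.7%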

Each of these interpretations has its strengths and weaknesses. And each could be expanded into more and more detail. What counts, for example, as a 'property' in a state description? Or, what are we to make of irrational gamblers in the subjectivist interpretation?

But the main lesson to be drawn is two-fold:

- first, when somebody offers a statement about probabilities, there are different ways of looking at it, different ways it could be true, different meanings we could assign to it.

- and second, when such a statement has been offered, the person offering the statement may well be assuming one of these interpretations, and expects that you will too, even in cases where the interpretation may not be warranted.

What's important here is not so much a knowledge of the details of the different interpretations - first of all, you probably couldn't learn all the details in a lifetime, and second, most people who make probability assertions do so without any knowledge of these details. What is important to know is simply that they exist, that there are different foundations of probability, and that any of them could come into play at any time.

What's more, these interpretations will come into play not only when you make statements about the probability of something happening, but when you make statements generally. What is the foundation of your belief?

How should we interpret what you've said? Is it based on your own analytical knowledge, your own experience of states of affairs, or of the degree of certainty that you hold? Each of these is a reasonable option, and knowing which of these motivates you will help you understand your own beliefs and how to argue for them.

Because, in the end, semantics isn't merely about what some communication 'stands for'. It is about, most precisely, what you believe words to mean, what you believe creates truth and falsehood, what makes a principle worth defending or an action worth carrying out.

It is what separates you from automatons or animals operating on instinct. It is the basis behind having reasons at all. It is what allows for the possibility of having reasons, and what allows you to regard your point of view, and that of others, from the perspective of those reasons, even if they are not clearly articulated or identified.


The whole concept of 'having reasons' is probably the deepest challenge there is for connectivism, or for any theory of learning. We don't want people simply to react instinctively to events, we want them to react on a reasonable (and hopefully rational) basis. At the same time, we are hoping to develop a degree of expertise so natural and effortless that it seems intuitive.

Connectivist theory is essentially the idea that if we expose a network to appropriate stimuli, and have it interact with those stimuli, the result will be that the network is trained to react appropriately to them. The model suggests that exposure to stimuli - the conversation and practices of the discipline of chemistry, say - will result in the creation of a distributed representation of the knowledge embodied in that discipline, that we will literally become a chemist, having internalized what it is to be a chemist.

But the need to 'have reasons' suggests that there is more to becoming a chemist than simply developing the instincts of a chemist. Underlying that, and underlying that of any domain of knowledge, is the idea of being an epistemic agent, a knowing knower who knows, and not a mere perceiver, reactor, or doer. The having of reasons implies what Dennett calls the intentional stance - an interpretation of physical systems or designs from the point of view or perspective of reasons, belief and knowledge.

We could discuss the details of having and giving reasons until the cows come home (or until the cows follow their pre-programmed instinct to follow paths leading to sources of food to a place designated by an external agent as 'home'). From the point of view of the learner, though, probably the most important point to stress is that they can have reasons, they do have reasons, and they should be reflective and consider the source of those reasons.

Owning your own reasons is probably the most critical starting point, and ending point, in personal learning and personal empowerment. To undertake personal learning is to undertake learning for your own reasons, whatever they may be, and the outcome is, ultimately, your being able to articulate, examine, and define those reasons.


Interesting discussion here. My response:

Let me take a slightly different tack. I don’t endorse all the concepts here, but use of them may make my intent clearer.

Let’s say, for the sake of argument, that ‘to have learned’ something is to come to ‘know something’.

Well, what is it to ‘know something’? A widely held characterization is that knowledge is ‘justified true belief’. There has been a lot of criticism of this characterization, but it will do for the present purposes.

So what is ‘justified true belief’? We can roughly characterize it as follows:

- ‘belief’ means that there is a mental state (or a brain state) that amounts to the agreement that some proposition, P, is the case.

- ‘true’ means that P is, in fact, the case.

- ‘justified’ means that the belief that P and the fact that P are related through some reliable or dependable belief-forming process.
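These three conditions can be put into a deliberately crude sketch (all the names are mine, purely illustrative):

```python
from dataclasses import dataclass

# knowledge as 'justified true belief': all three conditions must hold
@dataclass
class Claim:
    proposition: str
    believed: bool         # the agent holds the mental state that P
    is_the_case: bool      # P is in fact true
    reliably_formed: bool  # belief and fact linked by a dependable process

def is_knowledge(claim: Claim) -> bool:
    return claim.believed and claim.is_the_case and claim.reliably_formed

# a true belief formed by a lucky guess fails the justification condition
lucky_guess = Claim("the sky is blue", True, True, False)
print(is_knowledge(lucky_guess))  # False
```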

OK, like I say, there are all kinds of arguments surrounding these definitions that I need not get into. But the concept of ‘having reasons’ is related to the idea of justification.

Now – the great advantage (and disadvantage) of connectivism is that it suggests a set of mechanisms that enables the belief that P to be justified.


- we have perceptions of the world through our interactions with it.

- these perceptions, through definable principles of association, create a neural network.

- this neural network reliably reflects or mirrors (or ‘encodes’, if you’re a cognitivist) states of affairs in the world.

- hence, a mental state (the reflection or encoding) has been created – a belief. This belief is ‘true’, and it is ‘true’ precisely because there is a state of affairs (whatever caused the original perception) that reliably (through principles of association) creates the belief.
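The mechanism above can be given a minimal Hebbian sketch (the numbers are illustrative): repeated co-occurring perceptions strengthen a connection, so the network comes to mirror a regularity in the world.

```python
# a single connection weight, strengthened by a Hebbian rule
weight = 0.0
learning_rate = 0.1

# two features (say, 'sky' and 'blue') perceived mostly together
perceptions = [(1, 1)] * 20 + [(1, 0)] * 2

for a, b in perceptions:
    weight += learning_rate * a * b   # strengthen when both are active

# the strengthened weight is the network's encoding of the association
print(weight > 1.0)  # True
```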

All very good. But if this is the total picture of belief-formation, then there is nothing in principle distinct from simple behaviourism. A stimulus (the perception) produces an effect (a brain state) that we would ultimately say is responsible for behaviour (such as a statement of belief).

But this picture is an inadequate picture of learning. Yes, it characterizes what might be thought of as rote training, but it seems that there is more to learning than this.

And what is that? The *having* of reasons. It’s not just that the belief is justified. It’s that we know it is justified. It’s being able to say ‘this belief is caused by these perceptions’.

(This is why I say that learning is both ‘practice’ and ‘reflection’ – we can become trained through practice alone, but learning requires reflection – so that we know why we have come to have the knowledge that we have).

Learning that ‘the sky is blue’, for example, combines both of these elements.

On the one hand, we have perceptions of the sky which lead to mental states that enable us to, when prompted, say that “the sky is blue.”

At the same time, we would not be said to have ‘learned’ that the sky is blue unless we also had some (reasonable) story about how we have come to know that the sky is blue.

What I am after is an articulation of how we would come to be able to make such statements in a connectivist environment. How connectivism moves beyond being a ‘mere’ forming of associations, and allows for a having, and articulation, of reasons.

Stephen Downes, Casselman, Canada

Copyright 2024
Last Updated: Jul 19, 2024 6:26 p.m.

Creative Commons License.