Notes on Brian Cantwell Smith, Rehabilitating Representation

Stephen Downes

Knowledge, Learning, Community

Nov 17, 2009

Originally posted on Half an Hour, November 17, 2009.

Stuff I like:


I am making the stronger claim that even in paradigmatic cases of
first-order logical inference, the operative constraints on “what can
be done” (what can be proved, what can be inferred, what can be
mechanized, what can be computed) are and always have been ultimately
physical, even if they have not classically been understood
in that way. In other words, the reigning theoretical presumption
that effectiveness and computability are appropriately
understood abstractly or syntactically isn’t too narrow. It is false.
Right! Because, as he says:

They have been understood as abstract, but classical
understanding is wrong. In point of fact they are concrete.
This is what I try to get at when I talk about things being a 'sign' or an 'indicator', rather than an instance of a general rule or principle. Understanding is no more about abstract rules of language than science is about laws of nature. Both (as understood in a post-positivist sense) involve the creation of (pragmatic) 'rules of thumb' that suggest - but do not dictate - an interpretation of the data.

This is also a very good observation:
I worry that, in eschewing abstract formality in favour of
concrete materiality, a spate of embodied cognition theories, from
cognitive neuroscience to cultural theory, even if dressed in impeccable
scientific credentials or urbane French garb, are unwittingly
falling prey to a kind of causal reductionism or causal fundamentalism
incapable of understanding what is ultimately distinctive
about minds and mentality—having critically to do with semantic
directedness.
The appeal to a causal model of the world just is an appeal to that model of the world where generalizations - universals - are literally true. But there are many knowable cases where they are not literally true - stochastic processes, quantum physics, chaotic systems - which (to my mind) makes it more likely that the cases where we assert that they are literally true are probably misrepresentations. The generalization 'A causes B' is a very contingent statement, contingent not only on the state of affairs in the world but also on the state of representation in the believer.

Also - section 2a is beautifully done and illustrated. He knows this stuff cold (and nobody! knows this stuff cold). Very, very impressive.

Also,

Logicians, in my experience (as opposed
to logically-oriented computationalists), are mathematicians, not
naturalists.
Perhaps semantics can be naturalised; perhaps cognitive
science will show us how to naturalise semantics. But logic
doesn’t show us how.
This is an important point. To put it plainly: something isn't true just because it's logical.
Or poetically: there is more that can be imagined than there is under the sun.

But it doesn't matter...

For our purposes all that matters is that questions
of what secures the interpretation function I, what sort of
account semantics will ultimately be explained by, either in a particular
case (such as arithmetic) or in general, is not required, by
anything in the logicist framework, to be naturalised or even
naturalisable.

And I think the logicists make pretty much the same point the other way (the 'appeal to nature', or the 'naturalistic fallacy') - that it is an error to presuppose that principles of nature (causality, properties of brains, etc.) determine, or are even reflected in, principles of logic.

Also, I love this:

When logical systems are presented, traditionally, the syntax,
grammar, proof regimen, interpretation function, etc., are all usually
simply laid out in ostension—as if they had arrived, full-blown,
as “facts” for theoretic consideration.
Of course - one has to observe that it couldn't very well be said to be formal (in the Fodor sense of "shapes of the objects in their domains") otherwise.

This is a nice statement of the issue:

It is critical to
our project, however, to recognize that one of the arguments frequently
heard in the “embodied cognition” camp is that it is exactly
in virtue of being representational that logic exemplifies the
properties identified on the left—and therefore that, in order to
manifest the properties listed on the right, a system must abandon
representation.
Assume I'm in the right-hand column (which is more or less true). Does it follow that I'm an anti-representationalist?

I can say "no" at this point - because if you grant me everything on the right, there's a strong sense in which my being a representationalist or not doesn't matter (though I will later be called to account for this...).

I will say this, at least: that I am in agreement with the author that what we describe as logic is in fact describable (and more properly described) in the terms of the right-hand column, that is,

although this abstract
view is socio-intellectually or epistemically correct about
how logicians treat or analyse logic, it is ontologically misleading.

This is a very good statement of the issue:

for materialists or physicalists, the question is
how ordinary bodies or mechanisms, which in one sense are
merely physical, in another sense are not merely physical, but
must instead authentically and legitimately be understood (perhaps
even constituted) in intentional terms?
For me, what makes something "understood in intentional terms" is that there is a third party that actually interprets it that way. That, in other words, the intentionality (representationality, meaning, etc.) of the physical object is not in the object itself, but entirely in the eye of the beholder.

That's why I think it is wrong to say that the mind/brain consists of or contains representational states. Because to do so presupposes a third party (over and above the thinker him- or herself) who interprets these states as representational (and just some such states, not all such states, as some are presumably like plants reacting to sunlight, i.e., non-counterfactual?).

the
“natural home” of computability results lies somewhere in between—
but much closer to the concrete end than is normally (especially
theoretically) realized.
Agreed.

Reigning theories of logic and computing treat computational
entities (states, marks, etc.) as abstract individuals, whereas
in fact they are more properly understood as concrete types—
i.e., as types of concrete things.
Agreed.
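One way to picture the type/token point being agreed to here - my own gloss, a minimal Python sketch and not anything from the paper - is that the 'mark' of computational theory behaves like a type whose tokens are concrete physical things, each with physical properties the abstract treatment ignores:

    from dataclasses import dataclass

    # Sketch (my illustration): the "mark" of computational theory as a type
    # whose tokens are concrete physical things.
    @dataclass
    class ZeroMark:                    # the type: what theory treats as "the mark '0'"
        voltage: float                 # concrete physical property of this token
        location: str                  # where this particular token physically sits

    # Two concrete tokens of the same type: one mark, two sets of physical facts.
    t1 = ZeroMark(voltage=0.2, location="register A, bit 7")
    t2 = ZeroMark(voltage=0.1, location="cache line 3, bit 0")
    assert type(t1) is type(t2)        # the theory quantifies over the type; the
                                       # constraints on what marks can do come
                                       # from tokens like these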

the constraints
on the notions (what it is to be a state, what it is to be a mark,
where the properly-vaunted computability comes from, metaphysically)
derive from the concrete, physical world
Agreed.

recognising
the physical character of the notion of effectiveness that
constitutes half of the primary dialectic on which computing
rests, and that serves as a lynchpin in our understanding of logicism—
is a necessary prerequisite, I believe, of understanding the
essential character of representation.
I easily grant the "physical character of the notion of effectiveness"...

OK, back to the semantical domain...

if the embodied cognition movement aims to deal with material
creatures interacting with their environments, we will have
to adjust our conception of semantics so that semantic domains
don’t just include concrete individual objects “at the bottom,”
Sure, this is the classic rejection of reductionism...

The semantic interpretations of representational
vehicles are analysed in terms of abstract (set-theoretic) models or
“stand-ins” for what I will call the genuine target domain,
Right. This is what my Master's thesis was about - that we typically substitute the (nice, neat) model for the (messy, complex) reality.

But more serious
issues arise if, forgetting that M is a model, we mistakenly
take (what I will call) insignificant properties of M—i.e., properties of M that are not intended to model anything in D—to be
part of the interpretation of S.

Right. This was the conclusion of my thesis (not nearly as gracefully stated as it is here).
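A toy illustration of the 'insignificant properties' point - a minimal Python sketch of my own, not an example from the paper: model an unordered target domain D with a Python list M; the list's ordering is a property of M that is not intended to model anything in D, and reading significance into it is exactly the mistake described.

    # Sketch (my illustration): M is a model of a target domain D.
    # D: the set of capitals someone has visited - there is no ordering in D.
    # M: a Python list standing in for D. Lists are ordered; that ordering is
    # an "insignificant property" of M, modelling nothing in D.
    visited = ["Paris", "Lima", "Ottawa"]     # M, the model

    # Legitimate use of M: membership, which does model a fact about D.
    assert "Lima" in visited

    # Illegitimate use of M: treating list position as if it modelled something
    # in D (say, "the first city visited") - importing an insignificant property
    # of the model into the interpretation.
    first_visited = visited[0]                # a fact about M, not about D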

I can accept this:

This is why the Maturana/Varela image is apt: a
system adjusts its internal state, and is “structurally coupled” to
its environment. Re pure causality or pure effective mechanism,
that is all.
So in a sense all properties of the system are representations of the external state - but then, I would say that actually identifying any particular property of the system - say, 'the belief that Paris is the capital of France' - as a representational property of the system (as opposed to, what, an accidental property?) is a purely interpretive exercise - something some third party external to the system can do, but which the system itself cannot do.


Stuff I need to think about:

I will say that, at the personal level,
we register the world in terms of the objects, properties, situations,
states of affairs, features, etc., that we thereby take it (the
world, that is) to consist in.
OK, so registration is like storing data (that's why it can be indifferent as to the provenance of that data, as an inference, perception, etc.) such that this data represents something ("human thought, perception and understanding of the world is ineliminably ‘as’"), and data-storage actually suggests that the data contains (what might be called) information (registration is a success-verb: normally, registrations register something).

Which all seems to me to be eliding over the major issues in dispute, rather than resolving them. Which may be appropriate, if the issues are not relevant, but if we come back later with some realist account of registration, and make it turn a semantic trick...

Note also the formulation of this. We have, when it is introduced, the idea of "to register X", but this slides fairly quickly into a formulation "to register X as Y", and while registration, thought of simply, is fairly easily understood, the idea of representing 'X AS Y' is much more difficult. What would be the difference between "She registers her mother's coming to the door" and "She registers her mother's coming to the door as 're' or 'repeating' placement (in Strawson's sense) of the feature Mama"? We have, literally, reference coming in through the back door.

Of course, this is what makes it a representational theory (as opposed to, say, mine).

Also...

More generally, whatever is the nature of the interpretation relation
I between representational vehicles and the entities they
designate or denote, it is not something that happens. The numeral
‘2’ designates the number two, or so at least it is normally
presumed; but that “designation” is not a process, not something
that happens, not something that takes energy or time. Semantics—
at least in the small—is something that “obtains.”
Which, of course, is true - and hence is (isn't it?) the weakness of the model... or maybe the strength?

semantic properties—being referred
to, being true, being consistent, etc.—are not effective [i.e., not computable. SD]. As
we will see, that is one of their enormous virtues—something that
causal reductionists ignore at their peril.
In other words - you can't (need not? - the meaning of "obtains" is a bit unclear here) do inferences with semantical structures, without invoking some mechanism that takes energy or time...

Also...

The (upper-level) effective transitions are normatively regulated
to honour the (lower-level) semantic facts.
This general pattern—of the effective mandated to honour the
semantic—is as deep a fact about logic as there is.

Well this is something I've always believed, but how does this reconcile with the above-mentioned naturalistic fallacy? Or is the point that it's not a fallacy?
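In standard textbook terms - my gloss, not the author's wording - the 'mandate to honour the semantic' is what the soundness norm expresses, with completeness as its converse:

    Soundness:     if Γ ⊢ φ (φ is derivable by the effective proof machinery)
                   then Γ ⊨ φ (φ holds on the semantic interpretation)

    Completeness:  if Γ ⊨ φ then Γ ⊢ φ

Nothing in the effective machinery itself enforces either direction; the honouring is a norm imposed on the machinery from outside.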

But wait a minute...

Without norms, logic would be an empty vessel, devoid of
substance—uninterpreted mechanism flapping aimlessly in the
breeze.
the norms operate exactly by
tying the two realms back together again. It is this reconciling tug,
as I’ve said, that gives logical systems “bite.”

So - it's not the facts of nature that dictate what sort of logic we ought to employ, but rather, nothing more than norms? Norms?


Maybe not. Because what we get is:

one is forced to conclude that what is universally known
as the theory of effective computability is, in point of fact, (and
presumably will eventually be historically recognised as) a mathematical
theory of causality—namely, a theory of what can be
done, in what time and with what resources, by what sorts of arrangements
of concrete, physical stuff.

and in particular

I have dubbed the properties
that the theory traffics in effective properties, rather than physical
properties; they are properties that systems (or states) can do
consequential work in virtue of possessing.
This is for me very interesting.

A lot of what I do has to do with connections, and when asked what a connection is, I say, typically, "a link between two or more entities such that a change of state in one entity may result in a change of state in another entity". In other words, two entities linked by effective properties.
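As a minimal sketch of that definition - my own illustration in Python, not anything from the paper, with all the names hypothetical - here are two entities linked such that a change of state in one may (but need not) result in a change of state in the other:

    import random

    # Sketch (my illustration) of a "connection": a link between two entities
    # such that a change of state in one MAY result in a change of state in
    # the other - i.e., entities linked by effective properties.
    class Entity:
        def __init__(self, name, state=0.0):
            self.name = name
            self.state = state
            self.links = []                    # (target, weight) pairs

        def connect(self, other, weight=0.5):
            self.links.append((other, weight))

        def set_state(self, value):
            self.state = value
            for target, weight in self.links:
                if random.random() < weight:   # "may result in": contingent, not guaranteed
                    target.state += weight * (value - target.state)

    a, b = Entity("A"), Entity("B")
    a.connect(b, weight=0.8)
    a.set_state(1.0)                           # a change in A that may propagate to B
    print(b.state)                             # changed, or not

Nothing in the link represents anything; it is just an effective property - which is why the congruence seems worth looking for.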

If this is the case I would look for congruence between this theory and network theory.

So we can get, from connectionism / connectivism, some kind of representational theory (though not a logicist theory). A representational theory that is in fact rather more expressive ("in order to understand rhythm and dynamic movement.")

neurophysiology
and the theory of effective computability are climbing up the
same mountain, even if from different sides.
... even if they are not climbing at the same speed....

Also...

Fodor's formalism condition continues to gnaw at me. We have:

It is the negative aspect of formality, however, that concerns
us here: the ubiquitous assumption that both the syntactic properties
and identity conditions on the expressions or representational
vehicles (elements of S), and the operations or effective
transitions defined over them, must be defined independently of
semantics.
Now, this has been horribly abused (by people like Dan Willingham) to say that people cannot learn critical thinking without first learning a body of content (http://www.aft.org/pubs-reports/american_educator/issues/summer07/Crit_Thinking.pdf and elsewhere).

But I still need (want?) to be able to say:

- we do not form representations (models) etc. with syntactic properties independent of the subject matter being described, and
- we can learn critical thinking

A big part of this, I think, has to do with the positive account of formalism:

The positive aspect of formality has to do
with shape, syntax, grammar, or “form”; it militates that inference
operations be definable (and work, causally) in virtue of the syntax
or form of the constituent expressions (representational vehicles).
If this is what formalism is, then it is not clear to me that it can be learned (I suspect the author might agree with this). But if critical thinking can be learned, and if moreover it can be learned as a discipline independent of the specific subject matter in which it is embedded, then there is some sort of abstraction occurring - not of the signs and symbols being used to represent the phenomena, but in the phenomena themselves.

But not in the phenomena themselves, I don't think. It would depend on there being an external recognizer, who can interpret or see patterns - abstractions - in the phenomena. No?

The author approaches this a different way:
representations often bear semantic relations to situations or
states of affairs that are distal, and distal things, because of the locality
requirements of physics, simply cannot get into the act in
affecting the here-and-now.
But is this explanation - that the states of affairs are distal - accurate?

Closer examination would suggest otherwise. For example, the case of counterfactuals - the state of affairs is not merely distal, it is non-existent. It exists only in the mind of the interpreter.

But even more importantly - and here I think I agree with the author - the truth of the counterfactual cannot 'make' that counterfactual true (existent?) in the world. In other words,

Semantic properties aren’t effective
Am I missing the point?

When we say it that way, we can mean "we can refer to the past without actually causing it to be some point in time in the past".

But it's also a rethinking of Tarski, isn't it?

Not "'Snow is white' is true iff snow is white" but rather "'Snow is white' is true only if snow is white".
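Spelled out in the usual notation - standard Tarski on the one side, the weaker one-way reading on the other:

    Tarski's T-schema:    'Snow is white' is true  ↔  snow is white
    The weaker reading:   'Snow is white' is true  →  snow is white

Only the left-to-right direction is kept: the truth of the sentence answers to the world, but asserting the sentence does not, effectively, make anything so.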

But - I'm still going to say, then, even more confidently: semantic properties are not properties of the entities they (purportedly) represent, they are properties of ourselves.

The point is that
from an ontological point of view, formality is wrong, because
too extreme. But it rests on a profound insight, about the non-effectiveness
of the (normatively-governing) semantic.
OK. I can imagine pink elephants without there having to be pink elephants. I can suppose a sentence is true without it having to actually be true. I can think something is right, or just, or meaningful, without those properties existing in nature at all.

But what gives my thoughts some sort of motor - what drives them from one to the next, as (say) an inference - is the formalism, the (effective) mechanical inference, because, in many instances, the motor of the natural world simply isn't available.

Thinking about this...


Things that make me go "umm...."

thus
consider it to be a substantive question how much of human existence
and/or participation in the world rests on registering it—as
opposed, say, simply to bumping into it, or responding as a purely
physical or mechanical device

What? Wait! Is that what the issue is? If we can think about things that aren't here, then we must be representing? If we can be depicted as thinking about things that aren't here, then we are 'representing'? That would make all counterfactuals that happen to be true in the world some sort of representation, with the result that everything in the world is a representational entity.

A spring can be coiled, or a rock elevated to a distance, such that, under certain circumstances, the spring can release, or the rock fall. Do we therefore say that the spring 'represents' the state that would result in its uncoiling? (Or of its original coiling in the first place?) We could... but it would not result in the sort of understanding of 'representation' we typically associate with the word. Crucially, it does not embrace the concept of 'misrepresentation'. A spring cannot 'misrepresent' what would uncoil it, because it does not have the cognitive capacity.

We say 'x represents y' not as a result of some natural process (such as coiling, or registering) but as a third party that is interpreting x with respect to y. The representational nature of x is not a property of x, nor even a property of the relation between x and y, but rather, the result of a stipulation on the part of a third party.

But if that is the case, then there is no contradiction in saying, not that 'x represents y', but rather, 'x is a sign of y'. X isn't a mark that admits of some semantics, but is rather a trace, which allows an inference.

Next:

From the abstractness of set-theoretic models,
nothing (necessarily) follows about the concreteness or abstractness
of genuine (target) semantic realms.
Yes.

Methodological abstractness,
that is, need not vitiate subject matter concreteness.
Um... OK...

So discussions
of the issue of abstractness vs. concreteness—item number
1 in figure 5 (page ■■)—should not be influenced by the fact
that logicists do semantics model-theoretically. Doing so is compatible
with arbitrarily concrete commitments about the nature
of the semantic domain.
Well, wait...

Suppose some principle p in a model M such that p is abstract. We agree that no such entity p is held to exist (or not exist) in the semantic realm D being modeled. That's what's asserted here. But insofar as M is a stand-in for D in the interpretation, M is representing (or should I maybe say registering?) that there is an entity p in the realm D, at least insofar as the employment of p legitimizes some inference in S.

The assertion seems essentially to be that we have norms that allow us to say (via models) that there are abstract entities in the world (D) such that these entities can be used in a system of inference (S). (The next move would be to show that there are such inferences, and the circle will be complete).

the secondary dialectic adumbrated
above, having to do with the relation between the abstract and
the concrete, will be as applicable to environments and task domains
as it is to creatures and cognition itself.

Also:

a representational system:
Exploits the effective properties of its inner states—
properties that it can use, but doesn’t intrinsically care
about—to “stand in for,” or “serve in place of” effective connection
with states that it is not effectively coupled to, so as
to lead it to behave appropriately towards those remote or
distal situations —situations that it does care about, but that
it can’t use.
The doctrine of 'caring' or 'not caring' has come upon us very gradually - and would be, I think, very difficult to work out in practice.

I much prefer this:

Or more simply yet, representational systems:
1. Exploit what is local and effective
2. So as to behave appropriately with respect to (to satisfy
governing semantic norms regarding) what is distal and
non-effective.
But there is still this intentionality in there... "so as to behave appropriately" implies some will, goal-directedness, etc. in the representer that operates independently of that which is represented.

I could just say it's a homunculus problem. But more concretely - if we refer to figure 9, for example, there is no way for the figure being represented to 'see' the semantic orientation. That's why in figure 10 we get "reciprocal causation". But of course causation (Newton notwithstanding) is not reciprocal.

More on this:

So
a natural first way to generalise the logicist framework is to license
causal connections (‘↔’) across the S-D boundary.

we need a metaphysics to make this work...

the governing normative
conditions on non-effective tracking exploit the fact that
the passage of time for an agent, and the passage of time in the
agent’s task domain, are one and the same.
Agents are not just embodied, in other words, in the sense of
being made of concrete physical stuff. They are also embedded
EMBEDDEDNESS provides for various forms of coordination between
the realms of representational activity and realms that that
representational activity is about. Metaphysically, the point is
that not all coordination involves causal or effective coupling.
OK, then...

But now the point that 'semantical properties are not effective' just became a lot more complex.

We say, for example, "o'clock properties are indisputably non-effective." Fair enough. But then we say
The task for a clock (or
clockmaker) is to exploit the effective properties of the inner
workings (clockworks) in order to establish an appropriate relationship
between those aspects of the hands that are effectively
controllable (the position around the dial) and the non-effective
temporal property thereby represented.
But now how does this happen? Through this causal soup in which the clock-maker is embedded. So the properties of the clock are causally dependent (at least in part, right?) on the nature of time. And if there is such a thing as 'reciprocal causation' then the properties of time are (at least in part, right?) dependent on the clock. The semantics of the clock have become effective, by virtue of the manner in which they were created.
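To make the clock example concrete - a minimal Python sketch of my own, not the author's, with the drift figure purely illustrative - the clockwork exploits effective properties (tick-counting, a gear ratio) so that the hand position stands in for the non-effective o'clock property; and precisely because that property is non-effective, nothing in the mechanism itself registers a violation of the norm:

    # Sketch (my illustration): a clock exploits effective properties of its
    # inner workings (tick counting) to stand in for a non-effective property
    # (what o'clock it is).
    class Clock:
        TICKS_PER_HOUR = 3600              # the clockmaker's design choice

        def __init__(self):
            self.ticks = 0                 # effective, locally manipulable state

        def tick(self):                    # the only thing that "happens"
            self.ticks += 1

        def hands(self):                   # the effectively controllable hand position
            return (self.ticks // self.TICKS_PER_HOUR) % 12

    real_seconds = 3600 * 5                # five real hours pass in the world
    clock = Clock()
    for _ in range(int(real_seconds * 1.2)):   # a slightly fast oscillator (drift)
        clock.tick()
    print(clock.hands())                   # 6, though it is really 5 o'clock: the norm
                                           # is violated and nothing in the mechanism notices

The o'clock property never enters the mechanism; whether the causal soup in which the clockmaker is embedded makes that semantics 'effective' after all is exactly the question.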


Typos

p.27 "Normally, one
simply proves or demonstrates the soundness of a system, and the
shows its completeness (if things work out well43)."

should be "and this shows its completeness..."

p. 41

"It in
technical or theory-internal
contexts, logicians speak of an element or structure of M’s
being a model of a sentence S (or of some other syntactic or representational
entity)."

Should just be "In technical or..." ??

