Posted to IFETS, June 19, 2007.
From: Gary Woodill
One reference that supports the contention that concepts are instantiated
in the brain is Manfred Spitzer's book The Mind within the Net: Models of
Learning, Thinking, and Acting. Spitzer spells out how this takes place. For
a brief review of this book see my April 10, 2007 blog entry entitled The Importance of Learning Slowly.
The Synaptic Self: How Our Brains Become Who We Are by Joseph LeDoux covers
much of the same ground. Nobel laureate Eric Kandel outlines a model of how
learning is recorded in the brain in his easy-to-read In Search of Memory:
The Emergence of a New Science of Mind.
I second these points, and especially the recommendation of The Synaptic Self, which is a heady yet cogent description of the mind as a (partially structured) neural network. Readers interested in the computational theory behind neural networks are referred to Rumelhart and McClelland's two-volume Parallel Distributed Processing.
That said, the statement 'concepts are instantiated in the brain' depends crucially on what we take concepts to be. Typically we think of a concept as the idea expressed by a sentence, phrase, or proposition. But if so, then there are some concepts (argue opponents of connectionism) that cannot be instantiated in the brain (at least, not in a brain thought of as essentially, and only, a neural network).
For example, consider concepts expressing universal principles, such as 2+2=4. While we can represent the individual elements of this concept, and even the statement that expresses it, in a neural network, what we cannot express is what we know about this statement: that it is universally true; that it is true not only now, in the past, and in the future, but in all possible worlds; that it is a logical necessity. Neural networks acquire concepts through the mechanisms of association, but association produces only contingent, and not necessary, propositional knowledge.
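The associationist point can be made concrete with a toy example. What follows is a minimal sketch of my own (the Associator class and its methods are illustrative inventions, not any particular connectionist model): a purely associative learner that tallies co-occurrences between stimuli and responses. However many times it sees '2+2' paired with '4', its confidence remains a frequency of past experience, never the certainty of a necessary truth, and it has nothing at all to say about cases outside its experience.

```python
from collections import Counter, defaultdict

class Associator:
    """A purely associative learner: it records how often each response
    has followed each stimulus, and answers from those counts alone."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, stimulus, response):
        # Learning is nothing but strengthening an association.
        self.counts[stimulus][response] += 1

    def recall(self, stimulus):
        c = self.counts[stimulus]
        if not c:
            return None  # no experience, hence no answer at all
        return c.most_common(1)[0][0]

    def confidence(self, stimulus, response):
        # Confidence is just relative frequency: always contingent,
        # approaching but never reaching the certainty of necessity.
        c = self.counts[stimulus]
        total = sum(c.values())
        return c[response] / total if total else 0.0

a = Associator()
for _ in range(100):
    a.observe("2+2", "4")
a.observe("2+2", "5")  # one noisy observation

print(a.recall("2+2"))           # -> 4
print(a.confidence("2+2", "4"))  # high (100/101), but not 1.0
print(a.recall("3+3"))           # -> None: no experience, no knowledge
```

The point of the sketch is that nothing in the learner's state distinguishes '2+2=4 is necessarily true' from '2+2=4 has always been observed so far'; the machinery has only frequencies to offer.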
There are two responses to this position. Either we can say that associationist mechanisms do enable the knowledge of universals, or we can say that the concepts we traditionally depict as universals are not in fact as we depict them. The former response runs up against the problem of induction, and that problem is (I would say) generally thought to be unsolvable.
The latter response, and the response that I would mostly endorse, is that what we call 'universals' (and, indeed, a class of related concepts) are most properly thought of as fictions: that is to say, the sentences expressing the proposition are shorthand for masses of empirical data, and do not actually represent what their words connote, do not actually represent universal or necessary truths. Such is the approach taken by David Hume, in his account of custom and habit, by John Stuart Mill, in his treatment of universals, and even by Nelson Goodman, in his 'dissolution' of the problem of induction by means of 'projectability'.
If we regard the meanings of words as fixed and accurate, therefore, and if we take concepts to be the ideas expressed by those words, then concepts cannot be instantiated in the brain, at least not in a brain thought of as a neural network. If we allow, however, that some words do not mean what we take them to mean, that they are in fact 'fictions' (even if sometimes taken to be 'fact'), then concepts can be instantiated in neural networks.