Concept Identity vs. Concept Similarity

In the 2007 Pufendorf lectures, Patricia Churchland said a few things that made me stop and think. One relates to concepts and concept membership. Churchland proposed, following Rosch, that concepts are built around prototypes, that they have a “radial structure”, that they have “fuzzy borders (boundaries)”, and that concept membership is a “similarity” relationship. I arrive at a set of similar, but not identical (to use the two hot terms in Fodor’s polemics on concepts), conclusions; the differences, I think, are worth elaborating.

By way of background in intellectual history (my own), I have long been troubled by an aporia in what I believe about concepts and concept membership:

A. Concepts have (as Churchland’s slide said) fuzzy borders, and that fuzziness certainly seems to be essential.

On the other hand,

B. I find Fodor’s argument for identity and against similarity to be compelling.

The problem, of course, is that A argues for similarity as the touchstone of concept membership and implies that identity is much too strict to be a useful criterion; whereas B argues that similarity is a meaningless criterion unless there is a preexisting underlying criterion of identity: if similarity requires identity, identity is the fundamental criterion.

It seems odd, however, to argue for a robust notion of identity in the context of the complex non-linear processes of the brain; and just saying “Well, that’s the way it has to be, learn to live with it” is hardly compelling. So the first issue we have to deal with is where identity comes from. Here’s what I currently think.

It all goes back to a central fact of neuro-epistemology, to wit: the brain has no direct access to the outside world; all access is via transducers (receptors and effectors). I think you mentioned this in one of the lectures. Thence, via Marr, Choe, Maturana & Varela, and von Foerster, I arrive at the following. In the general case, the only thing the brain can reliably identify in the brain-environment system of which it is a component is invariances, that is, invariant states. For a state to be invariant, it must be invariant under some set of operations. The particular set of operations under which a state remains unchanged is, in a real sense, the meaning of the state insofar as the brain is concerned. Nothing else can be known with certainty. Von Foerster, writing at a higher level of generality, uses the term “eigen states” to describe these meta-stable (stable over at least some period of time) states.
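
To make the invariance idea concrete, here is a minimal sketch (the function name is mine, and the choice of cos as the “operation” is purely illustrative, not anything from von Foerster): repeatedly feeding an operation its own output converges, for suitable operations, on a state the operation leaves unchanged.

```python
import math

def eigen_state(op, x0, tol=1e-12, max_iter=10_000):
    """Iterate op on its own output until the state is invariant under op
    (an "eigen state" in von Foerster's sense: op(x) == x, within tol)."""
    x = x0
    for _ in range(max_iter):
        nxt = op(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("no invariant state reached")

# Very different starting points converge on the same invariant state:
# the unique fixed point of cos, x = cos(x) = 0.739085...
print(eigen_state(math.cos, 0.1))
print(eigen_state(math.cos, 5.0))
```

The two runs start from quite different states but land on one and the same invariant; nothing about the dynamics delivers a “similar” fixed point.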

Von Foerster’s terminology derives from a result of matrix algebra. A square matrix M has families of “eigenvectors”: if E is an eigenvector of M, then multiplying E by M yields kE for some scalar k (the eigenvalue). In other words, multiplication by M takes certain vectors (its eigenvectors) into themselves up to a multiplicative constant. Von Foerster notes that the mathematics of a dynamic system is such that it has eigen states that the system maps into themselves (they are invariants of a sort); he characterizes eigen states as the way the system is able to “classify” its environment. A key result of von Foerster’s is that the eigen states of such systems are discrete and meta-stable. In the terminology of neural networks, these states are like attractor points (I am eliding some caveats, but the assertion is correct enough for the argument to stand). Like attractor points, they give the system the ability to do pattern completion.
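
A quick numerical check of both halves of that paragraph (a sketch; the particular matrix and starting vector are arbitrary choices of mine): first, that an eigenvector E satisfies ME = kE; second, that iterating M from a generic state is drawn onto an invariant, attractor-like direction.

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# The eigenvector property: M @ E equals k * E for the eigenvalue k.
eigvals, eigvecs = np.linalg.eig(M)
E, k = eigvecs[:, 0], eigvals[0]
print(np.allclose(M @ E, k * E))   # True: M takes E into itself up to a constant

# The dynamical reading: iterating M (with normalization) from a generic
# state converges onto an invariant direction (attractor-like behavior).
v = np.array([1.0, -0.3])
for _ in range(100):
    v = M @ v
    v = v / np.linalg.norm(v)
print(v)   # aligned, up to sign, with the dominant eigenvector of M
```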

Self-modifying systems have the diachronic ability to adaptively create (learn) new eigen states. But synchronically eigen states always have the discreteness property. Two eigen states are either identical or different. Similarity is not a characteristic of eigen states. Remind you of Fodor?
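
A toy rendering of that distinction, using a Hopfield-style network of my own construction (a hedged sketch, not a claim about how brains implement it): synchronically, a given state either is or is not a fixed point of the dynamics; diachronically, a Hebbian update adds a new discrete fixed point to the repertoire.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64

def is_eigen_state(W, x):
    """Synchronic question: is x a fixed point of the network dynamics?"""
    return np.array_equal(np.where(W @ x >= 0, 1, -1), x)

old = rng.choice([-1, 1], size=N)   # a pattern the network already knows
new = rng.choice([-1, 1], size=N)   # a pattern it has not yet learned

W = np.outer(old, old) / N          # Hebbian weights storing only `old`
np.fill_diagonal(W, 0)
print(is_eigen_state(W, old), is_eigen_state(W, new))   # expect: True False

W = W + np.outer(new, new) / N      # diachronic step: learn `new`
np.fill_diagonal(W, 0)
print(is_eigen_state(W, old), is_eigen_state(W, new))   # expect: True True
```

Before learning, `new` is simply not an eigen state; after one Hebbian update it is, and both stored states are exact fixed points: identical or different, never “sort of”.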

Let’s identify a concept with an eigen state. (In certain details, I think this is an oversimplification to the point of incorrectness, but I’ll hold that polemic for another time because it’s not central to this argument.) So, here we are:

Thesis: Concept similarity is at the core of concept membership; there’s no need for concept identity.

Antithesis: Concept identity is at the core of concept membership; similarity is a nonsensical thing to hang concept membership on.

Synthesis: Concepts are eigen states (states defined by sets of operations that preserve an invariant) and as such are unique and have identity conditions. The processes that work to arrive at a particular eigen state may (and probably in the brain generally do) involve completion effects that are undeniably “similarity” effects. So, at one and the same time,

1) Concepts cannot be fuzzy because eigen states are discrete

and

2) Concepts are essentially fuzzy because completion effects are always involved in arriving at them.

If you have a large enough portion of the eigen state associated with a concept, completion effects will fill in the rest and arrive at the unique eigen state (and thus the concept) itself. To the extent that completion effects vary in response to other things going on in the brain, there can be no precise specification of which patterns will or will not complete to a particular concept. This is why the merest ripple on the surface of cat-infested waters is sufficient to cause CAT thoughts, and why, during an invasion of robot cats from outer space, a host of cat-like creatures emerging from a flying saucer does not cause CAT thoughts.
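
The same kind of toy network makes the completion story concrete (again a Hopfield-style sketch of mine, with made-up CAT and DOG patterns): a corrupted input that is merely similar to the stored CAT state is driven to exactly that state.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64

cat = rng.choice([-1, 1], size=N)   # the eigen state we will call CAT
dog = rng.choice([-1, 1], size=N)   # a second stored state, for crosstalk
W = (np.outer(cat, cat) + np.outer(dog, dog)) / N
np.fill_diagonal(W, 0)

def complete(x, steps=10):
    """Pattern completion: drive a partial/noisy state to an attractor."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

noisy = cat.copy()
noisy[rng.choice(N, size=12, replace=False)] *= -1   # merely *similar* to CAT
print(np.array_equal(complete(noisy), cat))          # expect: True
```

The input is only similar to CAT; what the dynamics returns is the unique CAT state itself, which is the sense in which similarity effects do the work of arriving at an identity.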

So much for concept similarity versus concept identity.
