Archive for February, 2008

Free Will, Searle, and Determinism

Thursday, February 28th, 2008

Apropos of determinism: I recently looked into John Searle’s latest (2007) book, Freedom & Neurobiology. As usual, he gets his knickers into the traditional twist that comes from being a physical determinist and an unacknowledged romantic dualist. In this connection, the following line of reasoning occurred to me.

Searle says (p. 64) that the conscious, voluntary decision-making aspects of the brain are not deterministic, in effect asserting, for our purposes, the following: if there is an algorithm that describes conscious, voluntary decision-making processes, it must be (or at least be perceived as) non-deterministic. Although it would be possible to extend the definition of an algorithm to include non-deterministic processes, the prospect is distasteful at best. How can we respond to this challenge? Searle reasons (p. 57) that

We have the first-person conscious experience of acting on reasons. We state these reasons in the form of explanations. [T]hey are not of the form A caused B. They are of the form, a rational self S performed act A, and in performing A, S acted on reason R.

He further remarks (p. 42) that an essential feature of voluntary decision-making is the readily perceivable presence of a gap:

In typical cases of deliberating and acting, there is a gap, or a series of gaps between the causes of each stage in the processes of deliberating, deciding and acting, and the subsequent stages.

Searle feels the need to interpret this phenomenological gap as the point at which non-determinism is required in order for free will to assert itself.

Searle’s non-determinist position with respect to free will is his response to the proposition that, in theory, absolutely everything is and always has been determined at the level of physical laws. “If the total state of Paris’s brain at t1 is causally sufficient to determine the total state of his brain at t2, in this and in other relevantly similar cases, then he has no free will.” (p. 61)

By way of mitigation, however, note that quantum mechanical effects render the literal total-determinism position formally untenable, so a serious discussion requires assessing how much determinism there actually is. As Mitchell Lazarus pointed out to me, in neuro-glial systems, whether an active element fires (depolarizes) or not may be determined by precisely when a particular calcium ion arrives, a fact that ultimately depends on quantum mechanical effects. On the other hand, Edelman and Gally (2001) have observed that real-world neuro-glial systems exhibit degeneracy, which is to say that algorithmically (at some level of detail) equivalent consequences may result from a range of stimulation patterns. This would tend to iron out at a macro level the effects of micro-level quantum variability. Even so, macro catastrophes (in the mathematical sense) ultimately depend on micro rather than macro variations, again leaving us with not quite total determinism.

To my way of thinking, the presence of a gap is better explained if we make two assumptions that I do not think are tendentious: 1) that the outcome of the decision-making process is not known in advance because the decision really hasn’t been made yet and 2) that the details of the processes that perform the actual function of reaching a decision are not consciously accessible beyond the distinctive feeling (perception?) that one is thinking about the decision. When those processes converge on (arrive at) a decision, the gap is perceived to end and a high-level summary or abstract of the process becomes available, which we perceive as the reason(s) for, but not the cause(s) of, the decision taken.

Presumably, based on what we know of the brain, the underlying process is complex, highly detailed, and involves many simultaneous (parallel) deterministic (or as close to deterministic as modern physics allows) evaluations and comparisons. Consciousness, on the other hand, is, as Searle describes it, a unified field, which I take to mean that it is not well suited to simultaneous awareness of everything that determined the ultimate decision. There is a limit to the number of things (chunks; see Miller 1956) we can keep in mind at one time. Presumably, serious decision-making involves weighing too many chunkable elements for consciousness to deal with. This seems like a pretty good way for evolution to have integrated complex and sophisticated decision-making into our brains.
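
Here is a minimal sketch of that picture in code. Everything in it is invented for illustration (the options, the weights, the threshold); the point is only that many deterministic, parallel evaluations can converge on a decision while exposing nothing but a high-level summary:

```python
# Illustrative sketch only -- not a model of the brain. Several deterministic
# evaluations run "in parallel"; the gap lasts until one crosses threshold,
# and only an abstract of the process (choice plus reason) is reported.

def decide(options, evidence_weights, threshold=10.0):
    scores = {name: 0.0 for name in options}
    while True:                                      # the phenomenological "gap"
        for name in options:
            scores[name] += evidence_weights[name]   # deterministic updates
        best = max(scores, key=scores.get)
        if scores[best] >= threshold:
            # Only a summary surfaces; the detailed bookkeeping stays hidden.
            return best, f"I chose {best} because it outweighed the alternatives"

choice, reason = decide(["tea", "coffee"], {"tea": 0.8, "coffee": 1.1})
print(choice, "-", reason)   # coffee - I chose coffee because it outweighed...
```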

Where that leaves us is that we make decisions 1) precisely when we think (perceive) we are making them, and 2) on the basis of the reasons and principles we think we act on when making them. That the processes underlying our decision-making are as deterministic as physics will allow is, I think, reassuring. It seems to me that this is as good a description of free will as one could ask for. When we have to decide something, we do not suddenly go into mindless zombie slave mode during the gap and receive arbitrary instructions from some unknown free-will agency with which we have no causal physical connection. Nor is it the case that non-determinism would be desirable: to hold non-determinism to be a virtue would be to argue for randomness rather than consistency in decision-making. Rather, we simply do not have direct perceptual access to the details of the decision-making process itself.

Concept Identity vs. Concept Similarity

Thursday, February 28th, 2008

In the 2007 Pufendorf lectures, Patricia Churchland said a few things that made me stop and think. One relates to concepts and concept membership. Churchland proposed, following Rosch, that concepts are built around prototypes, that they have a “radial structure,” that they have “fuzzy borders (boundaries),” and that concept membership is a “similarity” relationship. I can arrive at a set of similar, but not identical, conclusions (to use the two hot terms in Fodor’s polemics on concepts); but I think the differences are worth elaborating.

By way of background in intellectual history (mine), I have long been troubled by an aporia in what I believe about concepts and concept membership:

A. Concepts have (as Churchland’s slide said) fuzzy borders, and that fuzziness certainly seems to be essential.

On the other hand,

B. I find Fodor’s argument for identity and against similarity to be compelling.

The problem, of course, is that A argues for similarity as the touchstone of concept membership and implies that identity is much too strict to be a useful criterion; whereas B argues that similarity is a meaningless criterion unless there is a preexisting underlying criterion of identity: if similarity requires identity, identity is the fundamental criterion.

It seems odd, however, to argue for a robust notion of identity in the context of the complex non-linear processes of the brain; and just saying “Well, that’s the way it has to be, learn to live with it” is hardly compelling. So, the first issue we have to deal with is where does identity come from? Here’s what I currently think.

It all goes back to a central fact of neuro-epistemology, to wit: the brain has no direct access to the outside world; all access is via transducers–receptors and effectors. Churchland mentioned this in one of the lectures. Thence, via Marr, Choe, Maturana & Varela, and von Foerster, I arrive at the following. In the general case, the only thing the brain can reliably identify in the brain-environment system of which it is a component is invariances, that is, invariant states. For a state to be invariant, it must be invariant under some set of operations. The particular set of operations under which a state remains unchanged is, in a real sense, the meaning of the state insofar as the brain is concerned. Nothing else can be known with certainty. Von Foerster, writing at a higher level of generality, uses the term “eigen states” to describe these meta-stable (stable over at least some period of time) states.

Von Foerster’s terminology derives from a result of matrix algebra. An arbitrary square matrix M has families of “eigenvectors”: if E is an eigenvector of M, then multiplying E by M yields k times E for some scalar k (the eigenvalue). In other words, multiplication by M takes certain vectors (its eigenvectors) into themselves up to a multiplicative constant. Von Foerster notes that the mathematics of a dynamic system is such that it has eigen states that the system maps into themselves (they are invariants of a sort); he characterizes eigen states as the way the system is able to “classify” its environment. A key result of von Foerster’s is that the eigen states of such systems are discrete and meta-stable. In the terminology of neural networks, these states are like attractor points (I am eliding some caveats, but the assertion is correct enough for the argument to stand). Like attractor points, they give the system the ability to do pattern completion.
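
To make the eigenvector relation concrete, here is a minimal numerical sketch (my illustration, not von Foerster’s): repeatedly applying a fixed linear map drives almost any starting vector toward an eigenvector, a state the map takes into itself up to a multiplicative constant, much as iterated dynamics settle into an eigen state:

```python
import numpy as np

# Power iteration: repeatedly applying M and renormalizing converges to an
# eigenvector of M -- an invariant "state" that M maps into k times itself.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # eigenvalues 3 and 1

v = np.array([0.9, -0.4])        # arbitrary starting state
for _ in range(50):
    v = M @ v
    v /= np.linalg.norm(v)       # keep only the direction

print(v)          # ~[0.707, 0.707], the eigenvector for eigenvalue 3
print(M @ v / v)  # each component ~3.0: M takes v into k * v
```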

Self-modifying systems have the diachronic ability to adaptively create (learn) new eigen states. But synchronically eigen states always have the discreteness property. Two eigen states are either identical or different. Similarity is not a characteristic of eigen states. Remind you of Fodor?

Let’s identify a concept with an eigen state. (In certain details, I think this is an oversimplification to the point of incorrectness, but I’ll hold that polemic for another time because it’s not central to this argument.) So, here we are:

Thesis: Concept similarity is at the core of concept membership; there’s no need for concept identity.

Antithesis: Concept identity is at the core of concept membership; similarity is a nonsensical thing to hang concept membership on.

Synthesis: Concepts are eigen states (states defined by sets of operations that preserve an invariant) and as such are unique and have identity conditions. The processes that work to arrive at a particular eigen state may (and probably in the brain generally do) involve completion effects that are undeniably “similarity” effects. So, at one and the same time,

1) Concepts cannot be fuzzy because eigen states are discrete

and

2) Concepts are essentially fuzzy because completion effects are always involved in arriving at them.

If you have some large enough portion of the eigen state associated with a concept, completion effects will fill in the rest and arrive at the unique eigen state (and thus the concept) itself. To the extent that completion effects vary in response to other things going on in the brain, there can be no precise specification of which patterns will or will not complete to a particular concept. This is why the merest ripple on the surface of cat-infested waters is sufficient to cause CAT thoughts and why during an invasion of robot cats from outer space, a host of cat-like creatures emerging from a flying saucer does not cause CAT thoughts.
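
For concreteness, here is a toy sketch of the synthesis (entirely my illustration; the “CAT” and “DOG” patterns are invented): a tiny Hopfield-style network whose stored patterns play the role of discrete eigen states. Similarity drives the completion process, but the endpoint it reaches has strict identity:

```python
import numpy as np

# Two stored patterns act as discrete "eigen states" / attractors.
patterns = np.array([
    [1, -1,  1, -1,  1, -1],   # "CAT"
    [1,  1, -1, -1,  1,  1],   # "DOG"
])

# Hebbian weights; zero the diagonal so units do not self-excite.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def complete(state, steps=10):
    """Pattern completion: similarity-driven updates reach one exact attractor."""
    state = np.array(state, dtype=float)
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1.0, -1.0)  # synchronous update
    return state

probe = [1, -1, 1, 0, 0, 0]   # partial, "similar-to-CAT" input
print(complete(probe))        # -> [ 1. -1.  1. -1.  1. -1.], exactly CAT
```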

So much for concept similarity versus concept identity.

Consciousness: Seeing yourself in the third person

Thursday, February 28th, 2008

Re: Patricia Churchland’s presentation on December 1, 2005 at the Inaugural Symposium of the Picower Institute at MIT. Two things Churchland said (at least according to my notes) lead me to an interesting take on the phenomenology of the self. She noted that the brain, without access to anything but its inputs and its outputs, builds a model of the external world that includes a model of itself in the external world. She also noted (or was it Christof Koch?) in the Q&A period that there may be some advantage to a neural structure or system that “believes” it is the author of certain actions and behaviors; and there may be some advantage to an organism that includes such a neural structure or system.

Here’s where that takes me. Churchland pointed out that, ceteris paribus, selection favors organisms with better predictive ability. So, the ability to predict and/or reliably affect (relevant aspects of) the behavior of the outside world arises over the course of evolution. In particular, the need to predict (model) the behavior of conspecifics, and the development of the ability to do so, has significant favorable consequences. The ability to predict and/or reliably affect (relevant aspects of) the behavior of conspecifics includes the ability to predict interactions among conspecifics (from a third-party perspective).

Once there is a model that predicts the behavior of conspecifics, there is a model that could be applied to predicting one’s own behavior from a third-party perspective, as if one were an external conspecific.

One may suppose that the model of conspecific behavior that arises phylogenetically in the brain consists in the activity of processes distinct from the phylogenetically established brain processes that internally propose and select among courses of action. That being the case, the model of conspecific behavior constitutes an additional (and at least in some ways independent) source of information about one’s own behavior, information that could be used to improve one’s ability to predict and reliably affect the behavior of the world (thus improving one’s fitness).

I take it as given that independently evolved and epigenetically refined processes that internally propose and select among alternative courses of action take as inputs information about the internal state of the organism and information about the external (black box) world. I further take it that one’s own behavior has effects that can and ought to be predicted. Thus, one’s own behavior should be an input to the system(s) that internally propose and select courses of action.

Now, information about one’s own behavior can be made available within the brain via (at least) two possible routes (a schematic sketch follows the list):

(1) Make available (feed back) in some form an advance or contemporaneous statement of the behavior the brain intends to (is about to, may decide to) perform (close the loop internally).

(2) Observe one’s own behavior and process it via the system whose (primary, original) purpose is to predict the behavior of others (close the loop externally).
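
Here is a schematic sketch of the two routes (entirely my illustration; the class and method names are invented for exposition, not drawn from any neuroscience):

```python
# Schematic sketch only; all names here are invented for exposition.

class ConspecificModel:
    """Predictor originally evolved for the behavior of others."""
    def predict(self, behavior, internal_hints=None):
        # Internally supplied information is processed as if it came from
        # outside, so it must remain distinguishable (tagged) as internal.
        source = "internal" if internal_hints is not None else "external"
        return f"prediction from {source} evidence about {behavior!r}"

class Agent:
    def __init__(self):
        self.model = ConspecificModel()

    def act(self, intention):
        # Route 1: close the loop internally -- feed back an advance copy
        # of the intended behavior.
        internal_report = self.model.predict(intention, internal_hints=intention)
        behavior = intention  # the intention is acted out
        # Route 2: close the loop externally -- observe one's own behavior
        # through the system that predicts conspecifics.
        external_report = self.model.predict(behavior)
        return internal_report, external_report

print(Agent().act("reach for the cup"))
```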

Assuming, as proposed above, that the total information available from both routes together is greater than the information available from either one alone, selection favors an organism that is able to use information from both sources. However, there is little point in developing (i.e., evolving) a separate system to model (predict) one’s own behavior within an organism that already has a system to predict a conspecific’s behavior on the basis of observables. It is better to adapt (exapt?) the existing system.

But, note: certain information that must be inferred (abduced) from external inputs about conspecifics, and is thus inherently subject to uncertainty, is available more reliably from within the brain when the behavior in question is one’s own. It is thus advantageous to add a facility that translates internally available information into a form usable within the model and provides it as additional input to the conspecific model.

To the extent that the model preserves its significance as a model of external behavior, as extracted from the external (black box) world, internally provided information will be processed as if it came from outside. But such internally provided information is different in that it actually originated inside. Thus, it needs to be distinguished (distinguishable) from the information that really does come from outside.

The significant consequence of the preceding is that the introduction, as a matter of evolutionary expediency, of internally originating information into a system originally evolved to model the external behavior of conspecifics results in a model that treats the organism itself as if its own agency originated externally, literally outside the brain. This formulation is remarkably similar to some characterizations of the phenomenology of self-consciousness.

Once such a system is in place, evolutionary advances in the sophistication of the (externally shaped) model of (in particular) conspecifics can take advantage of and support the further development of the ability to literally re-present internal information as if it originated externally.

There is nothing in the preceding that requires uniquely human abilities. Accordingly, one may or may not wish to call this “self-consciousness,” although I might be willing to do so and keep a straight face.