Archive for the ‘free will’ Category

Free Will Examined Further

Friday, April 20th, 2012

With respect to free will: lots of philosophers and scientists (including me in a previous incarnation, having since seen the error of my ways) look to quantum effects as a way to square a completely physical universe with the possibility of free will. As I understand it, quantum phenomena are deterministic in the sense that something determinate has to happen as the end result of the collapse of the quantum wave function. Before the collapse we have a determinate probability density function. I take this to be the unvarnished meaning of Kauffman’s remark that “the quantum-classical boundary [is] non-random yet lawless.”

I agree that this implies that it is literally the case that “no algorithmic simulation of the world or ourselves can calculate the real world.” As my friend Mitchell has pointed out to me, infinite precision is not possible because of uncertainty constraints. Either one believes in a hidden variable theory of quantum mechanics or one does not. If one does, then we’re back to plain vanilla determinism, and maybe uncertainty goes away, too. If one does not, then things are still deterministic with a dash of probability thrown in, the effect of which, no matter how “lawful,” succeeds only in constraining the randomness a bit—and all of that subject to uncertainty limitations.
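
For concreteness, the textbook no-hidden-variables picture I am leaning on here can be put in two lines (a sketch of the standard formalism, not an endorsement of any particular interpretation): the state evolves deterministically, and only the statistics of measurement outcomes are fixed in advance; which outcome actually occurs is not.

    i\hbar \, \frac{d}{dt}\lvert\psi(t)\rangle = \hat{H}\,\lvert\psi(t)\rangle    % unitary evolution: fully deterministic
    P(k) = \lvert\langle k \mid \psi\rangle\rvert^{2}    % Born rule: determinate probabilities, indeterminate individual outcome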

I don’t think randomness, even randomness selected from a deterministic probability density function, helps a free will argument at all. What we want is responsibility, not random behavior. The only way I have ever seen quantum indeterminacy used as an argument for the possibility of free will is as part of a dualistic program in which mind and the physical universe are distinct. The idea seems to be that the mind gets to tweak quantum outcomes and that this is enough to guarantee freedom and responsibility. Too much hand waving at too small a scale, I say. I don’t believe it for a second.

John Searle, in his (2007) book Freedom & Neurobiology, worries about the philosophical consequences of physical determinism, too. Searle says (p. 64) that the conscious, voluntary decision-making aspects of the brain are not deterministic, in effect asserting, for our purposes, that if there is an algorithm that describes conscious, voluntary decision-making processes, it must be (at least perceived as) non-deterministic. Although it would be possible to extend the definition of an algorithm to include non-deterministic processes, the prospect is distasteful at best. How can we respond to this challenge?

Searle reasons (p.57) that

We have the first-person conscious experience of acting on reasons. We state these reasons in the form of explanations. [T]hey are not of the form A caused B. They are of the form, a rational self S performed act A, and in performing A, S acted on reason R.

He further remarks (p. 42) that an essential feature of voluntary decision-making is the readily perceivable presence of a gap:

In typical cases of deliberating and acting, there is a gap, or a series of gaps between the causes of each stage in the processes of deliberating, deciding and acting, and the subsequent stages.

Searle feels the need to interpret this phenomenological gap as the point at which non-determinism is required in order for free will to assert itself.

Searle takes a non-determinist position in respect of free will as his response to the proposition that in theory absolutely everything is and always has been determined at the level of physical laws.

If the total state of Paris’s brain at t1 is causally sufficient to determine the total state of his brain at t2, in this and in other relevantly similar cases, then he has no free will. (p. 61)

As noted above, the literal total determinism position is formally untenable, and a serious discussion requires assessing how much determinism there actually is. As my friend Mitchell also points out, in neuro-glial systems, whether an active element fires (depolarizes) or not may be determined by precisely when a particular calcium ion arrives, a fact that ultimately depends on quantum mechanical effects. On the other hand, Edelman and Gally (2001) have observed that real world neuro-glial systems exhibit degeneracy, which is to say that, at some suitable macro level of detail, equivalent responses eventuate from a range of non-equivalent stimulation patterns. This would tend to iron out at a macro level the effects of micro level quantum variability. Even so, macro catastrophes (in the mathematical sense) ultimately depend on micro rather than macro variations, again leaving us with not quite total determinism.
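
To see what “ironing out at a macro level” might look like, here is a deliberately minimal toy sketch (my own construction, not Edelman and Gally’s model, with arbitrary numbers throughout): several non-equivalent micro-level input patterns, each jittered by tiny “quantum scale” noise, are pushed through a simple threshold, and the macro response comes out the same for all of them; only a pattern sitting right at the threshold (the catastrophe case) lets the micro noise show through.

    import random

    THRESHOLD = 5.0  # macro-level firing threshold (arbitrary units)

    def macro_response(micro_inputs, noise_scale=0.01):
        """Return 1 if the summed, slightly noisy micro inputs cross the threshold."""
        total = sum(x + random.gauss(0, noise_scale) for x in micro_inputs)
        return 1 if total >= THRESHOLD else 0

    patterns = [
        [1.0, 2.0, 3.0],    # pattern A: non-equivalent micro patterns...
        [2.0, 2.0, 2.5],    # pattern B: ...that all land comfortably above threshold
        [0.5, 0.5, 5.5],    # pattern C
        [2.0, 1.5, 1.49],   # pattern D: sums to just under threshold (the sensitive case)
    ]

    for p in patterns:
        fires = sum(macro_response(p) for _ in range(1000))
        print(p, "-> fires on", fires, "of 1000 trials")
    # A, B, and C fire on essentially every trial despite differing micro details;
    # only D, poised at the threshold, has its outcome decided by the micro-level noise.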

To my way of thinking, the presence of Searle’s gap is better explained if we make two assumptions that I do not think to be tendentious: 1) that the outcome of the decision-making process is not known in advance because the decision really hasn’t been made yet and 2) that details of the processes that perform the actual function of reaching a decision are not consciously accessible beyond the distinctive feeling (perception?) that one is thinking about the decision. When those processes converge on, arrive at, a decision, the gap is perceived to end and a high-level summary or abstract of the process becomes available, which we perceive as the reason(s) for, but not cause(s) of, the decision taken.

Presumably, based on what we know of the brain, the underlying process is complex, highly detailed and involves many simultaneous (parallel) deterministic (or as close to deterministic as modern physics allows) evaluations and comparisons. Consciousness, on the other hand, is as Searle describes it a unified field, which I take to mean that it is not well-suited to comprehend, deal with, simultaneous awareness of everything that determined the ultimate decision. There is a limit to the number of things (chunks, see Miller 1956) we can keep in mind at one time. Presumably, serious decision-making involves weighing too many chunkable elements for consciousness to deal with. This seems like a pretty good way for evolution to have integrated complex and sophisticated decision-making into our brains.
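
A toy illustration of that division of labor (purely illustrative: the factors, weights, and the seven-item chunk limit after Miller are all made up by me): many weighted factors are summed “in parallel,” but only a short, chunk-limited list of the strongest contributors is handed back as the consciously available reasons.

    # Toy decision: each factor carries a signed weight; the decision is the sign of the sum.
    factors = {
        "salary": 3.5, "team": 2.0, "growth": 1.5, "benefits": 0.7, "location": 0.5,
        "prestige": 0.4, "hours": -0.3, "risk": -0.8, "commute": -1.0, "cost": -2.0,
    }

    CHUNK_LIMIT = 7  # rough working-memory limit (Miller 1956)

    decision = "accept" if sum(factors.values()) > 0 else "decline"

    # Consciousness receives only an abstract of the strongest contributors,
    # not the full parallel computation that actually settled the matter.
    perceived_reasons = sorted(factors, key=lambda k: abs(factors[k]), reverse=True)[:CHUNK_LIMIT]

    print("decision:", decision)
    print("perceived reasons:", perceived_reasons)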

That the processes underlying our decision-making are as deterministic as physics will allow is, I think, reassuring. We make decisions 1) precisely when we think (perceive) we are making them, and 2) on the basis of the reasons and principles we think we act on when making them. It seems to me that this is just what we want from free will. After all, when we say we have free will, we mean that our decisions are the result of who we are, which is in turn the result of several billion years of history in our genes combined with our epigenetic encounters with the world in the form of our own personal histories. If we have formed a moral character, that is where it has come from. When we have to decide something, we do not just suddenly go into mindless zombie slave mode during the gap and receive arbitrary instructions from some unknown free-will agency with which we have no causal physical connection. Rather, we consider the alternatives and somehow arrive at a decision. Nor would it be desirable that the process be non-deterministic in any macro sense. To hold non-determinism to be a virtue would be to argue for the desirability of randomness rather than consistency in decision-making. We do not have direct perceptual access to the details of that process’s functioning, but I do not doubt that what we have is everything one could desire of free will.

[My notes show that this entry dates from July 27, 2009]

Free Will, Searle, and Determinism

Thursday, February 28th, 2008

Apropos of determinism: I recently looked into John Searle’s latest (2007) book, Freedom & Neurobiology. As usual, he gets his knickers into the traditional twist that comes from being a physical determinist and an unacknowledged romantic dualist. In this connection, the following line of reasoning occurred to me.

Searle says (p. 64) that the conscious, voluntary decision-making aspects of the brain are not deterministic, in effect asserting, for our purposes, the following: if there is an algorithm that describes conscious, voluntary decision-making processes, it must be (at least perceived as) non-deterministic. Although it would be possible to extend the definition of an algorithm to include non-deterministic processes, the prospect is distasteful at best. How can we respond to this challenge? Searle reasons (p. 57) that

We have the first-person conscious experience of acting on reasons. We state these reasons in the form of explanations. [T]hey are not of the form A caused B. They are of the form, a rational self S performed act A, and in performing A, S acted on reason R.

He further remarks (p. 42) that an essential feature of voluntary decision-making is the readily perceivable presence of a gap:

In typical cases of deliberating and acting, there is a gap, or a series of gaps between the causes of each stage in the processes of deliberating, deciding and acting, and the subsequent stages.

Searle feels the need to interpret this phenomenological gap as the point at which non-determinism is required in order for free will to assert itself.

Searle’s non-determinist position in respect of free will is his response to the proposition that in theory absolutely everything is and always has been determined at the level of physical laws. “If the total state of Paris’s brain at t1 is causally sufficient to determine the total state of his brain at t2, in this and in other relevantly similar cases, then he has no free will.” (p. 61) By way of mitigation, however, note that quantum mechanical effects render the literal total determinism position formally untenable, and a serious discussion requires assessing how much determinism there actually is. As Mitchell Lazarus pointed out to me, in neuro-glial systems, whether an active element fires (depolarizes) or not may be determined by precisely when a particular calcium ion arrives, a fact that ultimately depends on quantum mechanical effects. On the other hand, Edelman and Gally (2001) have observed that real world neuro-glial systems exhibit degeneracy, which is to say that algorithmically (at some level of detail) equivalent consequences may result from a range of stimulation patterns. This would tend to iron out at a macro level the effects of micro level quantum variability. Even so, macro catastrophes (in the mathematical sense) ultimately depend on micro rather than macro variations, again leaving us with not quite total determinism.

To my way of thinking, the presence of a gap is better explained if we make two assumptions that I do not think to be tendentious: 1) that the outcome of the decision-making process is not known in advance because the decision really hasn’t been made yet and 2) that details of the processes that perform the actual function of reaching a decision are not consciously accessible beyond the distinctive feeling (perception?) that one is thinking about the decision. When those processes converge on, arrive at, a decision, the gap is perceived to end and a high-level summary or abstract of the process becomes available, which we perceive as the reason(s) for, but not cause(s) of, the decision taken.

Presumably, based on what we know of the brain, the underlying process is complex, highly detailed and involves many simultaneous (parallel) deterministic (or as close to deterministic as modern physics allows) evaluations and comparisons. Consciousness, on the other hand, is as Searle describes it a unified field, which I take to mean that it is not well-suited to comprehend, deal with, simultaneous awareness of everything that determined the ultimate decision. There is a limit to the number of things (chunks, see Miller 1956) we can keep in mind at one time. Presumably, serious decision-making involves weighing too many chunkable elements for consciousness to deal with. This seems like a pretty good way for evolution to have integrated complex and sophisticated decision-making into our brains.

Where that leaves us is that we make decisions 1) precisely when we think (perceive) we are making them, and 2) on the basis of the reasons and principles we think we act on when making them. That the processes underlying our decision-making are as deterministic as physics will allow is, I think, reassuring. It seems to me that this is as good a description of free will as one could ask for. When we have to decide something, we do not just suddenly go into mindless zombie slave mode during the gap and receive arbitrary instructions from some unknown free-will agency with which we have no causal physical connection. Nor would it be desirable for the process to be non-deterministic. To hold non-determinism to be a virtue would be to argue for randomness rather than consistency in decision-making. Rather, we simply do not have direct perceptual access to the details of its functioning.

030103 – Consciousness and the Self

Friday, January 3rd, 2003


To the extent that humans (or any beings as yet unknown to us, like space aliens, say, or sophisticated AIs) have any views at all on the topic, they will believe that they have free will. The argument is relatively simple: I believe that any intelligent being will have as a part of its intelligence an internal model of the physical universe (to whatever level of detail is appropriate) that it uses consciously to assess possible courses of action in anticipation of selecting one for execution. Implied in such a model is a model of the being itself. This enables analyses of the form, “If I do X, how will I feel about that?”

The model of the self must be contained in the organism as must the larger model of the physical universe.  This ensures that the model cannot model the organism itself with complete accuracy.  To do so would require that the model of the organism include a model of the model of the organism and that model would in turn have to contain a model of the organism and so on ad infinitum.  Thus, the model of the self cannot be 100% accurate.  In effect, one will make inaccurate predictions about one’s own behavior.  Stated another way, no one can know with absolute certainty what he or she will do in a particular set of circumstances.  We experience this as making up our minds at the last minute or as having free will.
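
The regress is easy to make vivid in a few lines of code (a sketch of the argument only, with a wholly invented representation, not a claim about how brains store self-models): a perfectly accurate self-model would have to contain a model of itself, and so on without end, so any model that actually fits inside the organism must be cut off at some finite depth, and predictions made from it can therefore be wrong.

    def build_self_model(depth):
        """A self-model that tries to include a model of itself, truncated at a finite depth."""
        if depth == 0:
            return {"model_of_self": "...omitted..."}  # the inevitable cut-off
        return {"model_of_self": build_self_model(depth - 1)}

    # Any finite organism must stop somewhere; a fully accurate self-model would have to
    # recurse forever, which no bounded system can do.
    print(build_self_model(3))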

The Mind – The Inner Voice

It is by no means clear or self-evident why each of us should have within us a voice that we use sometimes for the purpose of planning things and sometimes for the purpose of commenting on the world around or within us. Up to the present, all reports of this inner voice have been subjective. It is interesting to speculate that there may come a time when brain activity recording will become sufficiently sensitive and sophisticated to enable us to identify and even record in some way the “utterances” of this voice. I rush to assert, however, that we are a long way, measured in decades, from the ability to listen in on the contents of someone else’s comic book thought balloons.

So, for the time being, the little voice remains private to each individual.

Why do we “hear” this voice?  We know that it is not external.  It does not make a sound.  What do we know about it?  It speaks in whatever language we choose to have it speak.  Sometimes it is silent.  Sometimes it is next to impossible to make it be silent, as for example when it decides in the middle of the night to rehearse all the things you should have said to whomever it was you should have said them to.  When you write something, it says the words to you and you transfer them to the paper or type them at the computer as or just after it says them.

Systems Analysis

Suppose, evolutionarily, that we are modifying an organism that operates purely in a simple stimulus-response fashion (whatever that means). We want to improve it in such a way that it can “anticipate” or “plan ahead” in some sense. A reasonably parsimonious approach might be to recruit brain structures to produce internal representations of possible future states and inject them into the stimulus-response arc (decision-making system) as additional inputs that would be distinguishable from direct real-world inputs, but would somehow carry at least some of the weight of current real-world inputs.

In general, the organism should not confuse these forward-looking inputs with real-world inputs.  Dreams should not be confused with reality.

In the simplest form, such a system would give the organism the ability to perform Gedanken experiments on its environment.  That is, instead of physically trying a strategy to determine its outcome, the organism would be able to “imagine” the outcome and evaluate it against other possible strategies and outcomes.

To accomplish this requires an internal model of the external environment, one that models physical objects and, at least to some extent, their relevant physical properties.
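
A minimal sketch of the arrangement just described (every name and number here is mine, invented only to make the architecture concrete): candidate actions are run through the internal model, the resulting predictions are tagged as imagined rather than real and given reduced weight, and only then is an action selected for execution in the world.

    # Toy internal model: the organism's predicted payoff for each candidate action.
    world_model = {"approach": 2.0, "wait": 0.5, "flee": -1.0}

    IMAGINED_WEIGHT = 0.6  # imagined inputs carry some, but not all, of the weight of real ones

    def imagine(action):
        """Run the action through the internal model instead of the real world."""
        return {"source": "imagined", "payoff": world_model[action]}

    def evaluate(percept):
        # Inputs tagged as imagined are distinguishable from, and weigh less than, real ones.
        weight = IMAGINED_WEIGHT if percept["source"] == "imagined" else 1.0
        return weight * percept["payoff"]

    # The Gedanken experiment: try each strategy internally, then act on the best one.
    best_action = max(world_model, key=lambda a: evaluate(imagine(a)))
    print("chosen action:", best_action)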

When I first wrote the above, I had not thought much about the nature of the model that is required.  It would seem that the model, if it is “automatic” or “unconscious” (which is what I think it’s reasonable to assume was true at least initially in evolutionary terms), must be of the PHEPH (post hoc ergo propter hoc) type that is easy for neuro-glial circuits to implement.
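
Read that way, a PHEPH-style model amounts to little more than keeping counts of what tends to follow what and treating succession as causation, which is the sort of bookkeeping simple circuits could plausibly do. A hedged toy version (entirely my construction):

    from collections import defaultdict

    # Count how often event b immediately follows event a; predict the most frequent successor.
    succession_counts = defaultdict(lambda: defaultdict(int))

    def observe(sequence):
        for a, b in zip(sequence, sequence[1:]):
            succession_counts[a][b] += 1

    def predict(event):
        followers = succession_counts[event]
        return max(followers, key=followers.get) if followers else None

    observe(["clouds", "rain", "wet ground", "clouds", "rain", "wet ground", "clouds", "wind"])
    print(predict("clouds"))  # -> "rain": succession read, post hoc ergo propter hoc, as causation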

It is advantageous to an organism to be able to abstract invariants from the environment, e.g., object constancy in the presence of partial visual occlusions and in the presence of changes in appearance resulting from viewpoint and changes in the object itself.

Language plays an interesting role in consciousness.  Language serves as a communications medium among humans.  Language is a way of signaling one person’s internal state to another.  Internally, language plays a role in representing concepts to our internal decision-making system.  We “talk to ourselves” (out loud or subvocally) to give ourselves advice or to explore abstract alternatives.

Things we say to ourselves are often things another person might say to us, e.g., “I don’t think this is such a good idea.”  In effect, our language ability is used in two different ways: to communicate with others and to communicate with ourselves.