Archive for the ‘consciousness’ Category

Free Will Examined Further

Friday, April 20th, 2012

With respect to free will: lots of philosophers and scientists (including me in a previous incarnation, having since seen the error of my ways) look to quantum effects as a way to square a completely physical universe with the possibility of free will. As I understand it, quantum phenomena are deterministic in the sense that something determinate has to happen as the end result of the collapse of the quantum wave function. Before the collapse we have a determinate probability density function. I take this to be the unvarnished meaning of Kauffman’s remark that “the quantum-classical boundary [is] non-random yet lawless.”

I agree that this implies that it is literally the case that “no algorithmic simulation of the world or ourselves can calculate the real world.” As my friend Mitchell has pointed out to me, infinite precision is not possible because of uncertainty constraints. Either one believes in a hidden-variable theory of quantum mechanics or one does not. If one does, then we’re back to plain vanilla determinism and maybe uncertainty goes away, too. If one does not, then things are still deterministic with a dash of probability thrown in, the effect of which, no matter how “lawful”, succeeds only in constraining the randomness a bit—and all of that subject to uncertainty limitations.

I don’t think randomness, even randomness selected from a deterministic probability density function, helps a free will argument at all. What we want is responsibility, not random behavior. The only way I have ever seen quantum indeterminacy used as an argument for the possibility of free will is as part of a dualistic program in which mind and the physical universe are distinct. The idea seems to be that the mind gets to tweak quantum outcomes and that is enough to guarantee freedom and responsibility. Too much hand waving at too small a scale, I say. I don’t believe it for a second.

John Searle, in his (2007) book Freedom & Neurobiology, worries about the philosophical consequences of physical determinism, too.  Searle says (p.64) that the conscious, voluntary decision-making aspects of the brain are not deterministic, in effect for our purposes asserting that if there is an algorithm that describes conscious, voluntary decision-making processes, it must be (at least perceived as) non-deterministic. Although it would be possible to extend the definition of an algorithm to include non-deterministic processes, the prospect is distasteful at best. How can we respond to this challenge?

Searle reasons (p.57) that

We have the first-person conscious experience of acting on reasons. We state these reasons in the form of explanations. [T]hey are not of the form A caused B. They are of the form, a rational self S performed act A, and in performing A, S acted on reason R.

He further remarks (p.42) that an essential feature of voluntary decision-making is the readily-perceivable presence of a gap:

In typical cases of deliberating and acting, there is a gap, or a series of gaps between the causes of each stage in the processes of deliberating, deciding and acting, and the subsequent stages.

Searle feels the need to interpret this phenomenological gap as the point at which non-determinism is required in order for free will to assert itself.

Searle takes a non-determinist position in respect of free will as his response to the proposition that in theory absolutely everything is and always has been determined at the level of physical laws.

If the total state of Paris’s brain at t1 is causally sufficient to determine the total state of his brain at t2, in this and in other relevantly similar cases, then he has no free will. (p. 61)

As noted above, the literal total determinism position is formally untenable and a serious discussion requires assessing how much determinism there actually is. As my friend Mitchell also points out, in neuro-glial systems, whether an active element fires (depolarizes) or not may be determined by precisely when a particular calcium ion arrives, a fact that ultimately depends on quantum mechanical effects. On the other hand, Edelman and Gally 2001 have observed that real world neuro-glial systems exhibit degeneracy, which is to say that at some suitable macro level of detail equivalent responses eventuate from a range of non-equivalent stimulation patterns. This would tend to iron out at a macro level the effects of micro level quantum variability. Even so, macro catastrophes (in the mathematical sense) ultimately depend on micro rather than macro variations, again leaving us with not quite total determinism.
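
To make the degeneracy point concrete, here is a toy illustration (all numbers invented, and no claim that real neuro-glial elements work this simply): a threshold unit fires whenever its summed input exceeds a fixed level, so many non-equivalent stimulation patterns yield the same macro response, and a micro-level perturbation only changes the outcome when the sum happens to sit right at the threshold.

```python
import random

THRESHOLD = 5.0   # invented firing threshold for a toy "neuro-glial" unit

def macro_response(inputs, jitter=0.0):
    """Fire (True) if the summed input, plus a tiny micro-level
    perturbation, crosses the threshold; otherwise don't (False)."""
    return sum(inputs) + jitter > THRESHOLD

# Degeneracy: non-equivalent stimulation patterns, same macro response.
patterns = [(3.0, 3.0), (1.0, 5.5), (2.0, 2.0, 2.5)]
print([macro_response(p) for p in patterns])          # all True

# Micro-level variability is usually ironed out at the macro level...
print(macro_response((3.0, 3.0), jitter=random.uniform(-0.01, 0.01)))   # True

# ...except right at the threshold, where a micro difference flips the outcome.
print(macro_response((2.5, 2.5), jitter=+0.01))       # True
print(macro_response((2.5, 2.5), jitter=-0.01))       # False
```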

To my way of thinking, the presence of Searle’s gap is better explained if we make two assumptions that I do not think to be tendentious: 1) that the outcome of the decision-making process is not known in advance because the decision really hasn’t been made yet and 2) that details of the processes that perform the actual function of reaching a decision are not consciously accessible beyond the distinctive feeling (perception?) that one is thinking about the decision. When those processes converge on, arrive at, a decision, the gap is perceived to end and a high-level summary or abstract of the process becomes available, which we perceive as the reason(s) for, but not cause(s) of, the decision taken.

Presumably, based on what we know of the brain, the underlying process is complex, highly detailed and involves many simultaneous (parallel) deterministic (or as close to deterministic as modern physics allows) evaluations and comparisons. Consciousness, on the other hand, is as Searle describes it a unified field, which I take to mean that it is not well-suited to comprehend, deal with, simultaneous awareness of everything that determined the ultimate decision. There is a limit to the number of things (chunks, see Miller 1956) we can keep in mind at one time. Presumably, serious decision-making involves weighing too many chunkable elements for consciousness to deal with. This seems like a pretty good way for evolution to have integrated complex and sophisticated decision-making into our brains.

That the processes underlying our decision-making are as deterministic as physics will allow is, I think, reassuring. We make decisions 1) precisely when we think (perceive) we are making them, 2) on the basis of the reasons and principles we think we act on when making them. It seems to me that this is just what we want from free will. After all, when we say we have free will, we mean that our decisions are the result of who we are, which is in turn the result of several billion years of history in our genes combined with our epigenetic encounters with the world in the form of our own personal histories. If we have formed a moral character, that is where it has come from. When we have to decide something, we do not just suddenly go into mindless zombie slave mode during the gap and receive arbitrary instructions from some unknown free-will agency with which we have no causal physical connection. Rather, we consider the alternatives and somehow arrive at a decision. Nor would it be desirable that the process be non-deterministic in any macro sense. To hold non-determinism to be a virtue would be to argue for the desirability of randomness rather than consistency in decision-making. We do not have direct perceptual access to the details of its functioning, but I do not doubt that what we have is everything one could desire of free will.

[My notes show that this entry dates from July 27, 2009]

Free Will, Searle, and Determinism

Thursday, February 28th, 2008

Apropos of determinism: I recently looked into John Searle’s latest (2007) book, Freedom & Neurobiology. As usual, he gets his knickers into the traditional twist that comes from being a physical determinist and an unacknowledged romantic dualist. In this connection, the following line of reasoning occurred to me.

Searle says (p.64) that the conscious, voluntary decision-making aspects of the brain are not deterministic, in effect for our purposes asserting the following. If there is an algorithm that describes conscious, voluntary decision-making processes, it must be (at least perceived as) non-deterministic. Although it would be possible to extend the definition of an algorithm to include non-deterministic processes, the prospect is distasteful at best. How can we respond to this challenge? Searle reasons (p.57) that

We have the first-person conscious experience of acting on reasons. We state these reasons in the form of explanations. [T]hey are not of the form A caused B. They are of the form, a rational self S performed act A, and in performing A, S acted on reason R.

He further remarks (p.42) that an essential feature of voluntary decision-making is the readily-perceivable presence of a gap:

In typical cases of deliberating and acting, there is a gap, or a series of gaps between the causes of each stage in the processes of deliberating, deciding and acting, and the subsequent stages.

Searle feels the need to interpret this phenomenological gap as the point at which non-determinism is required in order for free will to assert itself.

Searle’s non-determinist position in respect of free will is his response to the proposition that in theory absolutely everything is and always has been determined at the level of physical laws. “If the total state of Paris’s brain at t1 is causally sufficient to determine the total state of his brain at t2, in this and in other relevantly similar cases, then he has no free will.” (p. 61) By way of mitigation, however, note that quantum mechanical effects render the literal total determinism position formally untenable and a serious discussion requires assessing how much determinism there actually is. As Mitchell Lazarus pointed out to me, in neuro-glial systems, whether an active element fires (depolarizes) or not may be determined by precisely when a particular calcium ion arrives, a fact that ultimately depends on quantum mechanical effects. On the other hand, Edelman and Gally 2001 have observed that real world neuro-glial systems exhibit degeneracy, which is to say that algorithmically (at some level of detail) equivalent consequences may result from a range of stimulation patterns. This would tend to iron out at a macro level the effects of micro level quantum variability. Even so, macro catastrophes (in the mathematical sense) ultimately depend on micro rather than macro variations, again leaving us with not quite total determinism.

To my way of thinking, the presence of a gap is better explained if we make two assumptions that I do not think to be tendentious: 1) that the outcome of the decision-making process is not known in advance because the decision really hasn’t been made yet and 2) that details of the processes that perform the actual function of reaching a decision are not consciously accessible beyond the distinctive feeling (perception?) that one is thinking about the decision. When those processes converge on, arrive at, a decision, the gap is perceived to end and a high-level summary or abstract of the process becomes available, which we perceive as the reason(s) for, but not cause(s) of, the decision taken.

Presumably, based on what we know of the brain, the underlying process is complex, highly detailed and involves many simultaneous (parallel) deterministic (or as close to deterministic as modern physics allows) evaluations and comparisons. Consciousness, on the other hand, is as Searle describes it a unified field, which I take to mean that it is not well-suited to comprehend, deal with, simultaneous awareness of everything that determined the ultimate decision. There is a limit to the number of things (chunks, see Miller 1956) we can keep in mind at one time. Presumably, serious decision-making involves weighing too many chunkable elements for consciousness to deal with. This seems like a pretty good way for evolution to have integrated complex and sophisticated decision-making into our brains.

Where that leaves us is that we make decisions 1) precisely when we think (perceive) we are making them, 2) on the basis of the reasons and principles we think we act on when making them. That the processes underlying our decision-making are as deterministic as physics will allow is, I think, reassuring. It seems to me that this is as good a description of free will as one could ask for. When we have to decide something, we do not just suddenly go into mindless zombie slave mode during the gap and receive arbitrary instructions from some unknown free-will agency with which we have no causal physical connection. Nor is it the case that it is desirable that the process be non-deterministic. To hold non-determinism to be a virtue would be to argue for randomness rather than consistency in decision-making. Rather, we simply do not have direct perceptual access to the details of its functioning.

Consciousness: Seeing yourself in the third person

Thursday, February 28th, 2008

Re: Patricia Churchland’s presentation on December 1, 2005 at the Inaugural Symposium of the Picower Institute at MIT. Two things Churchland said (at least according to my notes) lead me to an interesting take on the phenomenology of the self. She noted that the brain, without access to anything but its inputs and its outputs, builds a model of the external world that includes a model of itself in the external world. She also noted (or was it Christof Koch?) in the Q&A period that there may be some advantage to a neural structure or system that “believes” it is the author of certain actions and behaviors; and there may be some advantage to an organism that includes such a neural structure or system.

Here’s where that takes me. Churchland pointed out that, ceteris paribus, selection favors organisms with better predictive ability. So, the ability to predict and/or reliably affect (relevant aspects of) the behavior of the outside world arises over the course of evolution. In particular, the need to predict (model) the behavior of conspecifics, and the development of the ability to do so, has significant favorable consequences. The ability to predict and/or reliably affect (relevant aspects of) the behavior of conspecifics includes the ability to predict interactions among conspecifics (from a third-party perspective).

Once there is a model that predicts the behavior of conspecifics, there is a model that could be applied to predict one’s own behavior from a third-party perspective, as if one were an external conspecific.

One may suppose that the model of conspecific behavior that arises phylogenetically in the brain consists in the activity of different processes from the phylogenetically established brain processes that internally propose and select among courses of action. That being the case, the model of conspecific behavior constitutes an additional (at least in some ways independent) source of information about one’s own behavior, information that could be used to improve one’s ability to predict and reliably affect the behavior of the world (thus improving one’s fitness).

I take it as given that independently evolved and epigenetically refined processes that internally propose and select among alternative courses of action take as inputs information about the internal state of the organism and information about the external (black box) world. I further take it that one’s own behavior has effects that can and ought to be predicted. Thus, one’s own behavior should be an input to the system(s) that internally propose and select courses of action.

Now, information about one’s own behavior can be made available within the brain via (at least) two possible routes:

(1) Make available (feed back) in some form an advance or contemporaneous statement of the behavior the brain intends to (is about to, may decide to) perform (close the loop internally).

(2) Observe one’s own behavior and process it via the system whose (primary, original) purpose is to predict the behavior of others (close the loop externally).

Assuming, as proposed above, that the total information available from both routes together is greater than the information available from either one alone, selection favors an organism that is able to use information from both sources. However, there is little point to developing (i.e., evolving) a separate system to model (predict) one’s own behavior within an organism that already has a system to predict a conspecific’s behavior on the basis of observables. It is better to adapt (exapt?) the existing system.
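
To make the two routes concrete, here is a minimal sketch (every name in it is mine, invented for illustration; no claim that the brain computes this way): the same conspecific-prediction model is fed either an internally available copy of the intended act (route 1) or one’s own externally observed behavior (route 2), and an organism that can use both ends up with an estimate at least as good as either route alone.

```python
# Minimal sketch (all names hypothetical) of the two routes described above:
# (1) close the loop internally with an advance copy of the intended act;
# (2) close the loop externally by running one's own observed behavior
#     through the same model used to predict conspecifics.

def conspecific_model(observed_behavior):
    """Stand-in for the evolved model that predicts another animal's next
    move from observables alone; abduced from outside, hence uncertain."""
    return {"next_act": f"continue {observed_behavior}", "confidence": 0.6}

def internal_report(intended_act):
    """Route (1): information fed back from inside the brain; more reliable."""
    return {"next_act": intended_act, "confidence": 0.9}

def predict_own_behavior(intended_act, observed_self):
    internal = internal_report(intended_act)        # route (1)
    external = conspecific_model(observed_self)     # route (2)
    # Using both routes: take the more confident source, so the combined
    # estimate is never worse than either route on its own.
    return max([internal, external], key=lambda p: p["confidence"])

print(predict_own_behavior("reach for the fruit", "walking toward the tree"))
```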

But, note: certain information that must be inferred from external inputs (abduced) about conspecifics and is thus inherently subject to uncertainty is available more reliably from within the brain. It is thus advantageous to add a facility to translate internally available information into a form usable within the model and provide it as additional input to the conspecific model.

To the extent that the model preserves its significance as a model of external behavior as extracted from the external (black box) world, internally provided information will be processed as if it came from outside. But, such internally provided information is different in that it actually originated inside. Thus, it needs to be distinguished (distinguishable) from the information that really does come from outside.

The significant consequence of the preceding is that the introduction, as a matter of evolutionary expediency, of internally originating information into a system originally evolved to model the external behavior of conspecifics results in a model that treats the organism itself as if its own agency originated externally, literally outside the brain. This formulation is remarkably similar to some characterizations of the phenomenology of self-consciousness.

Once such a system is in place, evolutionary advances in the sophistication of the (externally shaped) model of (in particular) conspecifics can take advantage of and support the further development of the ability to literally re-present internal information as if it originated externally.

There is nothing in the preceding that requires uniquely human abilities. Accordingly, one may or may not wish to call this “self consciousness”; although I might be willing to do so and keep a straight face.

030723 – Limits of consciousness

Wednesday, July 23rd, 2003

It is important to note that we’re not conscious of absolutely everything that goes on in our bodies.  We’re not conscious of the normal functioning of our lymphatic system.  We’re not conscious of the normal functioning of the stomach, the liver, the pancreas, etc. We’re not conscious of changes in the iris of the eye.  With respect to these functions, we’re zombies.

We’re not ordinarily conscious of breathing, although we have the ability to take deep breaths or to hold our breaths.  Breathing is sometimes conscious, sometimes not.

I wouldn’t say we’re very good at imagining smells or tastes, but I can’t speak to the abilities of a skilled smeller or taster.  Still, we can recognize specific tastes and smells (new Coca-Cola didn’t taste like “Classic” Coca-Cola and people didn’t need a side-by-side comparison to know that).

I think I vote with Lakoff on the fact that our model of just about anything is ultimately based on our model of our self.  Or at least our models ultimately refer “metaphorically” to built-in [well, maybe not built-in, but acquired in the course of post-natal (and possibly some pre-natal) experience] “concepts” relating in some way to perceptual experience, often kinesthetic.  It is certainly the case that some of our knowledge is factual, e.g., the battle of Hastings was fought in 1066.  Other knowledge is procedural, I would say “model based”.  Model based knowledge is of necessity based on metaphor.  That is, the behavior of something is referenced mutatis mutandis to the behavior of something else already understood or at least already modeled.

An important model is our internal model of another person.  It is not clear to me whether the origin of this model is self-observation or observation of others.  Is there an internal model of the self and an internal model of another person?  Or are they one and the same, available to be applied equally to oneself or another?  Certainly, a key element of our model of another is projection of our own understanding onto the other.  Now comes the fun part.  By “introspection” it is clear that because I have a model of another person, my model of another person should include a model of that person’s model of yet another person.  So from these models, I now have available my own behavior (whether actual or under consideration), my anticipation of the behavior of another, and my anticipation of the other’s understanding of my behavior [and so on, but not infinitely because of (literally) memory limitations].
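
A toy way to render the “not infinitely, because of memory limitations” point (the depth limit and names are invented for illustration):

```python
from pprint import pprint

# Toy sketch (the depth limit is invented) of nested models bottoming out:
# each level of "my model of your model of me..." costs working memory,
# so the recursion stops after a few levels rather than going on forever.

MEMORY_LIMIT = 3   # hypothetical number of model levels one can hold at once

def nested_view(viewer, other, depth=0):
    if depth >= MEMORY_LIMIT:
        return "...too deep to keep in mind"
    return {
        "level": depth,
        "content": f"{viewer}'s view of {other}",
        "embedded": nested_view(other, viewer, depth + 1),
    }

pprint(nested_view("me", "you"))
```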

030721 – Consciousness and (philosophical) zombies

Monday, July 21st, 2003

[Added 040426]

Is consciousness an expert system that can answer questions about the behavior of the organism?  That is, does SHRDLU have all the consciousness there is?  Does consciousness arise from the need to have a better i/o interface?  Maybe the answer to the zombie problem is that there are nothing but zombies, so it’s not a problem.

In effect, everything happens automatically.  The i/o system is available to request clarification if the input is ambiguous and is available to announce the result of the computations as an output report.

030721 – Consciousness and zombies

The reason the zombie problem and the Chinese room problem are significant is that they are both stand-ins for the physicalism/dualism problem.  That being the case, it seems pointless to continue arguing about zombies and Chinese rooms absent a convincing explanation of how self-awareness can arise in a physical system.  That is the explanation I am looking for.

Ted Honderich (2000) observes that, “Something does go out of existence when I lose consciousness.”  From a systems point of view, loss of consciousness entails loss of the ability (the faculty?) to respond to ordinary stimuli and to initiate ordinary activities.  Loss of consciousness is characterized by inactivity and unresponsiveness.  Loss of consciousness is distinguished from death in that certain homeostatic functions necessary to the continued biological existence of the organism, but not generally accessible to consciousness, are preserved.

In sleep, the most commonly occurring loss of consciousness, these ongoing homeostatic functions have the ability to “reanimate” consciousness in response to internal or external stimuli.

Honderich observes that “consciousness can be both effect and cause of physical things.”  This is consistent with my sense that consciousness is an emergent property of the continuous flow of stimuli into the organism and equally continuous flow of behaviors emanating from the organism.  I’m not real happy about “emergent property”, but it’s the best I can do at the moment.

Honderich identifies three kinds of consciousness: perceptual consciousness, which “contains only what we have without inference;” reflective consciousness, which “roughly speaking is thinking without perceiving;” and affective consciousness, “which has to do with desire, emotion and so on.”

Aaron Sloman (“The Evolution of What?”  1998) notes that in performing a systems analysis of consciousness, we need to consider “what sorts of information the system has access to…, how it has access to this information (e.g., via some sort of inference, or via something more like sensory perception), [and] in what form it has the information (e.g., in linguistic form or pictorial form or diagrammatic form or something else).”

Sloman also identifies the problem that I had independently identified that leads to it being in the general case impossible for one to predict what one will do in any given situation.  “In any system, no matter how sophisticated, self-monitoring will always be limited by the available access mechanisms and the information structures used to record the results.  The only alternative to limited self-monitoring is an infinite explosion of monitoring of monitoring of monitoring…  A corollary of limited self-monitoring is that whatever an agent believes about itself on the basis only of introspection is likely to be incomplete or possibly even wrong.”

Sloman (and others), in discussing what I would call levels of models or types of models, identifies “a reactive layer, a deliberative layer, and a meta management (or self-monitoring) layer.”

030716

Wednesday, July 16th, 2003

Babies are born with reflexes (IRM-FAP’s).  I wonder if the corresponding models mirror the reflexes.  It’s certainly a better place to start than a) all connection weights set to zero or b) connection weights set to random values.

How do babies do imitation?  How does the organism make the connection between what it sees and its own body?  Is the basic rule for babies: imitate unless homeostatic needs dictate otherwise?

“No” is an active response.  Absence of “no” seems to indicate “no objection”.

With respect to internal models, updating the model is not the trick.  The trick is turning off the Plant (effectors) for the purpose of thinking about actions.  Being able to talk to oneself is an outgrowth of being able to think about actions without acting.  The model can only be updated when action is taken, because that’s the only time the model can get an error signal.  Well, that’s true when the model models an internal process.  It’s an interesting question to consider when a model of an external process gets updated.

An appeal to parsimony would suggest that a model of an external process gets updated when the model is being used, shall I say, unconsciously.  That is, if we assume a model of an external process is some kind of generalization of a model of an internal process, then the circumstances under which a model of an external process is updated will be some kind of generalization of the circumstances under which a model of an internal process is updated.
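
Here is a toy sketch of that double use (all numbers invented, and the “Plant” reduced to a single multiplication): the same internal model can be run off line, with the Plant switched off, to think about an action, but it only gets an error signal, and hence only gets updated, when the Plant actually runs.

```python
class ForwardModel:
    """Toy internal model: predicts the sensory result of a motor command
    as a scaled copy of the command (the gain starts out wrong on purpose)."""
    def __init__(self, gain=0.5):
        self.gain = gain

    def predict(self, command):
        return self.gain * command

    def update(self, command, actual_result, rate=0.3):
        # Error-driven correction: only possible when the Plant actually ran.
        error = actual_result - self.predict(command)
        self.gain += rate * error / command

def plant(command):
    """The real effectors/world: the true mapping the model has to learn."""
    return 2.0 * command

model = ForwardModel()

# Off line ("thinking about acting"): the Plant is switched off, so there is
# a prediction but no error signal, and the model cannot be updated.
print("offline prediction:", model.predict(10.0))

# On line (acting): the Plant runs, an error signal exists, the model learns.
for _ in range(20):
    command = 10.0
    model.update(command, plant(command))
print("gain after acting:", round(model.gain, 3))     # approaches 2.0
```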

As an off the wall aside, this might account for the difficulty humans experience in psychotherapeutic circumstances.  Simply thinking about one’s worldview and recognizing that it should be changed is, by my model, not going to change one’s worldview.  In effect, change to an unconscious process can only take place unconsciously.

Margaret Foerster (personal communication) indicates that in her therapeutic experience, change begins when a patient is confronted with a highly specific example of his/her maladaptive behavior.  Not all highly specific examples have the effect of initiating change, but examples that do are recognizable by the therapist from the reaction of the patient, who also recognizes the significance of the example at a “gut” level.  That is, the example creates a state in which the predictions of the patient’s internal model do not match the actual results.  To the extent that the internal model was invoked automatically rather than using the model analytically, the mismatch triggers (by my hypothesis) the (automatic) model correction (learning) process.

Foerster observes that in the sequel to such a significant therapeutic intervention, the patient experiences (and reports) additional related mismatches.  I don’t know that my model has anything to say about the fact that such mismatches are experienced consciously.  Nonetheless, I would be surprised to find that an unconscious model would change in a major way in response to a single mismatch.  I would rather expect gradual change based on accumulating evidence of consistently erroneous predictions.  On the other hand, I would expect the model to respond fairly rapidly to correct itself.  Notice that I say “correct itself”.  That is my way of indicating that the process is unconscious and not directly accessible, although significant change will manifest itself in the form of a recognizably (to both patient and therapist) different “way of thinking”.

Actually, I don’t think I have to worry about the fact that the mismatches Foerster describes are experienced consciously.  On reflection, I think mismatches are experienced consciously.  For example, when one is not paying attention and steps off a curb, the mismatch between expectation (no curb) and reality (sudden drop in the level of the ground) is most assuredly experienced consciously.

But back to the double life of models: it is all very well to say that a model can be used off line and that the experience of so doing is a mental image of some sort, but aside from the question of how a model is placed on line or off line, there remains the question of how inputs to the off line model are created.  Not to mention, of course, the question of why we “experience” anything.  So far, it would seem that there is nothing in a description of human behavior from the outside (for example, as seen by a Martian) that would lead one to posit “experience”, aside, that is, from our heterophenomenological reports of “experience”.  That’s still a stumper.

Query: do heterophenomenological reports of “experience” require the faculty of language?  Without the faculty of language how could one obtain a heterophenomenological report?  How could one interpret such a report?  Is it the case that the only way a Martian can understand a heterophenomenological report is to learn the language in which the report is made?  How much of the language?

Would it be sufficient for a Martian who only understood some form of pidgin like “me happy feelings now”?  The point seems to be that somehow English speakers generally come to understand what the word “experience” means and can use it in appropriate heterophenomenological contexts.  What would be necessary for a Martian to understand what “experience” means?

030709

Wednesday, July 9th, 2003

I think of the sensorium as being something like an extremely resonant bell.  Sensory inputs or rather the processed remnants of sensory inputs, that is to say the effects of sensory inputs, prime various patterns in the sensorium and alter the way that subsequent sensory inputs are processed.  This process manifests itself as what we call short-term memory, intermediate term memory, long-term memory, as well as some form of learning.  Because we think of memory as the acquisition of factual information and not the development in the sensorium of patterns or the recognition of patterns, we’re not accustomed to thinking of short-term memory as learning.

Assuming that there is some sort of automatic recirculating mechanism, I wonder if the fact that in short-term (and long-term) memory experiments there is a clear effect favoring recall of the first item of a list is simply an artifact that results because generally the first item of a list is preceded by silence or by some irrelevant stimulus.  I wonder if short-term memory is some kind of more or less fixed time constant.  One might think of an initial stage of processing in which inputs are recirculated after some time.  This raises the question of whether observed limits on the number of memory chunks that can be stored in short-term memory is a result of the amount of time it takes for each chunk to be entered.  No, it’s probably much more complicated than that.  There is already an interaction between sensory inputs and pre-existing patterns from the get go.  That’s why zillions of short-term memory experiments use nonsense syllables.
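
To play with the “preceded by silence” idea, here is a toy recirculating buffer (the decay and budget numbers are invented): every trace decays a little at each tick and a fixed recirculation budget is shared among whatever is currently in the buffer, so the first item of a list, having no competition at first, ends up with the strongest trace.

```python
# Toy recirculating buffer (decay and budget values invented): every tick,
# each trace decays a little and a fixed recirculation budget is split among
# whatever is currently in the buffer.  The first item of a list, preceded
# by silence, recirculates without competition at first, which by itself
# produces a primacy-like bump in trace strength.

DECAY = 0.95
BUDGET = 1.0

def present_list(items):
    strength = {}
    for item in items:
        strength[item] = 1.0              # the new item enters the buffer
        for k in strength:                # everything already there decays...
            strength[k] *= DECAY
        share = BUDGET / len(strength)    # ...and gets a share of recirculation
        for k in strength:
            strength[k] += share
    return strength

for item, s in present_list(list("ABCDEFG")).items():
    print(item, round(s, 2))              # "A" ends up strongest
```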

Rather than thinking of attention as adding processing power to particular sensory inputs it may make more sense to think of attention as a way of suppressing, or at least reducing, the strength of competing sensory inputs.  That of course makes more sense than thinking that the brain has excess processing capacity just lying around waiting to be called into action for the purpose of attention.

How long is “now”?  I don’t think the question really has an answer.  I think the heterophenomenological experience of “now” depends on the contents of the recirculating short-term memory buffer.  When I talk about the recirculating short-term memory buffer I mean that at a certain point in the processing of incoming sensory inputs, the processed inputs are fed back to an earlier point in the processing and somehow combined with the current incoming sensory inputs.  At the same time, the processed inputs continue to be further processed.

As I think more about “now” I realize that there are a number of different “now”s depending on the sensory modality.  Well, even that’s not right.  We know from various tachistoscopic experiments that there is a short-term visual buffer with a very short time constant, which suggests that there is a very short visual “now”.  I can’t think of any good evolutionary reason why each modality’s “now” should have the same time constant.

I see that I’ve written “the” recirculating short-term memory buffer.  I certainly don’t know that there’s only one, and I don’t know that any of my conclusions depend on there being only one.  Indeed I think that patterns recirculate with differing time constants depending in some way on the nature (whatever that means) of each pattern.

030708 – Computer consciousness

Tuesday, July 8th, 2003

I begin to understand the temptation to write papers that take the form of diatribes against another academic’s position.  I just found the abstract of a paper written by someone named Maurizio Tirassa in 1994.  In the abstract he states, “I take it for granted that computational systems cannot be conscious.”

Oh dear.  I just read a 1995 response to Tirassa’s paper by someone in the department of philosophy and the department of computer science at Rensselaer Polytechnic Institute who says we must remain agnostic toward dualism.  Note to myself: stay away from this kind of argument; it will just make me crazy.

For the record: I take it for granted that computational systems can be conscious.  I do not believe in dualism.  There is no Cartesian observer.

I do like what Rick Grush has to say in his 2002 article “An introduction to the main principles of emulation: motor control, imagery, and perception”.  He posits the existence of internal models that can be disconnected from effectors and used as predictors.

Grush distinguishes between simulation and emulation.  He states that, “The difference is that emulation theory claims that mere operation of the motor centers is not enough, that to produce imagery they must be driving an emulator of the body (the musculoskeletal system and relevant sensors).”  He contrasts what he calls a “motor plan” with “motor imagery”.  “Motor imagery is a sequence of faux proprioception.  The only way to get … [motor imagery] is to run the motor plans through something that maps motor plans to proprioception and the two candidates here are a) the body (which yields real proprioception), and b) a body emulator (yielding faux proprioception).”

What’s nice about this kind of approach is that its construction is evolutionarily plausible.  That is, the internal model is used both for the production of actual behavior and for the production of predictions of behavior.  Evolution seems to like to repurpose systems so long as the systems are reasonably modular.
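
A minimal sketch of the emulation idea as I read it (the functions and numbers below are mine, not Grush’s): the same motor plan can be routed to the body, yielding real proprioception, or to a body emulator, yielding faux proprioception that can serve as motor imagery for off-line planning.

```python
# Minimal sketch (functions and numbers are mine, not Grush's): the same
# motor plan can be routed to the body, yielding real proprioception, or to
# a body emulator, yielding faux proprioception usable as motor imagery.

def body(motor_plan):
    """The musculoskeletal system plus its sensors: real proprioception."""
    return [("joint_angle", command * 1.0) for command in motor_plan]

def body_emulator(motor_plan):
    """Learned internal stand-in for the body: a slightly imperfect copy."""
    return [("joint_angle", command * 0.98) for command in motor_plan]

motor_plan = [0.1, 0.2, 0.3]               # a short sequence of motor commands

real_proprioception = body(motor_plan)     # acting: the loop runs through the world
motor_imagery = body_emulator(motor_plan)  # imagining: the loop runs through the emulator

print(real_proprioception)
print(motor_imagery)
```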

Grush distinguishes between what he calls “modal” and “amodal” models.  “Modal” models are specific to a sensory modality (e.g., vision, audition, proprioception) and “amodal” models (although he writes as if there were only one) model the organism in the universe.  I do not much care for the terminology because I think it assumes facts not in evidence, to wit: that the principal distinguishing characteristic is the presence or absence of specificity to a sensory modality.  I also think it misleads in that it presumes (linguistically at least) to be an exhaustive categorization of model types.

That said, the most interesting thing in Grush for me is the observation that the same internal model can be used both to guide actual behavior and to provide imagery for “off-line” planning of behavior.  I had been thinking about the “on-line” and “off-line” uses of the language generation system.  When the system is “on-line”, physical speech is produced.  When the system is “off-line”, its outputs can be used to “talk to oneself” or to write.  Either way, it’s the same system.  It doesn’t make any sense for there to be more than one.

When a predator is crouched, waiting to spring as soon as the prey it has spotted comes into range, arguably it has determined how close the prey has to come for a pounce to be effective.  The action plan is primed, it’s a question of waiting for the triggering conditions (cognitively established by some internal mental model) to be satisfied.

It is at least plausible to suggest that if evolution developed modeling and used it to advantage in some circumstances; modeling will be used in other circumstances where it turns out to be beneficial.  I suppose this is a variant of Grush’s Kalman filters argument which says that Kalman filters turn out to be a good solution to a problem that organisms have and it would not be surprising to discover that evolution has hit upon a variant of Kalman filters to assist in dealing with that problem.
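
For concreteness, here is a bare-bones one-dimensional Kalman-style update (textbook form, toy numbers), just to show the kind of problem such a filter solves: blending a prediction from an internal model with a noisy observation, each weighted by its reliability.

```python
import random

def kalman_step(estimate, variance, observation, process_var=0.01, obs_var=0.25):
    # Predict: the internal model says "things stay put", but uncertainty grows.
    variance += process_var
    # Update: weight the observation by how reliable it is relative to the estimate.
    gain = variance / (variance + obs_var)
    estimate += gain * (observation - estimate)
    variance *= (1.0 - gain)
    return estimate, variance

true_position = 5.0
estimate, variance = 0.0, 1.0             # start with a poor, uncertain estimate
for _ in range(50):
    observation = true_position + random.gauss(0.0, 0.5)   # noisy sense data
    estimate, variance = kalman_step(estimate, variance, observation)
print(round(estimate, 2))                 # close to 5.0 despite the noise
```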

It’s clear (I hope, and if not, I’ll make an argument as to why) that a mobile organism gains something by having some kind of model (however rudimentary) of its external environment.  In “higher” organisms, that model extends beyond the range of that which is immediately accessible to its senses.  It’s handy to have a rough idea of what is behind one without having to look around to find out.  It’s also handy to know where one lives when one goes for a walk out of sight of one’s home.

Okay, so we need an organism-centric model of the universe, that is, one that references things outside the organism to the organism itself.  But more interestingly, does this model include a model of the organism itself?

Certain models cannot be inborn (or at least the details cannot be).  What starts to be fun is when the things modeled have a mind of their own (so to speak).  It’s not just useful to humans to be able to model animals and other humans (to varying degrees of specificity and with varying degrees of success).  It would seem to be useful to lots of animals to be able to model animals and other conspecifics.

What is the intersection of “modeling” with “learning” and “meaning”?  How does “learning” (a sort of mental sum of experience) interact with ongoing sensations?  “Learning” takes place with respect to sensible (that is capable of being sensed) events involving the organism, including things going on inside the organism that are sensible.  Without letting the concept get out of hand, I have said in other contexts that humans are voracious pattern-extractors.  “Pattern” in this context means a model of how things work.  That is, once a pattern is “identified” (established, learned), it tends to assert its conclusions.

This is not quite correct.  I seem to be using “pattern” in several different ways.  Let’s take it apart.  The kicker in just about every analysis of “self” and “consciousness” is the internal state of the organism.  Any analysis that fails to take into account the internal state of the organism at the time a stimulus is presented is not, in general, going to do well in predicting the organism’s response.  At the same time, I am perfectly willing to assert that the organism’s response—any organism’s response—is uniquely determined by the stimulus (broadly construed) and the organism’s state (also broadly construed).  Uniquely determined.  Goodbye free will.  [For the time being, I am going to leave it to philosophers to ponder the implications of this fact.  I am sorry to say that I don’t have a lot of faith that many of them will get them right, but some will.  This is just one of many red herrings that make it difficult to think about “self” and “consciousness”.]

Anyway, when I think about the process, I think of waves of data washing over and into the sensorium (a wonderfully content-free word).  In the sensorium are lots of brain elements (I’m not restricting this to neurons because there are at least ten times as many glia listening in and adding or subtracting their two cents) that have been immersed in this stream of information since they became active.  They have “seen” a lot of things.  There have been spatio-temporal-modal patterns in the stream, and post hoc ergo propter hoc many of these patterns have been “grooved”.  So, when data in the stream exhibit characteristics approximating some portion of a “grooved” pattern, other brain elements in the groove are activated to some extent, the extent depending on all sorts of things, like the “depth” of the “groove”, the “extent” of the match, etc.
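
A toy rendering of the “grooved pattern” picture (all patterns and numbers invented): each stored pattern has a groove depth, incoming data activate it in proportion to groove depth times extent of match, and many patterns end up partially active at once.

```python
# Toy rendering of "grooved" patterns (patterns and numbers invented): each
# stored pattern is activated in proportion to the depth of its groove times
# the extent of its match with the incoming data, so many patterns are
# partially active at once.

def extent_of_match(pattern, data):
    """Fraction of the pattern's features present in the incoming data."""
    return len(set(pattern) & set(data)) / len(pattern)

grooves = {                               # pattern -> depth of the groove
    ("dark", "growl", "large"): 0.9,      # a well-worn "predator" pattern
    ("sweet", "red", "round"): 0.6,       # a "ripe fruit" pattern
    ("growl", "hunger"): 0.3,             # one's own stomach rumbling
}

incoming = {"dark", "growl"}
activation = {p: depth * extent_of_match(p, incoming) for p, depth in grooves.items()}
for pattern, level in sorted(activation.items(), key=lambda kv: -kv[1]):
    print(pattern, round(level, 2))
```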

In order to think about this more easily, remember that the sensorium does not work on just a single instantaneous set of data.  It takes some time for data to travel from neural element to neural element.  Data from “right now” enter the sensorium and begin their travel “right now”, hot on the heels of data from just before “right now”, and cool on the heels of data from a bit before “right now” and so on.  Who knows how long data that are already in the sensorium “right now” have been there.  [The question is, of course, rhetorical.  All the data that ever came into the sensorium are still there to the extent that they caused alterations in the characteristics of the neural elements there.  Presumably, they are not there in their original form, and more of some are there than of others.]  The point is that the sensorium “naturally” turns sequential data streams into simultaneous data snapshots.  In effect, the sensorium deals with pictures of history.

Now back to patterns.  A pattern may thus be static (as we commonly think of a pattern), and at the same time represent a temporal sequence.  In that sense, a pattern is a model of how things have happened in the past.  Now note that in this massively parallel sensorium, there is every reason to believe that at any instant many many patterns have been or are being activated to a greater or lesser extent and the superposition (I don’t know what else to call it) of these patterns gives rise to behavior in the following way.

Some patterns are effector patterns.  They are activated (“primed” is another term used here, meaning activated somewhat, but not enough to be “triggered”) by internal homeostatic requirements.  I’m not sure I am willing to state unequivocally that I believe all patterns have an effector component, but I’m at least willing to consider it.  Maybe not.  Maybe what I think is that data flows from sensors to effectors and the patterns I am referring to shape and redirect the data (which are ultimately brain element activity) into orders that are sent to effectors.

That’s existence.  That’s life.  I don’t know what in this process gives rise to a sense of self, but I think the description is fundamentally correct.  Maybe the next iteration through the process will provide some clues.  Or the next.  Or the next.

Hunger might act in the following way.  Brain elements determine biochemically and biorhythmically that it’s time to replenish the energy resources.  So data begin to flow associated with the need to replenish the energy resources.  That primes patterns associated with prior success replenishing the energy resources.  A little at first.  Maybe enough so if you see a meal you will eat it.  Not a lot can be hard-wired (built-in) in this process.  Maybe as a baby there’s a mechanism (a built-in pattern) that causes fretting in response to these data.  But basically, what are primed are patterns the organism has learned that ended up with food being consumed.  By adulthood, these patterns extend to patterns as complex as going to the store, buying food, preparing it, and finally consuming it.

This is not to say that the chain of determinism imposes rigid behaviors.  Indeed, what is triggered deterministically is a chain of opportunism.  Speaking of which, I have to go to the store to get fixings for dinner.  Bye.

030615 – (Heterophenomenological) Consciousness

Sunday, June 15th, 2003

It’s dreary and raining and that may make people a bit depressed.  That, in turn, may make it harder for people to find a satisfactory solution to their problems.   Realizing that, I feel a bit better.  It is sometimes useful to bring something into consciousness so one can look at it.

Although we may not have access to the underlying stimulus events (constellations) that directly determine our feelings, we can learn about ourselves just as we learn about other things and other people.  We can then shine the spotlight of consciousness on our inner state and try to glean what clues we can by careful attention.

When I say we can learn about ourselves, that is to say that we can create an internal model of ourselves and use the predictions of that model to feed back into our decision-making process.  Such feedback has the result of modifying our behavior (as a feedback system does).

The interesting thing about the internal model is that it not only models external behavior, but also models internal state.

Interesting aside: consciousness can be switched on and off.  We can be awake or asleep.  We can be “unconscious”.

What are the design criteria for human beings such that consciousness is an appropriate engineering solution?

Goals:

  • Exist in world.
  • Basic provisioning.  Homeostasis. Obtain fuel.
  • Reproduction.  Mate.  Ensure survival of offspring.

Capabilities Required to Attain Goals:

  • Locomotion.
  • Navigation.
  • Manipulation.

Functions Required to Implement Required Capabilities

  • Identification of things relevant to implementation of goals.
  • Acquisition of skills relevant to implementation of goals (note that skills may be physical or cognitive).

Capabilities Required to Support Required Functions

  • Observation.  Primary exterosensors.
  • Memory.
  • Ability to manipulate things mentally (saves energy).  This includes the ability to manipulate the self mentally.
  • Ability to reduce power consumption during times when it is diseconomic to be active (e.g., sleep at night).

Damasio (1999, p.260) says:

“Homeostatic regulation, which includes emotion, requires periods of wakefulness (for energy gathering); periods of sleep (presumably for restoration of depleted chemicals necessary for neuronal activity); attention (for proper interaction with the environment); and consciousness (so that a high level of planning or responses concerned with the individual organism can eventually take place). The body-relatedness of all these functions and the anatomical intimacy of the nuclei subserving them are quite apparent.”

Well, I have an alternative theory of the utility of sleep, but Damasio’s is certainly plausible and has been around for a while in the form of the “cleanup” hypothesis: that there is something that is generated or exhausted over a period of wakefulness that needs to be cleaned up or replenished and sleep is when that gets done.  It raises the question of whether sleep is an essential part of consciousness and self-awareness or is it a consequence of the physical characteristics of the equipment in which consciousness and self-awareness are implemented.

One talks to oneself by inhibiting (or is it failing to activate) the effectors that would turn ready-to-speak utterances into actual utterances.  In talking to oneself, ready-to-speak utterances are fed back into the speech understanding system.  This is only a slight variation of the process of careful (e.g., public speaking) speech or the process used in writing.  In writing, the speech utterance effectors are not activated and the ready-to-speak stuff is fed into the writing system.

But does it always pass through the speech understanding system?  IOW is it possible to speak without knowing what you are going to say?  Possibly.  Specific evidence: on occasion one thinks one has said one thing and has in fact said something else.  Sometimes one catches it oneself.  Sometimes somebody says you said X, don’t you mean Y and you say oh, did I say X, I meant Y.

Nonetheless, I don’t think it’s necessary to talk to oneself to be conscious.  There are times when the internal voice is silent.  OTOH language is the primary i/o system for humans.  One might argue that language enhances consciousness.  As an aside, people who are deaf probably have an internal “voice” that “talks” to them.  Does talking to yourself help you to work things out?  Does the “voice” “speak” in unexpressed signs?  When a deaf person does something dumb, does he/she sign “dumb” to him/herself?

Is there something in the way pattern matching takes place that is critical to the emergence of consciousness?  The more I think about consciousness, the less certain I am that I know what I am talking about.  I don’t think that is bad.  It means that I am recognizing facets of the concept that I had not recognized before.  That seems to be what happened to Dennett and to Damasio.  They each had to invent terminology to express differences they had discovered.

Ultimately, we need an operational definition of whatever it is that I’m talking about here.  That is the case because at the level I am trying to construct a theory, there is no such thing as consciousness.  If there were, we’d just be back in the Cartesian theatre.  Is the question: How does it happen that human beings behave as if they have a sense of self?  I’m arriving at Dennett’s heterophenomenology.  (1991, p.96) “You are not authoritative about what is happening in you, but only about what seems to be happening in you….”

To approach the question of how heterophenomenological consciousness emerges, it is essential to think “massively parallel”.  What is the calculus of the brain?  A + B = ?  A & B ?  A | B ?  A followed by B?  Thinking massive parallelism, the answer could be: All of the above.  It must be the case that serial inputs are cumulatively deserialized.  There’s an ongoing accumulation of history at successively higher levels of abstraction (well, that’s one story, or one way of putting it).  Understanding language seems to work by a process of successive refinement.  Instinctively it’s like A & B in a Venn diagram, but that feels too sharp.

The system doesn’t take “red cow” to mean the intersection of red things with cow things.  The modifier adds specificity to an otherwise unspecified (default) attribute.  So the combination of activation of “red” and the activation of “cow” in “red cow”  leads to a new constellation of activation which is itself available for further modification (generalization or restriction or whatever).  This probably goes on all the time in non-linguistic processing as well.  A pattern that is activated at one point gets modified (refined) as additional information becomes available.  Sounds like a description of the process of perception.
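
A toy sketch of modifier-as-slot-filler rather than set intersection (the attribute defaults are invented): “cow” activates a constellation with default attribute values, “red” overrides the otherwise-unspecified color slot, and the result is a new constellation that is itself open to further refinement.

```python
# Toy sketch of modifier-as-slot-filler, not set intersection (the default
# attribute values are invented): "cow" brings along defaults, "red" adds
# specificity by overriding the color slot, and the result stays open to
# further modification.

DEFAULTS = {"cow": {"kind": "cow", "color": "brown", "size": "large", "legs": 4}}
MODIFIERS = {"red": {"color": "red"}, "small": {"size": "small"}}

def refine(constellation, modifier):
    updated = dict(constellation)          # keep everything already active
    updated.update(MODIFIERS[modifier])    # the modifier adds specificity
    return updated

red_cow = refine(DEFAULTS["cow"], "red")
small_red_cow = refine(red_cow, "small")   # refinement can keep going
print(red_cow)
print(small_red_cow)
```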

Massively parallel, always evolving.  It doesn’t help to start an analysis when the organism wakes up, because the wake-up state is derived from (is an evolution of) the organism’s previous life.  Learning seems to be closely tied to consciousness.  Is it the case that the “degree” of consciousness of an organism is a function of the “amount of learning” previously accumulated by the organism?

We know how to design an entity that responds to its environment.  An example is called a PC (Personal Computer).

There’s learning (accumulation of information) and there’s self-programming (modification of processing algorithms).  Are these distinguishable in “higher” biological entities?  Does learning in, say, mammals necessarily involve self-programming?  Is a distinction between learning and self-programming just a conceptual convenience for dealing with von Neumann computers?

There’s “association” and “analysis”

There is learning and there’s self programming.  Lots of things happen automatically.  Association and analysis.  Segmentation is important: chunking is a common mechanism.  Chunking is a way of parallelizing the processing of serial inputs.  Outputs of parallel processors may move along as chunks.  Given that there’s no Cartesian observer, every input is being processed for its output consequences.  And every input is being shadow processed to model its consequences and the model consequences are fed back or fed along.  Associations are also fed back or fed along.  In effect there is an ongoing assessment of what Don Norman called “affordances”, e.g., what can be done in the current context?  The model projects alternate futures.  The alternate futures coexist with the current inputs.  The alternate futures are tagged with valences.  Are these Dennett’s “multiple drafts”?  I still don’t like his terminology.  Are the alternate futures available to consciousness?  Clearly sometimes.  What does that mean?  It is certainly possible for a system to do load balancing and prioritization.  If there is additional processing power available or if processing power can be reassigned to a particular problem.  Somehow, I don’t think it works that way.  Maybe some analyses are dropped or, more likely, details are dropped as a large freight train comes roaring through.  Tracking details isn’t much of a problem because of the constant stream of new inputs coming in.  Lost details are indeed lost, but most of the time, so what?

Language output requires serialization, as do certain motor skills.  The trick is to string together a series of sayings that are themselves composed of ordered (or at least coordinated) series of sayings.  Coordination is a generalization of serialization because it entails multiple parallel processors.  Certainly, serial behavior is a challenge for a parallel organism, but so are all types of coordinated behavior.  Actions can be overlaid (to a certain extent, for example: walk and chew gum; ride a horse and shoot; drive and talk; etc.)  We can program computers in a way that evolution cannot hardwire organisms.  On the other hand, evolution has made the human organism programmable (and even self-programmable).  Not only that, we are programmable in languages that we learn and we are programmable in perceptual motor skills that we practice and learn.  Is there some (any) reason to think that language is not a perceptual motor skill (possibly writ large)?

Suppose we believe that learning involves modifications of synaptic behavior.  What do we make of the dozen or so neurotransmitters?  Is there a hormonal biasing system that influences which transmitters are most active?  Is that what changes mode beyond just neural activity in homeostatic systems?  Otherwise, does the nature of neuronal responses change depending on the transmitter mix, and can information about that mix be communicated across the synaptic gap?  These are really not questions that need to be answered in order to create a model of consciousness (even though they are interesting questions) but they do serve as a reminder that the system on which consciousness is based is only weakly understood and probably much more complicated even than we think (and we think it’s pretty complicated).

I seem to have an image — well, a paradigm — in mind involving constraints and feature slots, but I don’t quite see how to describe it as an algorithm.  This is a pipelined architecture, but with literally millions of pipelines that interact locally and globally.  The answer to “what’s there?” or “what’s happening?” is not a list, but a coruscating array of facets.  It is not necessary to extract “the meaning” or even “a meaning” to appreciate what is going on.  A lot of the time, nothing is “going on”; things are what they are and are not changing rapidly.

Awareness and attention seem to be part of consciousness.  One can be aware of something and not pay attention to it.  Attention seems central — the ability to select or emphasize certain input (and/or output) streams.  What is “now”?  It seems possible to recirculate the current state of things.  Or just let them pass by.  Problem: possible how?  What “lets” things pass by?  The Cartesian observer is so seductive.  We think we exist and watch our own private movie, but it cannot happen that way.  What is it that creates the impression of “me”?  Yes, it’s all stimulus-response, but the hyphen is where all the state information is stored.  What might give the impression of “me”?  I keep thinking it has something to do with the Watzlawick et al. [Pragmatics of Human Communication, 1967] idea of multiple models.  This is the way I see you. This is the way I see you seeing me.  This is the way I see you seeing me seeing you.  And then nothing.  Embedding works easily once: “The girl the squirrel bit cried.”  But “The girl the squirrel the boy saw bit cried” is pathological.

As a practical matter, if we want to create an artificial mind, we probably want to have some sort of analog to the homunculus map in order to avoid the problem of having to infer absolutely everything from experience.  That is, being able to refer stimuli to an organism-centric and gravity-aware coordinate system goes a long way towards establishing a lot of basic concepts: up-down, above-below, top-bottom, towards-away, left-right, front-back.  Add an organism/world boundary and you get inside-outside.  I see that towards-away actually cheats in that it implies motion.  Not a problem, because motion is change of position over time, and with multiple temporal snapshots (naturally produced as responses to stimuli propagate through neural fields) motion can be pretty easily identified.  So that gets things like fast-slow, into-out-of, before-after.  We can even get to “around” once the organism has a finite extent to get around.
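As a sketch of how much falls out of such a frame, here is a toy Python function, entirely my own illustration rather than a proposal about neural encoding.  The organism sits at the origin of its own coordinate system, gravity defines “down”, the basic relational concepts reduce to sign tests, and a second temporal snapshot yields towards-away essentially for free.

def relations(target, previous=None):
    """Classify a target position (and optionally its previous position) relative to
    an organism at the origin, facing +y, with +z pointing opposite to gravity."""
    x, y, z = target
    rel = {
        "above" if z > 0 else "below",
        "left" if x < 0 else "right",
        "front" if y > 0 else "back",
    }
    if previous is not None:   # two temporal snapshots give motion for free
        def dist(p):
            return sum(c * c for c in p) ** 0.5
        rel.add("towards" if dist(target) < dist(previous) else "away")
    return rel

# relations((2, 5, -1), previous=(3, 7, -1)) -> {'right', 'front', 'below', 'towards'}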

What would we expect of an artificial mind?  We would like its heterophenomenology to be recognizably human.  What does that mean?  Consider the Turing test.  Much is made of the fact that certain programs have fooled human examiners over some period of time.  Is it then the case that the Turing test is somehow inadequate in principle?  Probably not.  At least I’m not convinced yet that it’s not adequate.  I think the problem may be that we are in the process of learning what aspects of human behavior can be (relatively) easily simulated.  People have believed that it is easy to detect machines by attempting to engage them in conversation about abstract things.  But it seems that things like learning and visualization are essential to the human mind.  Has anyone tried things like: imagine a capital A.  Now, in your imagination, remove the horizontal stroke and turn the resulting shape upside down.  What letter does it look like?

Learning still remains an intractable problem.  We don’t know how it takes place.  Recall is equally dicey.  We really don’t seem to know any more about learning skills than we do about learning information.  We’re not even very clear about memorizing nonsense syllables, despite all the thousands of psychological experiments involving them.  Is learning essential to mind?  Well, maybe not.  Henry Molaison can’t learn any conscious facts, and he clearly has a mind (no one I know of has suggested otherwise).  Okay, so a mind could operate in a steady state.  The ability to learn facts of the kind Henry Molaison couldn’t learn isn’t necessary for a mind to exist.  We don’t know whether the capacity for perceptual-motor learning is necessary for a mind to exist.  Does a baby have a mind?  Is this even a sensible question?  If not, when does it get one?  If so, when did it develop?  How?

It begins to feel like the problem is to figure out what the question should be.  “Consciousness” seems not to be enough.  “Mind” seems ill-defined.  “Self-awareness” has some appeal, though I struggle to pin down what it denotes: clearly “awareness” of one’s “self”, but then what’s a “self” and what does “awareness” mean?  Surely self-awareness means 1) there is something that is “aware” (whatever “aware” means), 2) that thing has a “self” (whatever “self” means), and 3) that thing can be and is “aware” of its “self”.  A person could go crazy.

Is this a linguistic question — or rather a metalinguistic question: what does “I” mean?  What is “me”?  In languages that distinguish a “first person” it would appear that these questions can be asked.  And by the way, what difference does it make if the language doesn’t have appropriate pronouns and resorts to things like “this miserable wretch begs forgiveness”?  Who’s doing the begging?  No.  That’s not the question.  What’s doing the begging?  Heterophenomenologically, it doesn’t matter if I say it referring to myself or referring to another person.  Except that it has for me a special meaning when it refers to “my self”, and that special meaning is appreciated, that is, understood, by others hearing “me” say it.

I don’t know anything about children learning what “I” and “me” refer to.  I remember reading something about a child (autistic, I think) who referred to himself in the third person, for example: “he’s thirsty.”

Consciousness seems to require inputs.  That is, one cannot just “be conscious”; rather, one must “be conscious of” things.  That sounds a bit forced, but not if it is precisely the inputs that give rise to consciousness.  No inputs, no consciousness.  Something in the processing of inputs gives rise to the heterophenomenological feeling of being conscious.

Does self-awareness have to do with internal models?  Does the organism have an internal model of the universe in which it exists?  Does that model include, among the entities modeled, the organism itself?  And is it necessary that the model of the organism include a model of the internal model of the universe and its component model of the organism?  It may not be an infinite series.  In fact, it can’t be.  The brain (or any physical computer) has finite capacity.
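The finiteness point can be made concrete with a toy Python class; the names are mine and purely illustrative.  A finite machine can nest a model of itself only to some fixed depth, after which the inner model is simply absent.

class WorldModel:
    def __init__(self, depth):
        self.entities = ["rocks", "other agents", "the organism"]
        # The model of the organism contains its own world model, but only
        # to a limited depth; below that, nothing further is modeled.
        self.self_model = WorldModel(depth - 1) if depth > 0 else None

m = WorldModel(depth=3)   # three levels of "me modeling me modeling me"
print(m.self_model.self_model.self_model.self_model is None)   # prints True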

But doesn’t a model imply someone or something that makes use of the model?  We keep coming back to metaphors that encourage the Cartesian fallacy.

Let’s think computer systems design.  Hell, let’s go all the way: let’s think robot design.  The robot exists in a universe.  The robot’s program receives inputs from its exteroceptors about the state of the universe; these inputs, suitably processed, are abstracted into a set of signals representing the inputs — in fact, representing the inputs over a period of time.  The same thing happens with samples from the interoceptors monitoring the robot’s internal mechanical state: position of limbs, orientation, inertial state (falling, turning, whatever), battery/power level, structural integrity.

On the goals side, based on the internal state, the robot has certain triggers: not action triggers, but propensity triggers.  For example: when the internal power level or the internal power reserves fall below a particular threshold, the goal of increasing power reserves is given increased priority.  But we do not assume that the robot has a program that specifies exactly what to do in this state.  The state should trigger increased salience (whatever that means) and attention to things in the current environment that are (or have been, in the learned past) associated with successful replenishing of power reserves.
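A minimal Python sketch of a propensity trigger, with names and thresholds of my own choosing: low power reserves do not select an action directly, they raise the priority of the replenishment goal and boost the salience of whatever has been associated with successful recharging in the past.

LOW_POWER_THRESHOLD = 0.2   # arbitrary illustrative threshold

def update_propensities(internal_state, goals, associations):
    """internal_state: dict of interoceptive readings, e.g. {"power": 0.15}
    goals: dict mapping goal name -> current priority
    associations: dict mapping goal name -> percept tags learned to help with it"""
    salient = set()
    if internal_state.get("power", 1.0) < LOW_POWER_THRESHOLD:
        # The trigger does not pick an action; it raises a goal's priority...
        goals["replenish_power"] = goals.get("replenish_power", 0.0) + 1.0
        # ...and makes anything associated with past recharging more salient.
        salient |= associations.get("replenish_power", set())
    return goals, salient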

At all times, the important question is: “what do I do now?”  The answer to this question helps determine what needs “attention” and what doesn’t.  As a first approximation, things not “associated” with current priority goals are not attended to.  Well, it’s not quite as simple as that.  Things that don’t need attention, even though they are associated with an ongoing task (like walking or driving), don’t get attention processing.  Attention is the assignment of additional processing power to something.  Additional processing power can boost the signal level above the consciousness threshold and can reduce the decay rate of attended signals.
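Here is a toy version of that last idea in Python, with constants that are pure assumptions on my part: attended signals get a gain boost (possibly lifting them above a consciousness threshold) and a slower decay.

CONSCIOUSNESS_THRESHOLD = 1.0   # arbitrary illustrative value

def step(signals, attended, dt=0.1):
    """signals: dict mapping signal name -> level; attended: names given extra processing."""
    conscious = set()
    for name, level in signals.items():
        gain = 1.5 if name in attended else 1.0    # attention boosts the signal level
        decay = 0.5 if name in attended else 2.0   # attention slows the decay rate
        signals[name] = level * gain * (1.0 - decay * dt)
        if signals[name] > CONSCIOUSNESS_THRESHOLD:
            conscious.add(name)   # boosted above the consciousness threshold
    return conscious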

No one has succeeded in explaining why heterophenomenological evidence indicates that people sometimes feel “conscious”, that when they don’t feel “conscious” they don’t “feel” anything at all, and that they shift back and forth between the two.  It’s a processing thing.  If I close my eyes and lie quietly, I’m not asleep.  I still hear things.  I can still think about things.  So consciousness can be turned on and off in the normal organism.  What’s going on here?  Understanding the neural connections won’t do it.  We would need to know what the connections “do”; how they “work”.

Sleep.  In effect, the organism can “power down” into a standby state (for whatever evolutionary reason).  If the threshold for external events is set high, most of them won’t make an impact (have an effect).  It’s like a stabilized image on the retina.  It disappears — well, it fades.  No change equals no signal.  If there’s nothing to react to, the organism, well, doesn’t react.

If outside inputs are suppressed, where do daydream inputs come from?  Not a critical question, but an interesting one.  Somebody pointed out that so-called “dream paralysis” is a good thing in that it keeps us from harming ourselves or others in reaction to dream threats or situations.

030103 – Consciousness and the Self

Friday, January 3rd, 2003

To the extent that humans (or any beings as yet unknown to us, like space aliens, say, or sophisticated AIs) have any views at all on the topic, they will believe that they have free will.  The argument is relatively simple: I believe that any intelligent being will have as a part of its intelligence an internal model of the physical universe (to whatever level of detail is appropriate) that it uses consciously to assess possible courses of action in anticipation of selecting one for execution.  Implied in such a model is a model of the being itself.  This enables analyses of the form, “If I do X, how will I feel about that?”

The model of the self must be contained in the organism as must the larger model of the physical universe.  This ensures that the model cannot model the organism itself with complete accuracy.  To do so would require that the model of the organism include a model of the model of the organism and that model would in turn have to contain a model of the organism and so on ad infinitum.  Thus, the model of the self cannot be 100% accurate.  In effect, one will make inaccurate predictions about one’s own behavior.  Stated another way, no one can know with absolute certainty what he or she will do in a particular set of circumstances.  We experience this as making up our minds at the last minute or as having free will.

The Mind – The Inner Voice

It is by no means clear or self-evident why each of us should have within us a voice that we use sometimes for the purpose of planning things and sometimes for the purpose of commenting on the world around or within us.  Up to the present, all reports of this inner voice have been subjective.  It is interesting to speculate that there may come a time when brain-activity recording will become sufficiently sensitive and sophisticated to enable us to identify and even record in some way the “utterances” of this voice.  I rush to assert, however, that we are a long way, measured in decades, from the ability to listen in on the contents of someone else’s comic-book thought balloons.

So, for the time being, the little voice remains private to each individual.

Why do we “hear” this voice?  We know that it is not external.  It does not make a sound.  What do we know about it?  It speaks in whatever language we choose to have it speak.  Sometimes it is silent.  Sometimes it is next to impossible to make it be silent, as for example when it decides in the middle of the night to rehearse all the things you should have said to whomever it was you should have said them to.  When you write something, it says the words to you and you transfer them to the paper or type them at the computer as or just after it says them.

Systems Analysis

Suppose evolutionarily that we are modifying an organism that operates purely in a simple stimulus-response fashion (whatever that means).  We want to improve it in such a way that it can “anticipate” or “plan ahead” in some sense.  A reasonably parsimonious approach might be to recruit brain structures to produce internal representations of possible future states and inject them into the stimulus-response arc (decision-making system) as additional inputs that would be distinguishable from direct real-world inputs, but would somehow carry at least some of the weight of current real-world inputs.

In general, the organism should not confuse these forward-looking inputs with real-world inputs.  Dreams should not be confused with reality.

In the simplest form, such a system would give the organism the ability to perform Gedanken experiments on its environment.  That is, instead of physically trying a strategy to determine its outcome, the organism would be able to “imagine” the outcome and evaluate it against other possible strategies and outcomes.

To accomplish this requires an internal model of the external environment, one that models physical objects and, at least to some extent, their relevant physical properties.
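Here is a minimal Python rendering of that Gedanken-experiment loop, under assumptions of my own (internal_model, evaluate, and the weighting constant are hypothetical placeholders): candidate strategies are tried against the internal model, the imagined outcomes are kept distinguishable from, and weighted below, real inputs, and only the winner is physically acted on.

IMAGINED_WEIGHT = 0.5   # imagined inputs carry only part of the weight of real ones

def choose_strategy(current_state, strategies, internal_model, evaluate):
    """internal_model(state, strategy) returns an imagined outcome;
    evaluate(outcome) scores it.  Both are hypothetical placeholders."""
    best, best_score = None, float("-inf")
    for strategy in strategies:
        imagined = internal_model(current_state, strategy)   # try it "in the head"
        # The weighting matters when imagined signals later compete with
        # real-world inputs in the same decision-making system.
        score = IMAGINED_WEIGHT * evaluate(imagined)
        if score > best_score:
            best, best_score = strategy, score
    return best   # only the winning strategy is physically tried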

When I first wrote the above, I had not thought much about the nature of the model that is required.  It would seem that the model, if it is “automatic” or “unconscious” (which is what I think it’s reasonable to assume was true at least initially in evolutionary terms), must be of the PHEPH (post hoc ergo propter hoc) type that is easy for neuro-glial circuits to implement.

It is advantageous to an organism to be able to abstract invariants from the environment, e.g., object constancy in the presence of partial visual occlusions and in the presence of changes in appearance resulting from changes in viewpoint and changes in the object itself.

Language plays an interesting role in consciousness.  Language serves as a communications medium among humans.  Language is a way of signaling one person’s internal state to another.  Internally, language plays a role in representing concepts to our internal decision-making system.  We “talk to ourselves” (out loud or subvocally) to give ourselves advice or to explore abstract alternatives.

Things we say to ourselves are often things another person might say to us, e.g., “I don’t think this is such a good idea.”  In effect, our language ability is used in two different ways: to communicate with others and to communicate with ourselves.