030615 – (Heterophenomenological) Consciousness

Sunday, June 15th, 2003

It’s dreary and raining and that may make people a bit depressed.  That, in turn, may make it harder for people to find a satisfactory solution to their problems.   Realizing that, I feel a bit better.  It is sometimes useful to bring something into consciousness so one can look at it.

Although we may not have access to the underlying stimulus events (constellations) that directly determine our feelings, we can learn about ourselves just as we learn about other things and other people.  We can then shine the spotlight of consciousness on our inner state and try to glean what clues we can by careful attention.

When I say we can learn about ourselves, that is to say that we can create an internal model of ourselves and use the predictions of that model to feed back into our decision-making process.  Such feedback has the result of modifying our behavior (as a feedback system does).

The interesting thing about the internal model is that it not only models external behavior, but also models internal state.
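
Since this is essentially a feedback architecture, here is a minimal sketch in Python of what “feeding the self-model’s predictions back into decision-making” might look like.  All of the names and numbers are hypothetical; this illustrates the idea, it is not a claim about how the brain implements it.

    # A minimal sketch of a decision loop that consults an internal model of the
    # agent itself before acting.  The self-model predicts not just external
    # outcomes but the agent's own internal state (e.g., how an option would
    # leave it feeling), and that prediction is fed back into the choice.

    def predicted_internal_state(self_model: dict, action: str) -> float:
        # How has this action affected our internal state in the past?
        # Unknown actions get a neutral prediction.
        return self_model.get(action, 0.0)

    def choose_action(actions: list[str], external_payoff: dict, self_model: dict) -> str:
        # Score = expected external payoff plus predicted effect on internal state.
        def score(a: str) -> float:
            return external_payoff.get(a, 0.0) + predicted_internal_state(self_model, a)
        return max(actions, key=score)

    # Toy usage: "argue" pays off externally, but the self-model predicts it
    # leaves the agent worse off internally, so "walk away" wins.
    actions = ["argue", "walk away"]
    external_payoff = {"argue": 1.0, "walk away": 0.5}
    self_model = {"argue": -2.0, "walk away": 0.3}
    print(choose_action(actions, external_payoff, self_model))  # -> walk away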

Interesting aside: consciousness can be switched on and off.  We can be awake or asleep.  We can be “unconscious”.

What are the design criteria for human beings such that consciousness is an appropriate engineering solution?

Goals:

  • Exist in world.
  • Basic provisioning.  Homeostasis. Obtain fuel.
  • Reproduction.  Mate.  Ensure survival of offspring.

Capabilities Required to Attain Goals:

  • Locomotion.
  • Navigation.
  • Manipulation.

Functions Required to Implement Required Capabilities:

  • Identification of things relevant to implementation of goals.
  • Acquisition of skills relevant to implementation of goals (note that skills may be physical or cognitive).

Capabilities Required to Support Required Functions:

  • Observation.  Primary exteroceptors.
  • Memory.
  • Ability to manipulate things mentally (saves energy).  This includes the ability to manipulate the self mentally.
  • Ability to reduce power consumption during times when it is uneconomical to be active (e.g., sleep at night).

Damasio (1999, p.260) says:

“Homeostatic regulation, which includes emotion, requires periods of wakefulness (for energy gathering); periods of sleep (presumably for restoration of depleted chemicals necessary for neuronal activity); attention (for proper interaction with the environment); and consciousness (so that a high level of planning or responses concerned with the individual organism can eventually take place). The body-relatedness of all these functions and the anatomical intimacy of the nuclei subserving them are quite apparent.”

Well, I have an alternative theory of the utility of sleep, but Damasio’s is certainly plausible and has been around for a while in the form of the “cleanup” hypothesis: that there is something that is generated or exhausted over a period of wakefulness that needs to be cleaned up or replenished, and sleep is when that gets done.  It raises the question of whether sleep is an essential part of consciousness and self-awareness or whether it is a consequence of the physical characteristics of the equipment in which consciousness and self-awareness are implemented.

One talks to oneself by inhibiting (or is it failing to activate) the effectors that would turn ready-to-speak utterances into actual utterances.  In talking to oneself, ready-to-speak utterances are fed back into the speech understanding system.  This is only a slight variation of the process of careful (e.g., public speaking) speech or the process used in writing.  In writing, the speech utterance effectors are not activated and the ready-to-speak stuff is fed into the writing system.
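
The routing described here can be put in rough pseudo-Python.  The function names (understand, speak_aloud, write_down) are invented for illustration; the point is only that one ready-to-speak representation can be sent to different sinks, with inner speech being the case where the effectors are inhibited and the utterance is fed back into comprehension.

    # A rough sketch (purely illustrative names) of the routing idea: the same
    # "ready-to-speak" representation can be sent to the speech effectors, to the
    # writing system, or fed back into the understanding system with the
    # effectors inhibited -- which is the "talking to oneself" case.

    def understand(utterance: str) -> str:
        return f"[heard internally] {utterance}"

    def speak_aloud(utterance: str) -> str:
        return f"[spoken] {utterance}"

    def write_down(utterance: str) -> str:
        return f"[written] {utterance}"

    def route(utterance: str, mode: str) -> str:
        if mode == "speak":
            # Normal speech: the speech effectors fire.
            return speak_aloud(utterance)
        if mode == "write":
            # Writing: speech effectors stay inhibited; text effectors fire.
            return write_down(utterance)
        # Inner speech: no effectors at all; the utterance is fed straight
        # back into the comprehension system.
        return understand(utterance)

    for mode in ("speak", "write", "inner"):
        print(route("I should leave early today", mode))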

But does it always pass through the speech understanding system?  IOW, is it possible to speak without knowing what you are going to say?  Possibly.  Specific evidence: on occasion one thinks one has said one thing and has in fact said something else.  Sometimes one catches it oneself.  Sometimes somebody says, “You said X; don’t you mean Y?” and you say, “Oh, did I say X?  I meant Y.”

Nonetheless, I don’t think it’s necessary to talk to oneself to be conscious.  There are times when the internal voice is silent.  OTOH language is the primary i/o system for humans.  One might argue that language enhances consciousness.  As an aside, people who are deaf probably have an internal “voice” that “talks” to them.  Does talking to yourself help you to work things out?  Does the “voice” “speak” in unexpressed signs?  When a deaf person does something dumb, does he/she sign “dumb” to him/herself?

Is there something in the way pattern matching takes place that is critical to the emergence of consciousness?  The more I think about consciousness, the less certain I am that I know what I am talking about.  I don’t think that is bad.  It means that I am recognizing facets of the concept that I had not recognized before.  That seems to be what happened to Dennett and to Damasio.  They each had to invent terminology to express differences they had discovered.

Ultimately, we need an operational definition of whatever it is that I’m talking about here.  That is the case because at the level at which I am trying to construct a theory, there is no such thing as consciousness.  If there were, we’d just be back in the Cartesian theatre.  Is the question: How does it happen that human beings behave as if they have a sense of self?  I’m arriving at Dennett’s heterophenomenology.  (1991, p.96) “You are not authoritative about what is happening in you, but only about what seems to be happening in you….”

To approach the question of how heterophenomenological consciousness emerges, it is essential to think “massively parallel”.  What is the calculus of the brain?  A + B = ?  A & B ?  A | B ?  A followed by B?  Thinking massive parallelism, the answer could be: all of the above.  It must be the case that serial inputs are cumulatively deserialized.  There’s an ongoing accumulation of history at successively higher levels of abstraction (well, that’s one story, or one way of putting it).  Understanding language seems to work by a process of successive refinement.  Intuitively it’s like A & B in a Venn diagram, but that feels too sharp.

The system doesn’t take “red cow” to mean the intersection of red things with cow things.  The modifier adds specificity to an otherwise unspecified (default) attribute.  So the combination of activation of “red” and the activation of “cow” in “red cow”  leads to a new constellation of activation which is itself available for further modification (generalization or restriction or whatever).  This probably goes on all the time in non-linguistic processing as well.  A pattern that is activated at one point gets modified (refined) as additional information becomes available.  Sounds like a description of the process of perception.
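
A toy sketch of the “red cow” point, with invented frames and attributes: the modifier overrides an otherwise default-valued slot rather than intersecting two sets, and the result is a new constellation that remains open to further refinement.

    # The modifier does not intersect two sets; it overrides a default slot,
    # and the refined frame can itself be modified further.

    DEFAULTS = {"cow": {"color": "brown", "size": "large", "legs": 4}}

    def activate(concept: str) -> dict:
        # Activating a bare concept yields its default attribute constellation.
        return dict(DEFAULTS[concept])

    def refine(frame: dict, **overrides) -> dict:
        # Each modifier adds specificity by overwriting a default slot.
        return {**frame, **overrides}

    cow = activate("cow")                          # {'color': 'brown', ...}
    red_cow = refine(cow, color="red")             # color default overridden
    small_red_cow = refine(red_cow, size="small")  # still open to refinement
    print(small_red_cow)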

Massively parallel, always evolving.  It doesn’t help to start an analysis when the organism wakes up, because the wake-up state is derived from (is an evolution of) the organism’s previous life.  Learning seems to be closely tied to consciousness.  Is it the case that the “degree” of consciousness of an organism is a function of the “amount of learning” previously accumulated by the organism?

We know how to design an entity that responds to its environment.  One example is the PC (Personal Computer).

There’s learning (accumulation of information) and there’s self-programming (modification of processing algorithms).  Are these distinguishable in “higher” biological entities?  Does learning in, say, mammals necessarily involve self-programming?  Is a distinction between learning and self-programming just a conceptual convenience for dealing with von Neumann computers?

There’s “association” and there’s “analysis”.

There is learning and there’s self-programming.  Lots of things happen automatically.  Association and analysis.  Segmentation is important: chunking is a common mechanism.  Chunking is a way of parallelizing the processing of serial inputs.  Outputs of parallel processors may move along as chunks.  Given that there’s no Cartesian observer, every input is being processed for its output consequences.  And every input is being shadow-processed to model its consequences, and the modeled consequences are fed back or fed along.  Associations are also fed back or fed along.  In effect there is an ongoing assessment of what Don Norman called “affordances”, i.e., what can be done in the current context?  The model projects alternate futures.  The alternate futures coexist with the current inputs.  The alternate futures are tagged with valences.  Are these Dennett’s “multiple drafts”?  I still don’t like his terminology.  Are the alternate futures available to consciousness?  Clearly sometimes.  What does that mean?  It is certainly possible for a system to do load balancing and prioritization if there is additional processing power available or if processing power can be reassigned to a particular problem.  Somehow, I don’t think it works that way.  Maybe some analyses are dropped or, more likely, details are dropped as a large freight train comes roaring through.  Tracking details isn’t much of a problem because of the constant stream of new inputs coming in.  Lost details are indeed lost, but most of the time, so what?
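
Here is a crude sketch of the shadow-processing idea in Python.  The contexts, actions, and valences are all made up; the point is only that alongside the current input the system keeps a set of projected futures, each tagged with a valence, and these amount to the affordances of the moment.

    # Alongside handling the current input, the system projects a few alternate
    # futures, tags each with a valence, and keeps them around as candidate
    # affordances ("what can be done in the current context?").

    from dataclasses import dataclass

    @dataclass
    class Future:
        action: str
        predicted_outcome: str
        valence: float  # positive = attractive, negative = aversive

    def project_futures(context: str) -> list:
        # In a real system this would come from a learned forward model;
        # here the associations are hard-coded for illustration.
        table = {
            "kitchen": [
                Future("open fridge", "food available", +1.0),
                Future("touch stove", "burned hand", -2.0),
            ],
        }
        return table.get(context, [])

    def affordances(context: str) -> list:
        # Keep the futures, ordered by valence, so the most attractive
        # option is the most salient.
        return sorted(project_futures(context), key=lambda f: f.valence, reverse=True)

    for f in affordances("kitchen"):
        print(f.action, "->", f.predicted_outcome, f.valence)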

Language output requires serialization, as do certain motor skills.  The trick is to string together a series of sayings that are themselves composed of ordered (or at least coordinated) series of sayings.  Coordination is a generalization of serialization because it entails multiple parallel processors.  Certainly, serial behavior is a challenge for a parallel organism, but so are all types of coordinated behavior.  Actions can be overlaid (to a certain extent; for example: walk and chew gum; ride a horse and shoot; drive and talk; etc.).  We can program computers in a way that evolution cannot hardwire organisms.  On the other hand, evolution has made the human organism programmable (and even self-programmable).  Not only that, we are programmable in languages that we learn and we are programmable in perceptual-motor skills that we practice and learn.  Is there some (any) reason to think that language is not a perceptual-motor skill (possibly writ large)?

Suppose we believe that learning involves modifications of synaptic behavior.  What do we make of the dozen or so neurotransmitters?  Is there a hormonal biasing system that influences which transmitters are most active?  Is that what changes mode beyond just neural activity in homeostatic systems?  Otherwise, does the nature of neuronal responses change depending on the transmitter mix, and can information about that mix be communicated across the synaptic gap?  These are really not questions that need to be answered in order to create a model of consciousness (even though they are interesting questions) but they do serve as a reminder that the system on which consciousness is based is only weakly understood and probably much more complicated even than we think (and we think it’s pretty complicated).

I seem to have an image (well, a paradigm) in mind involving constraints and feature slots, but I don’t quite see how to describe it as an algorithm.  This is a pipelined architecture, but with literally millions of pipelines that interact locally and globally.  The answer to “what’s there?” or “what’s happening?” is not a list, but a coruscating array of facets.  It is not necessary to extract “the meaning” or even “a meaning” to appreciate what is going on.  A lot of the time, nothing is “going on”; things are what they are and are not changing rapidly.

Awareness and attention seem to be part of consciousness.  One can be aware of something and not pay attention to it.  Attention seems central: the ability to select or emphasize certain input (and/or output) streams.  What is “now”?  It seems possible to recirculate the current state of things.  Or just let them pass by.  Problem: possible how?  What “lets” things pass by?  The Cartesian observer is so seductive.  We think we exist and watch our own private movie, but it cannot happen that way.  What is it that creates the impression of “me”?  Yes, it’s all stimulus-response, but the hyphen is where all the state information is stored.  What might give the impression of “me”?  I keep thinking it has something to do with the Watzlawick et al. [Pragmatics of Human Communication, 1967] idea of multiple models.  This is the way I see you.  This is the way I see you seeing me.  This is the way I see you seeing me seeing you.  And then nothing.  Embedding works easily once: “The girl the squirrel bit cried.”  But “The girl the squirrel the boy saw bit cried” is pathological.

As a practical matter, if we want to create an artificial mind, we probably want to have some sort of analog to the homunculus map in order to avoid the problem of having to infer absolutely everything from experience.  That is, being able to refer stimuli to an organism-centric and gravity-aware coordinate system goes a long way towards establishing a lot of basic concepts: up-down, above-below, top-bottom, towards-away, left-right, front-back.  Add an organism/world boundary and you get inside-outside.  I see that towards-away actually cheats in that it implies motion.  Not a problem, because motion is change of position over time, and with multiple temporal snapshots (naturally produced as responses to stimuli propagate through neural fields), motion can be pretty easily identified.  So that gets things like fast-slow, into-out of, before-after.  We can even get to “around” once the organism has a finite extent to get around.
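
A small sketch of how the organism-centric, gravity-aware frame plus temporal snapshots yields these concepts.  The coordinate conventions and thresholds are invented: z is treated as the gravity axis and the organism sits at the origin.

    # Basic spatial concepts from an egocentric frame: up-down from the gravity
    # axis, towards-away and fast-slow from the change in egocentric position
    # between two timestamped snapshots.

    import math

    def describe(p_then: tuple, p_now: tuple, dt: float) -> dict:
        # Positions are (x, y, z) in the organism's own frame: z is "up"
        # (gravity axis), and the origin is the organism itself.
        dist_then = math.dist((0, 0, 0), p_then)
        dist_now = math.dist((0, 0, 0), p_now)
        speed = math.dist(p_then, p_now) / dt
        return {
            "above": p_now[2] > 0,
            "towards": dist_now < dist_then,   # getting closer to the organism
            "fast": speed > 1.0,               # arbitrary threshold
        }

    # A thing that was 3 m away and slightly above is now 1 m away: it is
    # above, moving towards us, and moving fast.
    print(describe((3.0, 0.0, 0.5), (1.0, 0.0, 0.5), dt=1.0))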

What would we expect of an artificial mind?  We would like its heterophenomenology to be recognizably human.  What does that mean?  Consider the Turing test.  Much is made of the fact that certain programs have fooled human examiners over some period of time.  Is it then the case that the Turing test is somehow inadequate in principle?  Probably not.  At least I’m not convinced yet that it’s not adequate.  I think the problem may be that we are in the process of learning what aspects of human behavior can be (relatively) easily simulated.  People have believed that it is easy to detect machines by attempting to engage them in conversation about abstract things.  But it seems that things like learning and visualization are essential to the human mind.  Has anyone tried things like: imagine a capital A.  Now, in your imagination, remove the horizontal stroke and turn the resulting shape upside down.  What letter does it look like?

Learning still remains an intractable problem.  We don’t know how it takes place.  Recall is equally dicey.  We really don’t seem to know any more about learning skills than we do about learning information.  We’re not even very clear about memorizing nonsense syllables, for all the thousands of psychological experiments involving them.  Is learning essential to mind?  Well, maybe not.  Henry can’t learn any conscious facts, and he clearly has a mind (no one I know of has suggested otherwise).  Okay, so there could be a steady state of learning.  The ability to learn facts of the kind Henry Molaison couldn’t learn isn’t necessary for a mind to exist.  We don’t know whether the capacity for perceptual-motor learning is necessary for a mind to exist.  Does a baby have a mind?  Is this a meaningful question?  If not, when does it get one?  If so, when did it develop?  How?

It begins to feel like the problem is to figure out what the question should be.  “Consciousness” seems not to be enough.  “Mind” seems ill-defined.  “Self-awareness” has some appeal, though I struggle to pin down what it denotes: clearly “awareness” of one’s “self”, but then what’s a “self” and what does “awareness” mean?  Surely self-awareness means 1) there is something that is “aware” (whatever “aware” means), 2) that thing has a “self” (whatever “self” means), and 3) that thing can be and is “aware” of its “self”.  A person could go crazy.

Is this a linguistic question, or rather a metalinguistic question: what does “I” mean?  What is “me”?  In languages that distinguish a “first person” it would appear that these questions can be asked.  And by the way, what difference does it make if the language doesn’t have appropriate pronouns and resorts to things like “this miserable wretch begs forgiveness”?  Who’s doing the begging?  No.  That’s not the question.  What’s doing the begging?  Heterophenomenologically, it doesn’t matter whether I say “I” referring to myself or referring to another person.  Except that it has for me a special meaning when it refers to “my self”, and that special meaning is appreciated, that is, understood, by others hearing “me” say it.

I don’t know anything about children learning what “I” and “me” refer to.  I remember reading something about an (autistic, I think) child who referred to himself in the third person, for example: “He’s thirsty.”

Consciousness seems to require inputs.  That is, one cannot just “be conscious” rather one must “be conscious of” things.  That sounds a bit forced, but not if it is precisely the inputs that give rise to consciousness.  No inputs, no consciousness.  Something in the processing of inputs gives rise to the heterophenomenological feeling of being conscious.

Does self-awareness have to do with internal models?  Does the organism have an internal model of the universe in which it exists?  Does that model include, among the entities modeled, the organism itself?  And is it necessary that the model of the organism include a model of the internal model of the universe and its component model of the organism?  It may not be an infinite series.  In fact it can’t be.  The brain (or any physical computer) has finite capacity.
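
A tiny sketch of why the nesting has to bottom out: each level of “my model of the world, which contains a model of me, which contains a model of my model of the world…” costs capacity, so a finite system truncates the recursion at some fixed depth.  The structure below is purely illustrative.

    # Depth-limited nesting of the self-model inside the world-model.

    def build_self_model(depth: int) -> dict:
        if depth == 0:
            # Below this level the model is just a stub, not a further model.
            return {"world": "...", "me": "..."}
        return {
            "world": "model of the universe",
            "me": {"model_of_world_and_self": build_self_model(depth - 1)},
        }

    # Two or three levels is about all the embedding we seem to manage.
    print(build_self_model(2))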

But doesn’t a model imply someone or something that makes use of the model?  We keep coming back to metaphors that encourage the Cartesian fallacy.

Let’s think computer systems design.  Hell, let’s go all the way, let’s think robot design.  The robot exists in a universe.  The robot’s program receives inputs from its exteroceptors about the state of the universe, and those inputs, suitably processed, are abstracted into a set of signals representing the inputs — in fact representing the inputs over a period of time.  The same thing is happening with samples from the interoceptors monitoring the robot’s internal mechanical state: position of limbs, orientation, inertial state (falling, turning, whatever), battery/power level, structural integrity.
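
As a sketch, the sensor side described here might be summarized in a couple of data structures.  The field names are invented; the only point being illustrated is that raw exteroceptor and interoceptor samples are abstracted into a state that covers a window of time rather than a single instant.

    # Raw sensor samples abstracted into summary internal and external states.

    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class InternalState:
        battery_level: float      # 0.0 .. 1.0
        falling: bool
        limb_positions: tuple

    @dataclass
    class ExternalState:
        nearest_obstacle_m: float
        light_level: float

    def summarize(samples: list) -> float:
        # Abstraction over a period of time, here just a moving average.
        return mean(samples)

    battery_samples = [0.42, 0.41, 0.40, 0.39]
    internal = InternalState(battery_level=summarize(battery_samples),
                             falling=False,
                             limb_positions=(0.0, 0.3, -0.1))
    external = ExternalState(nearest_obstacle_m=1.8, light_level=0.7)
    print(internal, external)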

On the goals side, based on the internal state, the robot has certain triggers: not action triggers, but propensity triggers.  For example: when the internal power level or the internal power reserves fall below a particular threshold, the goal of increasing power reserves is given increased priority.  But we do not assume that the robot has a program that specifies exactly what to do in this state.  The state should trigger increased salience (whatever that means) and attention to things in the current environment that are (or have been, in the learned past) associated with successful replenishing of power reserves.
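
A sketch of the propensity-trigger idea, with made-up thresholds and associations: low power reserves do not select an action directly; they raise the priority of the replenishment goal, and that raised priority boosts the salience of things previously associated with recharging.

    # Low power does not dictate an action; it raises a goal priority, which in
    # turn raises the salience of percepts associated with that goal.

    LOW_POWER_THRESHOLD = 0.25

    # Learned associations between percepts and the recharge goal (hypothetical).
    RECHARGE_ASSOCIATION = {"charging dock": 0.9, "wall outlet": 0.6, "chair": 0.0}

    def goal_priorities(battery_level: float) -> dict:
        priorities = {"explore": 0.5, "recharge": 0.2}
        if battery_level < LOW_POWER_THRESHOLD:
            priorities["recharge"] = 1.0   # a propensity, not a fixed action
        return priorities

    def salience(percepts: list, priorities: dict) -> dict:
        # Salience = base salience + (goal priority x learned association).
        return {p: 0.1 + priorities["recharge"] * RECHARGE_ASSOCIATION.get(p, 0.0)
                for p in percepts}

    print(salience(["charging dock", "chair"], goal_priorities(battery_level=0.15)))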

At all times, the important question is: “What do I do now?”  The answer to this question helps determine what needs “attention” and what doesn’t need “attention”.  As a first approximation, things not “associated” with current priority goals are not attended to.  Well, it’s not quite as simple as that.  Things that don’t need attention, even though they are associated with an ongoing task (like walking or driving), don’t get attention processing.  Attention is the assignment of additional processing power to something.  Additional processing power can boost the signal level to above the consciousness threshold and can reduce the decay rate of attended signals.
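
That last sentence is concrete enough to sketch.  All the constants below are invented: attended signals get a gain boost that can push them over a hypothetical “consciousness” threshold, and they decay more slowly than unattended ones.

    # Attention modeled as extra gain plus a slower decay rate.

    CONSCIOUS_THRESHOLD = 1.0

    def evolve(signal: float, attended: bool, steps: int) -> list:
        gain = 1.5 if attended else 1.0       # extra processing power
        decay = 0.7 if attended else 0.4      # attended signals decay more slowly
        level = signal * gain
        trace = []
        for _ in range(steps):
            trace.append(level)
            level *= decay
        return trace

    unattended = evolve(0.9, attended=False, steps=4)
    attended = evolve(0.9, attended=True, steps=4)
    print([x > CONSCIOUS_THRESHOLD for x in unattended])  # never crosses threshold
    print([x > CONSCIOUS_THRESHOLD for x in attended])    # briefly crosses threshold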

No one has succeeded in explaining why heterophenomenological evidence indicates that people sometimes feel “conscious”, that when they don’t feel “conscious” they don’t “feel” anything, and that they shift back and forth.  It’s a processing thing.  If I close my eyes and lie quietly, I’m not asleep.  I still hear things.  I can still think about things.  So consciousness can be turned on and off in the normal organism.  What’s going on here?  Understanding the neural connections won’t do it.  We would need to know what the connections “do”; how they “work”.

Sleep.  In effect, the organism can “power down” into a standby state (for whatever evolutionary reason).  If the threshold for external events is set high, most of them won’t make an impact (have an effect).  It’s like a stabilized image on the retina.  It disappears — well, it fades.  No change equals no signal.  If there’s nothing to react to, the organism, well, doesn’t react.
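
A sketch of the “power down” idea with invented thresholds: the organism reacts only to changes in its inputs, and in standby the change threshold is raised so that ordinary fluctuations no longer register while a loud enough event still does.

    # Standby as a raised change-detection threshold: no change, no signal.

    def reacts(previous: float, current: float, asleep: bool) -> bool:
        threshold = 5.0 if asleep else 0.5   # standby raises the threshold
        return abs(current - previous) > threshold

    print(reacts(10.0, 10.8, asleep=False))  # awake: a small change registers -> True
    print(reacts(10.0, 10.8, asleep=True))   # asleep: the same change is ignored -> False
    print(reacts(10.0, 17.0, asleep=True))   # asleep: a big enough event still wakes -> True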

If outside inputs are suppressed, where do daydream inputs come from?  Not a critical question, but an interesting one.  Somebody pointed out that so-called “dream paralysis” is a good thing in that it keeps us from harming ourselves or others in reaction to dream threats or situations.