Archive for January, 2003

030105 – Wants

Sunday, January 5th, 2003

One of the most central and most refractory problems of all theoretical models of human behavior is the problem of wants.  What is a want?  What makes this a difficult problem is that everybody knows what it means to want something.  But from a modeling standpoint, what does it mean?  Wanting is fundamental.  Can there even be behavior without wants?  I think not.  Can non-human animals be said to have wants?  I think so.

That being the case, what is different (if anything) about human wants?  Wants are in many cases related to biological needs, e.g., food, water, excretion of wastes.  Wants are also associated with biological imperatives that fall short of being needs (where a need must be met or the organism will perish).  The only biological imperative I can think of at the moment is sex, without which an organism does not itself perish but leaves no offspring.

Given that there is no Cartesian observer or meaner in the brain, the question of wants becomes even more important.  Dennett (1991) talks about some kind of system to determine what to think about next.  Jumping off from his analysis, it seems like evolution has created an on-idle loop that thinks about things whenever there’s nothing urgent to deal with at the moment.  The evolutionary advantage this confers [I thought there was a word con-something that would work there, but I couldn’t think of it at first.  Eventually, I found it, and there it is.] is that idle-time thinking may result in elaborating strategies that make the organism fitter when urgent situations do occur.  That is, idle-time thinking is sort of like ongoing fire-drills, or contingency planning.  You never know when having thought about something or learned something will come in handy.
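Purely as an illustration of what such an on-idle loop might look like, here is a toy sketch in Python.  Everything in it (the event queue, the list of scenarios, the cached plans) is invented for the example; it is not a claim about how brains actually do this.

```python
import queue

# Invented names throughout; a toy of the "on-idle loop" idea, not a brain model.

urgent_events = queue.Queue()   # real-world demands on attention
cached_plans = {}               # contingency plans built up during idle time
scenarios = ["predator appears", "food runs out", "shelter floods"]

def rehearse(scenario):
    # Stand-in for idle-time thinking: work out a response in advance.
    return f"rehearsed response to '{scenario}'"

def run_one_cycle():
    try:
        event = urgent_events.get_nowait()           # anything urgent right now?
        return cached_plans.get(event, "improvise")  # use a rehearsed plan if we have one
    except queue.Empty:
        # Nothing urgent: spend the cycle on contingency planning ("fire drills").
        scenario = scenarios[len(cached_plans) % len(scenarios)]
        cached_plans[scenario] = rehearse(scenario)
        return None

run_one_cycle()                          # idle cycle: rehearses "predator appears"
urgent_events.put("predator appears")
print(run_one_cycle())                   # urgent cycle: the cached plan pays off
```

The point is only that cycles spent rehearsing during quiet periods pay off as ready-made responses when an urgent event finally arrives.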

Still, wanting is problematical.

A lot of AI sidesteps the problem.  Programs that are designed to understand and paraphrase text want to understand and paraphrase text because that is what they are designed and programmed to do.  Such programs do not produce as output, “I’m tired of this work, let’s go out, have a few beers, and talk about life” (unless of course, that is a paraphrase of some corpus of input text).

So, maybe it makes sense to try to figure out what we want AI devices to want.  Self-preservation is good. (Oops, now we hit one of the problems Asimov’s Laws of Robotics address: we don’t want AI entities to preserve themselves at the expense of allowing humans to come to harm, although presumably we don’t mind if they inveigle themselves into our affections so we are unwilling / unlikely / not disposed to turn them off.)

At least self-preservation is good in a Mars rover.  It may not be good in a military robot, although military robots will presumably continue to be expensive, so we don’t want them to risk their existence casually.
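To make the ordering of such wants concrete, here is a toy sketch, with invented scoring and invented numbers, in which candidate actions are ranked first by harm to humans, then by risk to the robot itself, and only then by mission progress, roughly in the spirit of Asimov’s ordering.  It is not any published robot architecture.

```python
# Toy illustration: self-preservation matters, but never outranks human safety.
# Candidate actions are compared lexicographically: harm to humans first,
# then risk to the robot itself, then (negated) mission progress.

def rank_key(action):
    # Lower is better for harm and risk; higher is better for progress.
    return (action["harm_to_humans"], action["risk_to_self"], -action["mission_progress"])

actions = [
    {"name": "cross the crater", "harm_to_humans": 0, "risk_to_self": 0.8, "mission_progress": 0.9},
    {"name": "go the long way",  "harm_to_humans": 0, "risk_to_self": 0.1, "mission_progress": 0.6},
]

best = min(actions, key=rank_key)
print(best["name"])   # "go the long way": equal human safety, far less risk to self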

Is fear what happens when the Danger-let’s-get-the-hell-out-of-here subsystem is screaming at the top of its lungs and we are not getting the hell out of there?

In our universe, for an organism to exist, it must be the offspring of a previous organism.  This trivial fact is called Evolution and much is made of it.  Although it is incorrect to attribute volition to Evolution, it does not do violence to reality to assert that Evolution is the name we give to the continued existence of things that have been able to reproduce.  Moreover, observation teaches that the more complex such things are, the more complex are the processes through which those things reproduce.

It does not make much sense to say that a bacterium or a virus wants to reproduce, although it does reproduce when conditions are favorable.  For that matter, it doesn’t make much sense to say that a bacterium or a virus wants to do anything.  I guess that means we think of wanting as something that we are aware of: something that rises to the level of consciousness—an attribute we do not apply to bacteria or viruses.  So here we are with propositional attitudes, which linguistically seem to come in at least two flavors: indicative and subjunctive.

030103 – Consciousness and the Self

Friday, January 3rd, 2003

To the extent that humans (or any beings as yet unknown to us, like space aliens, say, or sophisticated AIs) have any views at all on the topic, they will believe that they have free will.  The argument is relatively simple: I believe that any intelligent being will have as a part of its intelligence an internal model of the physical universe (to whatever level of detail is appropriate) that it uses consciously to assess possible courses of action in anticipation of selecting one for execution.  Implied in such a model is a model of the being itself.  This enables analyses of the form, “If I do X, how will I feel about that?”

The model of the self must be contained in the organism as must the larger model of the physical universe.  This ensures that the model cannot model the organism itself with complete accuracy.  To do so would require that the model of the organism include a model of the model of the organism and that model would in turn have to contain a model of the organism and so on ad infinitum.  Thus, the model of the self cannot be 100% accurate.  In effect, one will make inaccurate predictions about one’s own behavior.  Stated another way, no one can know with absolute certainty what he or she will do in a particular set of circumstances.  We experience this as making up our minds at the last minute or as having free will.
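One way to make the regress vivid is a toy calculation in which a predictor can model its own reasoning only to some finite depth; the prediction it makes about its own choice then depends on where the regress is cut off.  This is only an illustration with invented rules, not a model of any actual mind.

```python
# Illustrative only: a self-model that must contain a model of itself can be
# represented only to some finite depth, so its predictions about the
# organism's own choices are necessarily approximate.

def predict_choice(options, self_model_depth):
    """Predict which option 'I' would pick, reasoning about my own reasoning
    only self_model_depth levels deep."""
    if self_model_depth == 0:
        # The regress has to stop somewhere: fall back on a crude rule.
        return options[0]
    # Model myself modeling myself, one level shallower each time.
    provisional = predict_choice(options, self_model_depth - 1)
    # A deeper model may second-guess the shallower one.
    if len(options) == 1:
        return provisional
    return options[(options.index(provisional) + 1) % len(options)]

# The prediction depends on how deep the (necessarily finite) self-model goes:
print(predict_choice(["stay", "go"], 2))  # "stay"
print(predict_choice(["stay", "go"], 3))  # "go"
```

The instability of the answer as the cutoff moves is the toy analogue of not quite knowing what one will do until one does it.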

The Mind – The Inner Voice

It is by no means clear or self-evident why each of us should have within us a voice that we use sometimes for the purpose of planning things and sometimes for the purpose of commenting on the world around or within us.  Up to the present, all reports of this inner voice have been subjective.  It is interesting to speculate that there may come a time when brain activity recording will become sufficiently sensitive and sophisticated to enable us to identify and even record in some way the “utterances” of this voice.  I rush to assert, however, that we are a long way, measured in decades, from the ability to listen in on the contents of someone else’s comic-book thought balloons.

So, for the time being, the little voice remains private to each individual.

Why do we “hear” this voice?  We know that it is not external.  It does not make a sound.  What do we know about it?  It speaks in whatever language we choose to have it speak.  Sometimes it is silent.  Sometimes it is next to impossible to make it be silent, as for example when it decides in the middle of the night to rehearse all the things you should have said to whomever it was you should have said them to.  When you write something, it says the words to you and you transfer them to the paper or type them at the computer as or just after it says them.

Systems Analysis

Suppose, in evolutionary terms, that we are modifying an organism that operates purely in a simple stimulus-response fashion (whatever that means).  We want to improve it in such a way that it can “anticipate” or “plan ahead” in some sense.  A reasonably parsimonious approach might be to recruit brain structures to produce internal representations of possible future states and inject them into the stimulus-response arc (the decision-making system) as additional inputs that would be distinguishable from direct real-world inputs but would somehow carry at least some of the weight of current real-world inputs.

In general, the organism should not confuse these forward-looking inputs with real-world inputs.  Dreams should not be confused with reality.

In the simplest form, such a system would give the organism the ability to perform Gedanken experiments on its environment.  That is, instead of physically trying a strategy to determine its outcome, the organism would be able to “imagine” the outcome and evaluate it against other possible strategies and outcomes.

To accomplish this requires an internal model of the external environment, one that represents physical objects and, at least approximately, their relevant physical properties.
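As a minimal sketch of this arrangement, imagine a decision routine that receives percepts tagged as real or imagined, runs candidate actions through an internal world model, and picks the action whose imagined outcome it evaluates most favorably.  All of the names and the toy “physics” below are invented for the example.

```python
from dataclasses import dataclass

# A minimal sketch: imagined states feed the same decision machinery as real
# sensory states, but carry a tag so the two are never confused, and candidate
# strategies are evaluated against an internal model of the world instead of
# being tried out physically.

@dataclass
class Percept:
    state: dict
    imagined: bool    # the tag that keeps "dreams" distinct from reality

def world_model(state, action):
    # Stand-in for the internal model of the environment: predict the next
    # state if 'action' were taken.  Here, a trivial hand-built rule.
    distance = state["distance_to_food"]
    if action == "approach":
        distance = max(0, distance - 1)
    return {"distance_to_food": distance}

def evaluate(state):
    # Crude "how would I feel about that?" score.
    return -state["distance_to_food"]

def choose(real_percept, candidate_actions):
    # Gedanken experiment: imagine each action's outcome and pick the best,
    # without ever acting in the real world.
    imagined = [
        (a, Percept(world_model(real_percept.state, a), imagined=True))
        for a in candidate_actions
    ]
    best_action, _ = max(imagined, key=lambda pair: evaluate(pair[1].state))
    return best_action

here_and_now = Percept({"distance_to_food": 3}, imagined=False)
print(choose(here_and_now, ["approach", "wait"]))   # "approach"
```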

When I first wrote the above, I had not thought much about the nature of the model that is required.  It would seem that the model, if it is “automatic” or “unconscious” (which is what I think it’s reasonable to assume was true at least initially in evolutionary terms), must be of the PHEPH (post hoc ergo propter hoc) type that is easy for neuro-glial circuits to implement.
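As a guess at what a PHEPH-style model amounts to, here is a toy version: the system merely counts which event tends to follow which, and later expects the most frequent successor.  Nothing in it distinguishes genuine causation from mere temporal succession, which is exactly the post-hoc-ergo-propter-hoc character suggested above.  The events and counts are invented for the example.

```python
from collections import defaultdict

# Toy "post hoc ergo propter hoc" model: strengthen a link from each event to
# whatever event follows it, then predict the most frequent successor.

successor_counts = defaultdict(lambda: defaultdict(int))

def observe(sequence_of_events):
    for earlier, later in zip(sequence_of_events, sequence_of_events[1:]):
        successor_counts[earlier][later] += 1

def expect_after(event):
    followers = successor_counts[event]
    if not followers:
        return None
    return max(followers, key=followers.get)

observe(["clouds", "rain", "wet ground"])
observe(["clouds", "rain", "wet ground"])
observe(["rooster crows", "sunrise"])          # succession mistaken for cause

print(expect_after("clouds"))          # "rain"
print(expect_after("rooster crows"))   # "sunrise"
```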

It is advantageous to an organism to be able to abstract invariants from the environment, e.g., object constancy in the presence of partial visual occlusion and of changes in appearance due to viewpoint or to changes in the object itself.

Language plays an interesting role in consciousness.  Language serves as a communications medium among humans.  Language is a way of signaling one person’s internal state to another.  Internally, language plays a role in representing concepts to our internal decision-making system.  We “talk to ourselves” (out loud or subvocally) to give ourselves advice or to explore abstract alternatives.

Things we say to ourselves are often things another person might say to us, e.g., “I don’t think this is such a good idea.”  In effect, our language ability is used in two different ways: to communicate with others and to communicate with ourselves.