Archive for the ‘evolution’ Category

030828 – Are human beings rational?

Thursday, August 28th, 2003

My wife asked an interesting question: Do I think that human beings are inherently rational?  I think the answer is emphatically no.  Human beings have the ability to learn procedures.  One of the procedures that human beings have discovered, found useful, and passed along culturally is the procedure of logical analysis or logical thinking.  The fact that in many cases logic enables us to find good solutions to certain classes of significant problems ensures that logical analysis will be one of the procedures activated as a candidate for execution in a broad range of external circumstances and internal states.

What strikes me is that evolution’s selection of organisms with greater and greater ability to learn and apply procedural patterns has resulted in an organism that is capable of learning to simulate serial computations, at least on a limited scale.  Certainly it was Dennett who put this idea into my mind, but I do not believe that he arrived at this conclusion by the same path that I did.

This raises an interesting question: what kind of pattern and procedural learning capabilities are required in order to be able to simulate serial computations or, more precisely, to be able to learn and execute a logical thinking pattern?  Human beings certainly aren’t much in the way of serial computers.  We’re not fast.  We’re not computationally adept.  We don’t have a lot of dynamic memory.  Our push down stack for recursion seems to be limited to one level.  (The fact that we must use the logical thinking pattern to analyze pathological sentences like, “The pearl the squirrel the girl hit bit split,” rather than the (unconscious) language understanding pattern simply underlines this limitation on our capability for recursion.)
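
Just to make the stack limitation concrete, here is a toy sketch (in Python, which is obviously not how the brain does it) of what parsing a center-embedded sentence demands: every noun phrase opens a clause that cannot be closed until its verb arrives, so the parser must keep several incomplete clauses pending at once.  The function and the word lists are purely illustrative.

```python
# Sketch: why center-embedded sentences demand a deeper "push-down stack"
# than everyday language use seems to require.  Purely illustrative.

def parse_center_embedded(nouns, verbs):
    """Parse a clause of the form N1 N2 ... Nk Vk ... V2 V1.

    Each noun opens a clause that stays incomplete until its verb arrives,
    so all open clauses must be held on a stack simultaneously.
    Returns the maximum stack depth and the (subject, verb) pairings.
    """
    stack = []
    deepest = 0
    for noun in nouns:                  # "the pearl", "the squirrel", "the girl"
        stack.append(noun)
        deepest = max(deepest, len(stack))
    pairs = []
    for verb in verbs:                  # "hit", "bit", "split" close clauses inside-out
        pairs.append((stack.pop(), verb))
    return deepest, pairs

depth, pairs = parse_center_embedded(
    ["the pearl", "the squirrel", "the girl"], ["hit", "bit", "split"])
print(depth)   # 3 -- three clauses open at once, well beyond "one level"
print(pairs)   # [('the girl', 'hit'), ('the squirrel', 'bit'), ('the pearl', 'split')]
```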

So, is human language ability the result of the evolution of ever more sophisticated procedural pattern learning capabilities?  Is the driving force behind the evolution of such enhanced procedural pattern learning the advantage obtained by the organisms who best understand their conspecifics?  Is this evolution’s de facto recognition that, brawn being equal, better brains confer a reproductive advantage?  Now if better understanding of one’s conspecifics is the goal, language ability may just fall out automatically, because if one has a mechanism that can build a model of others, it makes it a lot easier to figure out what the other intends or is responding to.

Clearly, since the ability to take the viewpoint of another person does not manifest itself in children until some time after they have acquired at least the rudiments of language, that ability cannot be a prerequisite for acquiring those rudiments.  There seems to be a subtle distinction to be made here: when daddy says “hudie” (the Chinese equivalent of “butterfly”) and looks at, or taps, or points to a butterfly or a representation of a butterfly, something has to help the child attend to both the butterfly instance and the sound.  That something may be the emerging model of the other.  Or maybe it’s the other way around, as I suggested earlier: the trick is for the parent to take advantage of his or her own model of the child in order to intuitively construct or take advantage of a situation in which both the butterfly and the sound of the word will be salient to the child.

Still, I keep coming back to the idea that the internal model of the other is somehow crucial, and even more crucial is the idea that the internal model of the other contains the other’s model of others.  As I think about it, though, it seems to me that creating an internal pattern, that is to say learning a pattern, based on experience and observation of the behavior of another organism is not a capability that is uniquely human.  It would seem to be a valuable ability to have.  What seems to be special about the patterns we humans develop of other people is that we attribute to the other a self.  An animal can get a long way without attributing a self (whatever that means) to other creatures with which it interacts.

030826 – Parallel processing in the mind

Tuesday, August 26th, 2003

I don’t know if it originated with Grossberg, but I like the concept of complementary processing streams.  Actually, he talks about it as if it always involves a dichotomy.  Could it not also be multiple (any number of) parallel streams?  Certainly, the convergence of inputs from a large number of brain areas on the amygdala indicates that it’s not just dichotomous streams.

Grossberg writes as if he is describing exactly what happens—especially with his neural circuit diagrams, but the more I read, the more they seem fanciful.  Certainly there’s something missing when the diagrams only show neurons in layers 2/3, 4, and 6.

It also seems that there’s something missing from the analysis of the visual “what” pathway.  Edge and surface processing seem very closely tied throughout.  In visual area V1, the “blob” neurons are surrounded by “interblob” neurons, and in visual area V2, the “thin stripe” neurons alternate with the “interstripe” neurons.  Surely there is some crosstalk between (among) the channels.

Grossberg uses the term “catastrophic forgetting”.  He also talks about complementary channels of processing in the brain.  And he, among others, talks about a “where” channel to the parietal lobe and a “what” channel to the temporal lobe.  Things then get a little confused.  Part of the point of “catastrophic forgetting” is, in effect, that certain memories need to get overwritten, e.g., memories of where a particular movable object is located.  In contrast, other memories should not be easily forgotten.

It is not clear that the categories “easily overwritable” and “not easily overwritable” (or should it be “things that change often” and “things that don’t often change”?) are the same as “where” and “what”.  It’s certainly possible from an evolutionary standpoint that what and where are sufficiently essential aspects of the environment that they should be per se ensconced in genetically determined neural structures.  Nonetheless, it is reasonable to ask whether what evolution has provided is also being used in ways unrelated to its evolutionarily determined functionality.
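
One crude way to picture the “things that change often” versus “things that don’t” split is as two traces of the same object updated at very different rates: a fast trace for where it is right now, a slow trace for what it is.  The learning rates and the exponential-moving-average rule below are my own illustrative assumptions, not anything from Grossberg.

```python
# Sketch: one easily overwritten trace ("where") and one change-resistant
# trace ("what") for the same object.  Rates are illustrative assumptions.

def blend(old, new, rate):
    """Exponential moving average: rate near 1 overwrites, rate near 0 preserves."""
    return (1 - rate) * old + rate * new

where_rate, what_rate = 0.9, 0.05       # fast overwrite vs. slow drift

where_trace = 0.0                       # remembered position of a movable object
what_trace = 1.0                        # remembered size (a stand-in for identity)
for observed_position, observed_size in [(2.0, 1.0), (5.0, 1.1), (1.0, 0.9)]:
    where_trace = blend(where_trace, observed_position, where_rate)
    what_trace = blend(what_trace, observed_size, what_rate)

print(round(where_trace, 2))            # 1.37 -- dominated by the most recent sightings
print(round(what_trace, 2))             # 1.0  -- barely moved from its long-term value
```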

Or, alternatively, given that evolution has cobbled together mechanisms capable of recording information with differing degrees of environmental permanence, it seems reasonable to suppose that the same mechanism could show up in different places, although I am well aware that the essentially opportunistic functioning of evolution leaves open the possibility that the same function is performed in many different ways.  Still, in our environment and the environment of our animal ancestors, some things change rapidly and some things don’t.

030819 – Emotion and incentive

Tuesday, August 19th, 2003

I really don’t like Joseph LeDoux’s (2002) use of the words emotion and incentive.  He uses emotion to mean just about anything that can affect synaptic plasticity, that is, he defines the term backwards.  That doesn’t work because we don’t know what can affect synaptic plasticity, but we do have a good idea of what we think emotion means.

Similarly, incentive.  To my mind an incentive is a conditional promise of reward in the future.  It takes the form, “if you do this you’ll get that.”  The term is a bit confusing in ordinary speech.  Management announces an incentive program whereby workers who exceed their quotas will receive a significant bonus.  The announcement serves as the incentive for employees to work harder.

Hans-Lukas Teuber (Chair of the M.I.T. Psychology Department while I was getting my Ph.D. there) liked to tell the story of the monkey and the “consolation prize.”  The monkey works to get a piece of banana, but when the monkey gets the piece of banana, he doesn’t eat it; he just sticks it in his mouth and holds it in his cheek.  When the monkey makes a mistake and doesn’t get a piece of banana, he eats some of the banana he was holding in his cheek.  So the monkey “rewards” himself for making a mistake.  Teuber called it a “consolation prize.”

So the (implicit) promise of “a piece of banana if you do this correctly” is the incentive (I actually would have said motivation here—LeDoux can’t because he uses motivation to mean something else).  What’s the banana then?  A reward?  Maybe, but in the context of the situation, the banana is the confirmation that the incentive was correctly understood, and that, in itself, is (arguably) rewarding.

It should be rewarding, in any case, by the following argument.  It is clearly adaptive for an organism to be able to reliably predict the way the future will unfold, particularly with respect to possible events that have (can have, may have) some kind of significance to the organism.  It is even more important for an organism to be able to reliably predict the effects of a possible action:

“I’ll bet that if I figure out what to do here, I’ll get a piece of banana.  Hmmm.  This looks right.  I’ll do it.  Banana!  Yes!  I was right!”

Or

“I’ll bet that if I figure out what to do here, I’ll get a piece of banana.  Hmmm.  This looks right.  I’ll do it.  No banana?  Bummer!  I didn’t get it right.  I’m gonna eat a piece of banana.”

So, back to the question: what is the banana?  In evolutionary terms, at one level, the banana is nourishment and valuable as such; in this context, however, the banana is real-world confirmation of correct understanding of (at least one aspect of) the real world.

But notice the subtlety here.  Setting aside our knowledge that correlation is not causality (which we seem to do a lot), the banana confirms the existence of a pattern: In the context of this recognizable situation it is to be expected that a problem will be presented and if I correctly figure out what the situation requires and do it, I will get some banana and if I don’t figure out what the situation requires, I won’t get any banana.

If no banana is forthcoming, what is the correct conclusion in this situation?  There are several: 1) I got it wrong (everything else is unchanged); 2) I got it right, but there are no more bananas at the moment (everything else is unchanged); 3) The pattern is incorrect: there are no bananas to be had here.  This is clearly not an exhaustive list of all the alternatives, but it does indicate that the conclusion to be drawn in the situation is by no means obvious.  This is borne out by the well-known fact that behavior patterns established by a random reinforcement schedule are more resistant to extinction than patterns established by a 100 percent reliable reinforcement schedule.

Again let’s look from an evolutionary standpoint: Which is more important?  Obtaining a piece of banana or knowing how to obtain a piece of banana?  If I give a man a fish, I have fed him for a day; if I teach a man to fish, I have fed him for life.

An important question for an organism is: Where is food?  The obvious next question is: How do I get there?  Once these questions are answered, the next question is: Once I get there, how do I get it?  I have a feeling that in the brain these questions, or rather the answers to these questions, are intimately related.  Ultimately, an organism needs a procedural answer: What steps need to be taken in order to arrive at the desired goal?  The organism needs a sequential plan.  It makes me wonder if the parietal lobe, in addition to its involvement with the representation of physical space, is also involved with the representation of conceptual space.

Maybe not.  Physical space has obvious nearness relationships that conceptual space does not necessarily have.  On the other hand, George Lakoff’s arguments about the way in which meanings are derived from physical relationships may suggest that parietal lobe involvement (or, more precisely, involvement of whatever part of the brain is responsible for keeping track of the physical organization of the universe with respect to the organism) in the organization of concepts is in fact plausible.

Correlation is not causality, but from an evolutionary standpoint an organism cannot in general afford to do the necessary research to establish reliable causality.  Interestingly, human beings have acquired the ability to reason systematically and have managed in some cases to determine causality.  What is more significant, and many have remarked upon this, is that humans can transmit patterns verbally to other humans.  Not only that, patterns thus transmitted can be used by the receiver almost as if they had been directly perceived or intuited or whatever the appropriate word is to describe the way we acquire patterns.  I say “almost” because I think there must be some difference between patterns established by word-of-mouth and patterns established by other means.

I don’t think, however, that the difference is as simple as the difference between declarative and nondeclarative memory.  And, by the way, I am not really happy with the use of the word declarative.  I guess part of the reason for that is that I think some, if not much, of what enters “declaratively” ends up stored “nondeclaratively”.  Which, I suppose, is simply to say that we don’t always consciously examine the implications of things that we hear, but those implications may end up being stored.  Perhaps this is just a matter of “stimulus generalization”, but whatever it is, it feels like a hard and fast distinction between declarative and nondeclarative memory is ultimately misguided.  And, in fact, studies of “priming” in individuals whose declarative memory system is damaged in some way seem to me to imply that nondeclarative priming (whatever that means) also occurs in those whose declarative memory system is intact.

I suppose the argument is simply that there are two kinds of memory, but things start to feel a little too glib when people start to discuss the pathways by which information enters one memory system or the other as if in the intact organism there is no (and can be no) “crosstalk” between the two.  Maybe it’s just that in the course of reviewing the literature of the past thirty years I have concluded that where there are dichotomies it is important, even essential, not to accept them too literally, for fear of overlooking clues to the functioning of the system.

030717

Thursday, July 17th, 2003

I think I am getting tired of the gee-whiz attitude of linguists who are forever marveling at “the astronomical variety of sentences any natural language user can produce and understand.”  Hauser, Chomsky, and Fitch (2002).  I can’t recall anyone marveling at the astronomical variety of visual images a human observer can understand, or the astronomical variety of visual images a human artist can produce.  I am also tired of the gee-whiz attitude linguists take with respect to the fact that there can be no “longest” sentence.  With essentially the same argument, I can assert that there can be no “largest” painting.  So what?

Another gee-whiz topic for linguists is the fact that, “A child is exposed to only a small proportion of the possible sentences in its language, thus limiting its database for constructing a more general version of that language in its own mind/brain.”  Hauser, Chomsky, and Fitch (2002).  It is also the case that a child is exposed to only a small proportion of the possible visual experiences in the universe, thus limiting its database for constructing a more general version of visual experience in its own mind/brain.  If one is to marvel at “the open ended generative property of human language,” one must marvel at the open ended generative property of human endeavor in art and music as well.  And if we do that, must we also marvel at the open ended generative property of bower bird endeavor in bower building and whale endeavor in whale song composition?

Hauser, Chomsky, and Fitch (2002) refer to “the interface systems — sensory-motor and conceptual-intentional”.  Note that there is a nice parallelism between sensory and conceptual and between motor and intentional.  I like it.

Hauser, Chomsky, and Fitch (2002) observe that it is possible that “recursion in animals represents a modular system designed for a particular function (e.g., navigation) and impenetrable with respect to other systems.  During evolution, the modular and highly domain specific system of recursion may have become penetrable and domain general.  This opened the way for humans, perhaps uniquely, to apply the power of recursion to other problems.”

Here, again, is a suggestion that to me points at a new kind of model found only in humans: a model of the self?  Perhaps in some sense a model of models, but otherwise behaving like models in other animals.

A cat may be conscious, but does it, can it, know that it is conscious?

030715

Tuesday, July 15th, 2003

Hauser, Chomsky, and Fitch in their Science review article (2002) indicate that “comparative studies of chimpanzees and human infants suggest that only the latter read intentionality into action, and thus extract unobserved rational intent.”  This goes along with my own conviction that internal models are significant in the phenomena of human consciousness and self-awareness.

Hauser, Chomsky, and Fitch argue that “the computational mechanism of recursion”, which is critical to language ability, “is recently evolved and unique to our species.”  I am well aware that many have died attempting to oppose Chomsky and his insistence that practical limitations have no place in the description of language capabilities.  I am reminded of Dennett’s discussion of the question of whether zebra is a precise term, that is, whether there exists anything that can be correctly called a zebra.  It seems fairly clear that Chomsky assumes that language exists in the abstract (much the way we naively assume that zebras exist in the abstract) and then proceeds to draw conclusions based on that assumption.  The alternative is that language, like zebras, is in the mind of the beholder, but that when language is placed under the microscope it becomes fuzzy at the boundaries precisely because it is implemented in the human brain and not in a comprehensive design document.

Uncritical acceptance of the idea that our abstract understanding of the computational mechanism of recursion is anything other than a convenient crutch for understanding the way language is implemented in human beings is misguided.  In this I vote with David Marr (1982) who believed that neither computational iteration nor computational recursion is implemented in the nervous system.

On the other hand, it is interesting that a facility which is at least a first approximation to the computational mechanism of recursion exists in human beings.  Perhaps the value of the mechanism from an evolutionary standpoint is that it does make possible the extraction of intentionality from the observed behavior of others.  I think I want to turn that around.  It seems reasonable to believe that the ability to extract intentionality from observed behavior would confer an evolutionary advantage.  In order to do that, it is necessary to have or create an internal model of the other in order to get access to the surmised state of the other.

Once such a model is available it can be used online to surmise intentionality and it can be used off line for introspection, that is, it can be used as a model of the self.  Building from Grush’s idea that mental imagery is the result of running a model in off line mode, we may ask what kind of imagery would result from running a model of a human being off line.  Does it create an image of a self?

Alternatively, since all of the other models proposed by Grush are models of some aspect of the organism itself, it might be more reasonable to suppose that a model of the complete self could arise as a relatively simple generalization of the mechanism used in pre-existing models of aspects of the organism.

If one has a built-in model of one’s self in the same way one has a built-in model of the musculoskeletal system, then language learning may become less of a problem.  Here’s how it would work.  At birth, the built-in model is rudimentary and needs to be fine-tuned to bring it into closer correspondence with the system it models.  An infant is only capable of modeling the behavior of another infant.  Adults attempting to teach language skills to infants use their internal model to surmise what the infant is attending to and then name it for the child.  To the extent that the adult has correctly modeled the infant and the infant has correctly modeled the adult (who has tried to make it easy to be modeled), the problem of establishing what it is that a word refers to becomes less problematical.

030708 – Computer consciousness

Tuesday, July 8th, 2003

I begin to understand the temptation to write papers that take the form of diatribes against another academic’s position.  I just found the abstract of a paper written by someone named Maurizio Tirassa in 1994.  In the abstract he states, “I take it for granted that computational systems cannot be conscious.”

Oh dear.  I just read a 1995 response to Tirassa’s paper by someone in the department of philosophy and the department of computer science at Rensselaer Polytechnic Institute who says we must remain agnostic toward dualism.  Note to myself: stay away from this kind of argument; it will just make me crazy.

For the record: I take it for granted that computational systems can be conscious.  I do not believe in dualism.  There is no Cartesian observer.

I do like what Rick Grush has to say in his 2002 article “An introduction to the main principles of emulation: motor control, imagery, and perception”.  He posits the existence of internal models that can be disconnected from effectors and used as predictors.

Grush distinguishes between simulation and emulation.  He states that, “The difference is that emulation theory claims that mere operation of the motor centers is not enough, that to produce imagery they must be driving an emulator of the body (the musculoskeletal system and relevant sensors).”  He contrasts what he calls a “motor plan” with “motor imagery”.  “Motor imagery is a sequence of faux proprioception.  The only way to get … [motor imagery] is to run the motor plans through something that maps motor plans to proprioception and the two candidates here are a) the body (which yields real proprioception), and b) a body emulator (yielding faux proprioception).”
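
Grush’s distinction is easy to caricature in code: the same forward model that maps motor commands to predicted proprioception can be driven with the body attached (online, real proprioception comes back) or with the body disconnected (off-line, the emulator alone yields the faux proprioception he calls motor imagery).  The little classes below, and their one-line “dynamics”, are a sketch of the idea and in no way Grush’s actual model.

```python
# Sketch of the emulator idea: one internal model, two uses (online and off-line).
# The "dynamics" (position += command) are deliberately trivial.

class Body:
    """Stands in for the musculoskeletal system plus its sensors."""
    def __init__(self, position=0.0):
        self.position = position
    def step(self, command):
        self.position += command        # real movement
        return self.position            # real proprioception

class Emulator:
    """Internal model that mirrors the body: commands in, faux proprioception out."""
    def __init__(self, position=0.0):
        self.position = position
    def predict(self, command):
        self.position += command        # simulated movement
        return self.position            # faux proprioception

def run_online(motor_plan, body):
    """Motor plan drives the actual body."""
    return [body.step(cmd) for cmd in motor_plan]

def run_offline(motor_plan, emulator):
    """Body disconnected; running the plan through the emulator yields motor imagery."""
    return [emulator.predict(cmd) for cmd in motor_plan]

plan = [0.5, 0.5, -0.2]                          # a short motor plan
print(run_online(plan, Body()))                  # [0.5, 1.0, 0.8] real proprioception
print(run_offline(plan, Emulator()))             # [0.5, 1.0, 0.8] motor imagery
```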

What’s nice about this kind of approach is that its construction is evolutionarily plausible.  That is, the internal model is used both for the production of actual behavior and for the production of predictions of behavior.  Evolution seems to like to repurpose systems so long as the systems are reasonably modular.

Grush distinguishes between what he calls “modal” and “amodal” models.  “Modal” models are specific to a sensory modality (e.g., vision, audition, proprioception) and “amodal” models (although he writes as if there were only one) model the organism in the universe.  I do not much care for the terminology because I think it assumes facts not in evidence, to wit: that the principal distinguishing characteristic is the presence or absence of specificity to a sensory modality.  I also think it misleads in that it presumes (linguistically at least) to be an exhaustive categorization of model types.

That said, the most interesting thing in Grush for me is the observation that the same internal model can be used both to guide actual behavior and to provide imagery for “off-line” planning of behavior.  I had been thinking about the “on-line” and “off-line” uses of the language generation system.  When the system is “on-line”, physical speech is produced.  When the system is “off-line”, its outputs can be used to “talk to oneself” or to write.  Either way, it’s the same system.  It doesn’t make any sense for there to be more than one.

When a predator is crouched, waiting to spring as soon as the prey it has spotted comes into range, arguably it has determined how close the prey has to come for a pounce to be effective.  The action plan is primed; it’s a question of waiting for the triggering conditions (cognitively established by some internal mental model) to be satisfied.

It is at least plausible to suggest that if evolution developed modeling and used it to advantage in some circumstances, modeling will be used in other circumstances where it turns out to be beneficial.  I suppose this is a variant of Grush’s Kalman filter argument, which says that Kalman filters turn out to be a good solution to a problem that organisms have, and it would not be surprising to discover that evolution has hit upon a variant of Kalman filters to assist in dealing with that problem.
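
For anyone who has not met one, a Kalman filter in its simplest scalar form just blends a prediction from an internal model with a noisy measurement, weighting each by its reliability.  This toy version (standard gain formula, made-up numbers) is only meant to show why such a mechanism would be handy for, say, tracking prey.

```python
# Minimal scalar Kalman filter: blend the model's prediction with a noisy measurement.

def kalman_step(x_est, p_est, measurement, process_var, meas_var):
    """One predict/update cycle for a scalar state assumed to drift slowly.

    x_est, p_est : current estimate and its variance
    measurement  : new noisy observation
    process_var  : how much the true state may drift per step
    meas_var     : how noisy the sensor is
    """
    p_pred = p_est + process_var                  # predict: uncertainty grows
    gain = p_pred / (p_pred + meas_var)           # weight measurement by reliability
    x_new = x_est + gain * (measurement - x_est)  # update estimate
    p_new = (1 - gain) * p_pred                   # update uncertainty
    return x_new, p_new

x, p = 0.0, 1.0                                   # initial guess and its variance
for z in [1.2, 0.9, 1.1, 1.0]:                    # noisy sightings of prey near position 1.0
    x, p = kalman_step(x, p, z, process_var=0.01, meas_var=0.25)
print(round(x, 2), round(p, 3))                   # estimate homes in on 1.0; variance shrinks
```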

It’s clear (I hope, and if not, I’ll make an argument as to why) that a mobile organism gains something by having some kind of model (however rudimentary) of its external environment.  In “higher” organisms, that model extends beyond the range of that which is immediately accessible to its senses.  It’s handy to have a rough idea of what is behind one without having to look around to find out.  It’s also handy to know where one lives when one goes for a walk out of sight of one’s home.

Okay, so we need an organism-centric model of the universe, that is, one that references things outside the organism to the organism itself.  But more interestingly, does this model include a model of the organism itself?

Certain models cannot be inborn (or at least the details cannot be).  What starts to be fun is when the things modeled have a mind of their own (so to speak).  It’s not just useful to humans to be able to model animals and other humans (to varying degrees of specificity and with varying degrees of success).  It would seem to be useful to lots of animals to be able to model other animals, conspecifics included.

What is the intersection of “modeling” with “learning” and “meaning”?  How does “learning” (a sort of mental sum of experience) interact with ongoing sensations?  “Learning” takes place with respect to sensible (that is capable of being sensed) events involving the organism, including things going on inside the organism that are sensible.  Without letting the concept get out of hand, I have said in other contexts that humans are voracious pattern-extractors.  “Pattern” in this context means a model of how things work.  That is, once a pattern is “identified” (established, learned), it tends to assert its conclusions.

This is not quite correct.  I seem to be using “pattern” in several different ways.  Let’s take it apart.  The kicker in just about every analysis of “self” and “consciousness” is the internal state of the organism.  Any analysis that fails to take into account the internal state of the organism at the time a stimulus is presented is not, in general, going to do well in predicting the organism’s response.  At the same time, I am perfectly willing to assert that the organism’s response—any organism’s response—is uniquely determined by the stimulus (broadly construed) and the organism’s state (also broadly construed).  Uniquely determined.  Goodbye free will.  [For the time being, I am going to leave it to philosophers to ponder the implications of this fact.  I am sorry to say that I don’t have a lot of faith that many of them will get them right, but some will.  This is just one of many red herrings that make it difficult to think about “self” and “consciousness”.]

Anyway, when I think about the process, I think of waves of data washing over and into the sensorium (a wonderfully content-free word).  In the sensorium are lots of brain elements (I’m not restricting this to neurons because there are at least ten times as many glia listening in and adding or subtracting their two cents) that have been immersed in this stream of information since they became active.  They have “seen” a lot of things.  There have been spatio-temporal-modal patterns in the stream, and post hoc ergo propter hoc many of these patterns have been “grooved”.  So, when data in the stream exhibit characteristics approximating some portion of a “grooved” pattern, other brain elements in the groove are activated to some extent, the extent depending on all sorts of things, like the “depth” of the “groove”, the “extent” of the match, etc.
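
Taken literally, the “groove” picture is just arithmetic: a stored pattern’s activation grows with both how deeply it has been worn in and how well the current data match it, and every activation wears the groove a little deeper.  The match measure and the deepening rate below are invented for illustration; nothing more is claimed for them.

```python
# Sketch: activation = groove depth x goodness of match; activation deepens the groove.
# The overlap-based match and the 0.1 deepening rate are illustrative choices.

def match(input_features, pattern_features):
    """Fraction of the pattern's features present in the current input (0..1)."""
    if not pattern_features:
        return 0.0
    return len(input_features & pattern_features) / len(pattern_features)

def activate(groove_depth, input_features, pattern_features, deepen_rate=0.1):
    m = match(input_features, pattern_features)
    level = groove_depth * m                        # how strongly the pattern lights up
    new_depth = groove_depth + deepen_rate * level  # use deepens the groove
    return level, new_depth

depth = 1.0
pattern = {"kitchen", "morning_light", "smell_of_coffee"}
for seen in [{"kitchen", "morning_light"},
             {"kitchen", "morning_light", "smell_of_coffee"}]:
    level, depth = activate(depth, seen, pattern)
    print(round(level, 3), round(depth, 3))         # partial match, then a stronger full match
```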

In order to think about this more easily, remember that the sensorium does not work on just a single instantaneous set of data.  It takes some time for data to travel from neural element to neural element.  Data from “right now” enter the sensorium and begin their travel “right now”, hot on the heels of data from just before “right now”, and cool on the heels of data from a bit before “right now” and so on.  Who knows how long data that are already in the sensorium “right now” have been there.  [The question is, of course, rhetorical.  All the data that ever came into the sensorium are still there to the extent that they caused alterations in the characteristics of the neural elements there.  Presumably, they are not there in their original form, and more of some are there than of others.]  The point is that the sensorium “naturally” turns sequential data streams into simultaneous data snapshots.  In effect, the sensorium deals with pictures of history.
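
The “sequential streams become simultaneous snapshots” point is easy to mimic with a tapped delay line: if it takes one tick for data to pass from element to element, then at any instant the chain as a whole holds a staggered copy of the recent past.  The class below is only that bookkeeping; it says nothing about real neural conduction times.

```python
# Sketch: a chain of elements with one tick of delay between them.  Read all the
# taps at once and you get a simultaneous snapshot of a temporal sequence.

from collections import deque

class DelayLine:
    def __init__(self, length):
        self.taps = deque([0.0] * length, maxlen=length)

    def tick(self, new_sample):
        """Push the sample from 'right now'; older samples shift down the line."""
        self.taps.appendleft(new_sample)
        return list(self.taps)                    # newest first, oldest last

line = DelayLine(length=4)
for sample in [1.0, 2.0, 3.0, 4.0, 5.0]:
    snapshot = line.tick(sample)
print(snapshot)   # [5.0, 4.0, 3.0, 2.0] -- "right now" plus three earlier moments
```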

Now back to patterns.  A pattern may thus be static (as we commonly think of a pattern), and at the same time represent a temporal sequence.  In that sense, a pattern is a model of how things have happened in the past.  Now note that in this massively parallel sensorium, there is every reason to believe that at any instant many many patterns have been or are being activated to a greater or lesser extent and the superposition (I don’t know what else to call it) of these patterns gives rise to behavior in the following way.

Some patterns are effector patterns.  They are activated (“primed” is another term used here, meaning activated somewhat, but not enough to be “triggered”) by internal homeostatic requirements.  I’m not sure I am willing to state unequivocally that I believe all patterns have an effector component, but I’m at least willing to consider it.  Maybe not.  Maybe what I think is that data flows from sensors to effectors and the patterns I am referring to shape and redirect the data (which are ultimately brain element activity) into orders that are sent to effectors.

That’s existence.  That’s life.  I don’t know what in this process gives rise to a sense of self, but I think the description is fundamentally correct.  Maybe the next iteration through the process will provide some clues.  Or the next.  Or the next.

Hunger might act in the following way.  Brain elements determine biochemically and biorhythmically that it’s time to replenish the energy resources.  So data begin to flow associated with the need to replenish the energy resources.  That primes patterns associated with prior success replenishing the energy resources.  A little at first.  Maybe enough so if you see a meal you will eat it.  Not a lot can be hard-wired (built-in) in this process.  Maybe as a baby there’s a mechanism (a built-in pattern) that causes fretting in response to these data.  But basically, what are primed are patterns the organism has learned that ended up with food being consumed.  By adulthood, these patterns extend to patterns as complex as going to the store, buying food, preparing it, and finally consuming it.
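
The difference between “primed” and “triggered” can be drawn with nothing more than a threshold: the homeostatic signal raises a food-related pattern’s activation part way, and if sensory evidence then adds enough on top, the pattern crosses threshold and runs.  The weights, threshold, and labels below are invented for illustration.

```python
# Sketch: hunger primes food-related patterns; seeing a meal can then trigger one.
# Weights, threshold, and cutoffs are illustrative assumptions.

THRESHOLD = 1.0

def activation(hunger_drive, sensory_match):
    """Total activation of a learned 'get food' pattern (both inputs range 0..1)."""
    return 0.6 * hunger_drive + 0.8 * sensory_match

def status(level):
    if level >= THRESHOLD:
        return "triggered"                        # pattern runs: go eat
    if level > 0.3:
        return "primed"                           # partially activated, waiting
    return "quiescent"

print(status(activation(hunger_drive=0.0, sensory_match=0.0)))  # quiescent
print(status(activation(hunger_drive=0.8, sensory_match=0.0)))  # primed: hungry, no food in sight
print(status(activation(hunger_drive=0.8, sensory_match=0.9)))  # triggered: hungry and a meal appears
```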

This is not to say that the chain of determinism imposes rigid behaviors.  Indeed, what is triggered deterministically is a chain of opportunism.  Speaking of which, I have to go to the store to get fixings for dinner.  Bye.

030105 – Wants

Sunday, January 5th, 2003

One of the most central and most refractory problems of all theoretical models of human behavior is the problem of wants.  What is a want?  What makes this a difficult problem is that everybody knows what it means to want something.  But from a modeling standpoint, what does it mean?  Wanting is fundamental.  Can there even be behavior without wants?  I think not.  Can non-human animals be said to have wants?  I think so.

That being the case, what is different (if anything) about human wants?  Wants are in many cases related to biological needs, e.g., food, water, excretion of wastes.  Wants are also associated with biological imperatives that fall short of being needs (where a need must be met or the organism will perish).  The only biological imperative I can think of at the moment is sex, without which an organism will perish without offspring.

Given that there is no Cartesian observer or meaner in the brain, the question of wants becomes even more important.  Dennett (1991) talks about some kind of system to determine what to think about next.  Jumping off from his analysis, it seems like evolution has created an on-idle loop that thinks about things whenever there’s nothing urgent to deal with at the moment.  The evolutionary advantage this confers [I thought there was a word con-something that would work there, but I couldn’t think of it at first.  Eventually, I found it, and there it is.] is that idle-time thinking may result in elaborating strategies that make the organism fitter when urgent situations do occur.  That is, idle-time thinking is sort of like ongoing fire-drills, or contingency planning.  You never know when having thought about something or learned something will come in handy.

Still, wanting is problematical.

A lot of AI sidesteps the problem.  Programs that are designed to understand and paraphrase text want to understand and paraphrase text because that is what they are designed and programmed to do.  Such programs do not produce as output, “I’m tired of this work, let’s go out, have a few beers, and talk about life” (unless of course, that is a paraphrase of some corpus of input text).

So, maybe it makes sense to try to figure out what we want AI devices to want.  Self-preservation is good. (Oops, now we hit one of the problems Asimov’s Laws of Robotics address: we don’t want AI entities to preserve themselves at the expense of allowing humans to come to harm, although presumably we don’t mind if they inveigle themselves into our affections so we are unwilling / unlikely / not disposed to turn them off.)

At least self-preservation is good in a Mars rover.  It may not be good in a military robot, although military robots presumably will continue to be expensive, so we don’t want them to risk their existence casually.

Is fear what happens when the Danger-let’s-get-the-hell-out-of-here subsystem is screaming at the top of its lungs and we are not getting the hell out of there?

In our universe, for an organism to exist, it must be the offspring of a previous organism.  This trivial fact is called Evolution and much is made of it.  Although it is incorrect to attribute volition to Evolution, it does not do violence to reality to assert that Evolution is the name we give to the continued existence of things that have been able to reproduce.  Moreover, observation teaches that the more complex such things are, the more complex are the processes through which those things reproduce.

It does not make much sense to say that a bacterium or a virus wants to reproduce, although it does reproduce when conditions are favorable.  For that matter, it doesn’t make much sense to say that a bacterium or a virus wants to do anything.  I guess that means we think of wanting as something that we are aware of: something that rises to the level of consciousness—an attribute we do not apply to bacteria or viruses.  So here we are with propositional attitudes, which linguistically seem to come in at least two flavors: indicative and subjunctive.