Archive for the ‘motivation’ Category

What might ‘wanting’ be?

Friday, March 28th, 2008

I have long wondered what ‘wanting’ is from a physiological standpoint.  Antonio Damasio (1999, The Feeling of What Happens: Body and Emotion in the Making of Consciousness) has given me an idea that, I think, accounts for the human experience of wanting.  Homeostasis.  The argument goes like this.  In unicellular organisms, homeostasis doesn’t have a lot of ways to operate.  When an organism becomes mobile, homeostatic processes can trigger behaviors that, with better than chance probability (from an evolutionary standpoint), result in internal state changes that serve to maintain homeostasis.  In effect, evolution favors behaviors that can be triggered to achieve homeostatic goals.

In complex organisms, there are homeostatic mechanisms that work on the internal environment directly, but there are some internal environment changes for which it is not possible to compensate adequately by modifying the internal environment directly.  Thence, hunger.  Hunger is how we experience the process that is initiated when homeostatic mechanisms detect an insufficiency of fuel.  (Actually, it’s probably more sophisticated than that—more like detection of a condition in which the reserve of fuel drops below a particular threshold—and maybe there are multiple thresholds, but the broad outline is clear.) 

All organisms have phylogenetically established (built-in) processes for incorporating food.  In mammals, there is a rooting reflex and a suckling reflex.  Chewing (which starts out as gumming, but who’s worrying?) and swallowing are built-ins as well.  But those only help when food is presented.  Problem: how to get food to be presented?  Well, if food is presented before hunger sets in, it’s not a homeostatic problem.  If not, homeostatic mechanisms switch the organism into “need-fuel mode”.  In “need-fuel” mode, organisms do things that tend to increase the likelihood that fuel will become available.  Babies fuss, and even cry, sometimes lots and loudly.
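
The threshold mechanism is simple enough to sketch in code.  The toy homeostat below is purely illustrative (the class name, behaviors, and numbers are invented for this sketch, not taken from Damasio): nothing special happens while reserves stay above a threshold, and “need-fuel mode” is triggered below it.  Multiple thresholds would just be further branches.

    class Homeostat:
        """A toy organism: carry on normally while fuel is above a
        threshold; switch into "need-fuel mode" when it drops below."""

        def __init__(self, fuel=1.0, threshold=0.3):
            self.fuel = fuel            # current reserve; 1.0 = sated
            self.threshold = threshold  # level at which hunger is triggered

        def tick(self, burn_rate=0.05):
            # metabolism consumes some fuel, then the threshold is checked
            self.fuel = max(0.0, self.fuel - burn_rate)
            if self.fuel < self.threshold:
                # behaviors that raise the odds that food will be presented
                return "need-fuel mode: fuss, cry, root"
            return "equilibrium: nothing urgent"

        def feed(self, amount):
            self.fuel = min(1.0, self.fuel + amount)

    baby = Homeostat()
    for _ in range(16):
        print(baby.tick())   # the last few ticks cross into need-fuel mode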

Pain is another place where internal homeostatic processes intersect with the external universe.  Pain is how we experience the process that is initiated when homeostatic sensors detect deviations from internal stability that arise from a physical process (heat, cold, puncture, etc.).  Again, evolution has sophisticated the process somewhat.  The pain process arises when a threshold condition is passed.  Pain does not wait for serious damage to take place; pain is triggered when it’s time to take action to prevent serious damage.

Pain actually has to be a bit subtle, too.  Some pain may and should be ignored.  If fight is an alternative to flight, then fight arguably ups the threshold for debilitating pain. 

There are other obvious situations in which homeostatic considerations require some action with respect to the outside world.  Urination and defecation are two.  Similarly, vomiting (with its warning homeostatic signal, nausea). 

Our wanting, then, has its origin as the experience of a process that responds to some (serious or prospectively serious) homeostatic imbalance. 

As an aside, I want to propose that one of the characteristics that distinguishes reptiles from mammals is that when a reptile is in reasonable homeostatic equilibrium, it does nothing.  When a mammal is in the same state, it does something—explores its environment, plays, writes poetry, etc.  In the most general terms, it sets out to learn something.  This characteristic arguably confers at least a marginal advantage to animals that possess it, viz. it is possible that something learned in the absence (at the time) of any pressing need will turn out to be valuable in dealing with future situations in which there will be no opportunity to learn it.  So, the concept of homeostasis has to be broadly construed. 

My central point, however, is that ultimately our wants, wishes, desires, dislikes, disgusts, and delights all refer to internal homeostatic processes.  The fact that there are so many distinguishable variants of wanting suggests to me that the many shades of our experience reflect the many kinds of homeostatic processes that have been phylogenetically established in our brains and bodies, each presumably for the most part having proved advantageous over evolutionary time.

030907 – Rationality and communication

Sunday, September 7th, 2003

030907 – Rationality and communication

Following up on Watzlawick, et al., Pragmatics of Human Communication, I find that in later discussions by the “communications” community, there is an unspoken assumption that communication has rational motivation.  For example, quoted from Dirk Schouten, “Real Communication with Audiovisual Means”:

<http://utopia.knoware.nl/users/schoutdi/eng/thcomac.htm>

Habermas divides speech acts (what someone says) into two principal categories.  There are Strategic actions (speech acts which make people do things) and Communicative Actions (speech acts which are designed to arrive at a common understanding of a situation).

…

Speech acts, according to Habermas contain a propositional and a performative part (like Watzlawick and Austin, he believes that when we say something we also do something.) The propositional part indicates a state of affairs in reality. For example: “The average income of farmers in South America is just 87 dollars per annum”. The performative part implies or indicates how the propositional part needs to be understood (in this case “The speaker thinks this is disgraceful”). In that way one can categorize or question something. An audience can respond: “I think that is disgraceful, too.” Or: “Why do you think it disgraceful?” Or: “I see what you mean, but…”

In fact a speaker, by saying something, not only says something that is true to her, but also says: “I claim the communicative right towards you to have an opinion and to say it to you in this defined situation”. The performative part defines the boundaries of the communicative action. It marks out the (communicative) context of the (propositional) content. It makes clear which relation the speaker wants to make to their audience. As long as the participants are aimed at reaching mutual agreement, a communicative situation is shaped, because the speaker makes three “validity claims” with their speech act:

1. They claim that they are speaking the truth in the propositional part of the speech act;
2. They claim normative legitimacy concerning the communicative act in a smaller sense (the performative part); and
3. They claim truthfulness/authenticity concerning the intentions and emotions they express.

These validity claims the speaker makes can, in principle, be criticized, although in practice this possibility is often blocked. In communicative action the hearers can (if they wish) demand reasons from the speakers to justify their validity claims.

The problem with this analysis is that the process of originating a communication is one of shaping and selection of behaviors based on internal models and internal states.  The “intention” of behavior is to add a pattern that will move the shape of the current pattern towards a projected pattern which is created by feeding the current pattern and the “intended” behavior into the optimal projection pattern.  Huh?

Let’s try this again.  There is a current pattern of activation.  It is a combination of:

–    existing patterns;
–    patterns created by external receptors, enteroceptors, and proprioceptors;
–    patterns created for and by effectors (motor patterns, behaviors); and
–    modulating influences (generally neurochemicals).

030718 – Self-Reporting

Friday, July 18th, 2003

030718 – Self-Reporting

Is there any advantage to an organism to be able to report its own internal state to another organism?  For that is one of the things that human beings are able to do.  Is there any advantage to an organism to be able to use language internally without actually producing an utterance?

Winograd’s SHRDLU program had the ability to answer questions about what it was doing.  Many expert system programs have the ability to answer questions about the way they reached their conclusions.  In both cases, the ability to answer questions is implemented separately from the part of the program that “does the work” so to speak.  However, in order to be able to answer questions about its own behavior, the question answering portion of the program must have access to the information required to answer the questions.  That is, the expertise required to perform the task is different from the expertise required to answer questions about the performance of the task.

In order to answer questions about a process that has been completed, there must be a record of, or a way to reconstruct, the steps in the process.  Actually, it is not sufficient simply to be able to reconstruct the steps in the process.  At the very least, there must be some record that enables the organism to identify the process to be reconstructed.

Not all questions posed to SHRDLU require memory.  For example, one can ask SHRDLU, “What is on the red block?”  To answer a question like this, SHRDLU need only observe the current state of its universe and report the requested information.  However, to answer a question like, “Why did you remove the pyramid from the red block?”  SHRDLU must examine the record of its recent actions and the “motivations” for its recent actions to come up with an answer such as, “In order to make room for the blue cylinder.”

Not all questions that require memory require information about motivation as, for example, “When was the blue cylinder placed on the red cube?”
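
A minimal sketch may make the separation concrete.  Nothing below is SHRDLU’s actual implementation; the log structure and method names are invented for illustration.  The point is only that the task-performing code writes the record while the question-answering code reads it: two different competences sharing one data structure.

    import datetime

    class ActionLog:
        def __init__(self):
            self.entries = []  # (timestamp, action, motivation)

        def record(self, action, motivation):
            # called by the code that actually does the work
            self.entries.append((datetime.datetime.now(), action, motivation))

        def why(self, action):
            # "Why did you X?" needs the recorded motivation
            for _, past_action, motivation in reversed(self.entries):
                if past_action == action:
                    return motivation
            return "I have no record of doing that."

        def when(self, action):
            # "When did X happen?" needs memory but not motivation
            for timestamp, past_action, _ in self.entries:
                if past_action == action:
                    return timestamp
            return None

    log = ActionLog()
    log.record("remove pyramid from red block",
               "to make room for the blue cylinder")
    log.record("place blue cylinder on red cube", "requested by user")
    print(log.why("remove pyramid from red block"))
    print(log.when("place blue cylinder on red cube"))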

Is SHRDLU self-aware?  I don’t think anyone would say so.  Is an expert system that can answer questions about its reasoning self-aware?  I don’t think anyone would say so.  Still, the fact remains that it is possible to perform a task without being able to answer questions about the way the task was performed.  Answering questions is an entirely different task.

030604 – Wants (more)

Wednesday, June 4th, 2003

030604 – Wants (more)

Could it be that the fundamental nature of wanting is IRMs (innate releasing mechanisms) and FAPs (fixed action patterns)?  Certainly IRMs and FAPs have a long and honorable evolutionary history.  There is certainly reason to say that lower animals are a soup of IRMs and FAPs.  Why not higher animals, too?  If I don’t know what I want until I see what I do, is that just a way of saying that I don’t have direct access to my IRMs?  Or is that just silly?

And what does it make sense for evolution to select as generic wants to be activated when there’s nothing pressing?  How about something like

–    Learn something new
–    Acquire a new skill (What’s a skill?  A complex perceptual motor pattern?)
–    Practice an acquired skill
–    Think about something interesting (What’s interesting?)
–    Stimulate yourself
–    Play with the external world (What’s play?)

You can’t have a theory of consciousness without including:

–    Wanting (approach)
–    Absence of wanting / indifference
–    Negatively directed wanting / wanting not (avoidance)
–    Learning
–    Skill acquisition (Perceptual / Motor Learning)
–    Imitation (human see, human do)
–    Pleasure / Satisfaction
–    Pain / Frustration
–    Salience / Interest
–    Metaphor

[Is this my own rediscovery of what Jerry Fodor (and presumably many others) call propositional attitudes?  Some of the items are, but others are not.]

If you stick out your tongue at a baby, from a very early age, the baby will imitate the action.  But the baby can’t see its tongue, so how does it know what to do?  It’s a visual stimulus, but the mirroring is not visual.  Now, it’s possible that a baby can see its tongue, if it sticks it out far enough, but unless the baby has spent time in front of a mirror, there’s no reason to believe the baby has ever seen its own face head-on (as it were).

Children want to do what they see their older siblings doing.  It seems to be innate.  It would seem to be rather peculiar to argue that children learn to want to imitate.  But how does a child (or anybody, for that matter) decide what it wants to imitate now?  There’s “What do I do now?”  “Imitate.”  And “What do I want to imitate?”

A “high performance skill” (Schneider 1985): more than 100 hours of specialist training required; substantial numbers of trainees fail to acquire proficiency; performance of adepts is qualitatively different (whatever that means) from that of non-adepts.  There are lots of examples of high performance skills.  People spend lots of time practicing sports, learning to work machinery, etc.  Why?  Improving a skill (developing a skill and further developing it) is satisfying.  Does general knowledge count as a skill?  Can we lump book learning with horsemanship?

What about Henry Molaison, whose perceptual motor skills improved even though he did not consciously recognize the testing apparatus?  Not really a problem.  There’s a sense in which the development of perceptual motor skills is precisely intended to create motor programs that don’t require problem solving on-the-fly.  Ha!  We can create our own FAPs!  [This is like blindsight.  Things that do not present themselves to the conscious-reporting system (e.g., Oh, yeah, I know how to do this pursuit rotor thing.) are available to be triggered as a consequence of consciously reportable intentions and states of mind (e.g., I’m doing this pursuit rotor thing.).  So part of what we learn to do consciously is learned and stored in non-reportable form (cf. Larry Squire’s papers on the topic).  But in the case of blindsight, some trace of detectability is present.]

But if we can create our own FAPs, we must also create our own IRMs.  That means we have to create structures (patterns) that stretch from perceptions to behaviors.  Presumably, they are all specializations.  We create shortcuts.  If shortcuts are faster (literally) then they will happen first.  In other words, the better you get at dealing with a particular pattern, the more likely that pattern will be able to get to the effectors (or to the next stage of processing) first.   Is that what lateral inhibition does?  It gives the shortcut enough precedence to keep interference from messing things up.  In other words, lateral inhibition helps resolve race conditions.  [“Race conditions” reminds me that synchronous firing in the nervous system proceeds faster than anything else.]
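
The race-resolving role proposed here for lateral inhibition is easy to caricature in code.  In the sketch below (the constants are arbitrary, chosen only to make the dynamics visible), each candidate pattern excites itself and inhibits its rivals; a small head start (the well-practiced shortcut) grows into outright dominance, so the system settles on one winner instead of thrashing.

    import numpy as np

    def winner_take_all(activations, excite=1.0, inhibit=1.0, dt=0.1, steps=60):
        # each unit excites itself and is suppressed by the sum of its rivals
        a = np.array(activations, dtype=float)
        for _ in range(steps):
            a = a + dt * (excite * a - inhibit * (a.sum() - a))
            a = np.clip(a, 0.0, 1.0)
        return a

    # three competing IRM/FAP candidates; the shortcut starts slightly ahead
    print(winner_take_all([0.55, 0.50, 0.45]))  # -> roughly [1. 0. 0.]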

Consciousness (whatever that means, still) is a tool for learning or for dealing with competing IRM/FAPs.  What do I mean “dealing with”?  Selecting among them, strengthening them or weakening them, refining them.  (There.  I got revising, which was close but not quite correct.  I typed it, and then I got refining, which was le mot juste (and it varies only in two consonants: /f/ for /v/, which is only unvoiced for voiced, and /s/ for /n/, which have no connection as far as I can tell).)  [Find research on tip-of-the-tongue (TOT) phenomena.]

TOT: “partial activation” model v. “interference” model.  It seems to me that these are the same thing in my model of shortcuts and races.

The problem of observational learning: assuming that human infants are primed to learn from observation (or is it that they are primed to imitate actions they perceive, particularly humanish actions?).  Suppose moreover that humans have a way of segmenting perceptions and associating the segments.  Be real careful here: Marr suggests that visual inputs get taken apart and pieces processed hither, thither, and yon.  They never need to get put together because there’s no Cartesian observer.  So associations between percepts and imitative action patterns are spread out (multi-dimensional, if you will) without the need to segment the patterns any more than they are naturally.

As Oliphant (1998? submitted to Cognitive Behavior, p.15) says, “Perhaps it is an inability to constrain the possible space of meanings that prevents animals from using learned systems of communication, even systems that are no more complicated than existing innate signaling systems.”

Oliphant also says (1998? submitted to Cognitive Behavior, p.15), “When children learn words, they seem to simplify the task of deciding what a word denotes through knowledge of the existence of taxonomic categories (Markman, 1989), awareness of pragmatic context (Tomasello, 1995), and reading the intent of the speaker (Bloom, 1997).”  [Are some or all of these consequences of the development of attractor basins?  Is part of the developmental / maturational process the refinement of the boundaries of attractor basins?  Surely.]

It begins to feel as if imitation is key.  Is the IRM human-see and the FAP human-do?  Refinement is also the name of the game: patterns (input and output) can be refined with shortcuts.  There are innate groundings.  The innate groundings are most likely body-centric, but then again, imitation has an external stimulus: the behavior to imitate.

I’ve been finding lots of AI articles about cognitive models that use neural networks.  Granting that they are by nature schematic oversimplifications, there is one thing that seems to characterize all of them, and it’s something that has bothered me about neural networks all along: they assume grandmother-detectors.  That is, they have a set of input nodes that fire if and only if a particular stimulus occurs.  The outputs are similarly specific: each output node fires to signal a specific response.  Of course, this is pretty much a description of the IRM / FAP paradigm and, following Oliphant (1998?), the interesting problems seem to be happening in the system before and after this kind of model.
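
For concreteness, here is the kind of model being criticized, reduced to a caricature (the stimuli, responses, and wiring are invented for this sketch): one input node per stimulus, one output node per response, and a weight matrix that releases a specific response for a specific stimulus.  It really is the IRM / FAP paradigm rendered as a lookup table.

    import numpy as np

    stimuli = ["looming shadow", "food odor", "infant cry"]
    responses = ["freeze", "approach", "attend"]

    # grandmother-detector wiring: stimulus i releases response i
    W = np.eye(len(stimuli))

    def react(stimulus):
        x = np.zeros(len(stimuli))
        x[stimuli.index(stimulus)] = 1.0  # fires if and only if its stimulus occurs
        return responses[int(np.argmax(W @ x))]

    print(react("food odor"))  # -> approach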

There are two easy ways of initializing a neural network simulation: set all weights to zero or set the weights to random values.  But assuming that what goes on in the brain bears at least some resemblance to what goes on in a neural network simulation, it seems clear that evolution guarantees that neither of these initialization strategies is used ontogenetically.  Setting all connection strengths to zero gives you a vegetable, and setting connection strengths randomly gives you a mess.  Surely evolution has found a better starting point.  [Cf. research on ontogenetic self-organization.]
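
The zero-weight half of that claim is easy to demonstrate.  In the toy two-layer network below (the architecture and numbers are mine, purely for illustration), all-zero weights make every gradient zero, so nothing ever changes: a vegetable in at least that sense.  Random weights break the symmetry and the net learns, which is why artificial networks are initialized that way; whether random wiring gives a “mess” in a brain is a separate question this sketch doesn’t touch.

    import numpy as np

    def train(W1, W2, steps=200, lr=0.1):
        # toy 2-layer net: y = W2 . tanh(W1 x), squared-error gradient descent
        x, target = np.array([1.0, -1.0]), 1.0
        for _ in range(steps):
            h = np.tanh(W1 @ x)
            err = float(W2 @ h) - target
            grad_h = err * W2 * (1 - h ** 2)    # backprop through tanh
            W2 = W2 - lr * err * h
            W1 = W1 - lr * np.outer(grad_h, x)
        return float(W2 @ np.tanh(W1 @ x))      # final output (target is 1.0)

    rng = np.random.default_rng(0)
    print(train(np.zeros((3, 2)), np.zeros(3)))   # 0.0: every gradient is zero
    print(train(rng.normal(0.0, 0.5, (3, 2)),
                rng.normal(0.0, 0.5, 3)))         # close to 1.0: it learns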

One researcher’s baby is another researcher’s bathwater.  Hmmm.  Ain’t thinking grand?

Given that there aren’t grandmother detectors [although there are some experiments that claim Raquel Welch detectors, I think] and that there are not similarly specific effectors, we are back to Lashley’s problem of serial behavior.  What keeps the pandemonium from just thrashing?  I keep coming back to a substrate of plastic (i.e., tunable, mutable, modifiable, subsettable, short-cuttable) IRMs and FAPs.  Babies don’t get “doggie” all at once.  There seems to be a sort of bootstrap process involved.  Babies have to have enough built in to get the process started.  From that point on, it’s successive refinement.

I wrote “invisible figre” then stopped.  My intention had been to write “invisible fingers”.  I had been reading French.   I don’t know for [shure] sure how the ‘n’ got lost, but the “gre” would have been a Frenchified spelling and “figre” would not have had the nasalized consonant that would have (if pronounced in French) produced “fingres”.

All these little sensory and motor homunculi in the cortex—maybe what they are telling us is pretty much what Lakoff was saying, namely that our conception of the universe is body-centric.  Makes good sense.  That’s where the external universe impinges upon us and that’s where we impinge on the external universe.  I couldn’t think of a better reference system.

Chalmers (The Conscious Mind, 1996) believes that zombies are logically possible because he can imagine them.  He believes that a reductionist explanation of consciousness is impossible.  It is certainly true that it is a long jump from the physics of the atom to the dynamics of Earth’s atmosphere that give rise to meteorological phenomena, but we don’t for that reason argue that a reductionist explanation is impossible.  Yes, it’s a hard problem, but it requires poking one hell of a big hole in our understanding of physics to believe that a scientific explanation is impossible and therefore consciousness must be supernatural.  I don’t think I want to read his book now.  I feel it will be like reading a religious tract arguing that evolution is impossible.  As my Spanish literature professor Juan Marichal once observed, à propos of a book written by a Mexican author who had conceived a virulent hatred for Cortez (from a vantage point 400 years after the conquest of Mexico), it is possible to learn something even from works written by people who have peculiar axes to grind.  So maybe sometime I’ll revisit Chalmers, but not now.

Antonio Damasio (1999, The Feeling of What Happens: Body and Emotion in the Making of Consciousness).  The trouble with neural nets is often that they have no memory other than the connection weights acquired during training.  A new set of data erases or modifies the existing weights rather than taking into account what had been learned thus far.  Learning from experience means that there is some record of past experience to learn from.  Of course, that may just be the answer: memory systems serve to counterbalance the tendency to oscillate or to go with the latest fad.  If a new pattern has some association with what has gone before, then what has gone before will shape the way in which the new pattern is incorporated.  If there is a long-term record of an old pattern, it will still be available at some processing stage even if the new pattern becomes dominant at some other processing stage.  So, it may not be necessary to solve in a single stage of processing the problem of new data causing forgetfulness.
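
The overwrite problem, and the counterbalancing effect of a longer-term record, can be shown with something as dumb as linear regression.  In the sketch below, the data and dimensions are arbitrary, and “replay” is just retraining on a stored copy of the old patterns (a crude stand-in for a separate long-term memory stage): training on task B after task A largely erases A, while training on B together with A’s stored record preserves both.

    import numpy as np

    rng = np.random.default_rng(1)
    A = (rng.normal(size=(20, 10)), rng.normal(size=20))  # old patterns
    B = (rng.normal(size=(20, 10)), rng.normal(size=20))  # new patterns

    def fit(w, data, epochs=500, lr=0.02):
        X, y = data
        for _ in range(epochs):
            w = w - lr * X.T @ (X @ w - y) / len(y)  # mean-squared-error descent
        return w

    def loss(w, data):
        X, y = data
        return float(np.mean((X @ w - y) ** 2))

    w = fit(np.zeros(10), A)
    print("error on A after learning A:", round(loss(w, A), 3))

    w = fit(w, B)  # new data arrives; only the weights carry the past
    print("error on A after learning B:", round(loss(w, A), 3))  # A largely erased

    # with a stored record of A trained alongside B, A survives
    replay = (np.vstack([A[0], B[0]]), np.concatenate([A[1], B[1]]))
    w = fit(fit(np.zeros(10), A), replay)
    print("error on A with replay:", round(loss(w, A), 3))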

Learning has to be going on at multiple levels simultaneously.  Alternatively, there are nested (layered? as in cortical layers) structures that feed information forward, so some structures learn from direct inputs and subsequent structures learn from the outputs of the structures that get direct inputs and so on.

030105 – Wants

Sunday, January 5th, 2003

030105 Wants

One of the most central and most refractory problems of all theoretical models of human behavior is the problem of wants.  What is a want?  What makes this a difficult problem is that everybody knows what it means to want something.  But from a modeling standpoint, what does it mean?  Wanting is fundamental.  Can there even be behavior without wants?  I think not.  Can non-human animals be said to have wants?  I think so.

That being the case, what is different (if anything) about human wants?  Wants are in many cases related to biological needs, e.g., food, water, excretion of wastes.  Wants are also associated with biological imperatives that fall short of being needs (where a need must be met or the organism will perish).  The only biological imperative I can think of at the moment is sex, without which an organism will perish without offspring.

Given that there is no Cartesian observer or meaner in the brain, the question of wants becomes even more important.  Dennett (1991) talks about some kind of system to determine what to think about next.  Jumping off from his analysis, it seems like evolution has created an on-idle loop that thinks about things whenever there’s nothing urgent to deal with at the moment.  The evolutionary advantage this confers [I thought there was a word con-something that would work there, but I couldn’t think of it at first.  Eventually, I found it, and there it is.] is that idle-time thinking may result in elaborating strategies that make the organism fitter when urgent situations do occur.  That is, idle-time thinking is sort of like ongoing fire-drills, or contingency planning.  You never know when having thought about something or learned something will come in handy.

Still, wanting is problematical.

A lot of AI sidesteps the problem.  Programs that are designed to understand and paraphrase text want to understand and paraphrase text because that is what they are designed and programmed to do.  Such programs do not produce as output, “I’m tired of this work, let’s go out, have a few beers, and talk about life” (unless of course, that is a paraphrase of some corpus of input text).

So, maybe it makes sense to try to figure out what we want AI devices to want.  Self-preservation is good. (Oops, now we hit one of the problems Asimov’s Laws of Robotics address: we don’t want AI entities to preserve themselves at the expense of allowing humans to come to harm, although presumably we don’t mind if they inveigle themselves into our affections so we are unwilling / unlikely / not disposed to turn them off.)

At least self-preservation is good in a Mars rover.  It may not be good in a military robot, although military robots are, and presumably will continue to be, expensive, so we don’t want them to risk their existence casually.

Is fear what happens when the Danger-let’s-get-the-hell-out-of-here subsystem is screaming at the top of its lungs and we are not getting the hell out of there?

In our universe, for an organism to exist, it must be the offspring of a previous organism.  This trivial fact is called Evolution and much is made of it.  Although it is incorrect to attribute volition to Evolution, it does not do violence to reality to assert that Evolution is the name we give to the continued existence of things that have been able to reproduce.  Moreover, observation teaches that the more complex such things are, the more complex are the processes through which those things reproduce.

It does not make much sense to say that a bacterium or a virus wants to reproduce, although it does reproduce when conditions are favorable.  For that matter, it doesn’t make much sense to say that a bacterium or a virus wants to do anything.  I guess that means we think of wanting as something that we are aware of: something that rises to the level of consciousness—an attribute we do not apply to bacteria or viruses.  So here we are with propositional attitudes, which linguistically seem to come in at least two flavors: indicative and subjunctive.