Archive for the ‘homeostasis’ Category

What might ‘wanting’ be?

Friday, March 28th, 2008

I have long wondered what ‘wanting’ is from a physiological standpoint.  Antonio Damasio (1999, The Feeling of What Happens: Body and Emotion in the Making of Consciousness) has given me an idea that, I think, accounts for the human experience of wanting.  Homeostasis.  The argument goes like this.  In unicellular organisms, homeostasis doesn’t have a lot of ways to operate.  When an organism becomes mobile, homeostatic processes can trigger behaviors that with better than chance probability (from an evolutionary standpoint) result in internal state changes that serve to maintain homeostasis.  In effect, evolution favors behaviors that can be triggered to achieve homeostatic goals. 

In complex organisms, there are homeostatic mechanisms that work on the internal environment directly, but there are some internal environment changes for which it is not possible to compensate adequately by modifying the internal environment directly.  Thence, hunger.  Hunger is how we experience the process that is initiated when homeostatic mechanisms detect an insufficiency of fuel.  (Actually, it’s probably more sophisticated than that—more like detection of a condition in which the reserve of fuel drops below a particular threshold—and maybe there are multiple thresholds, but the broad outline is clear.) 

All organisms have phylogenetically established (built-in) processes for incorporating food.  In mammals, there is a rooting reflex and a suckle reflex.  Chewing (which starts out as gumming, but who’s worrying?) and swallowing are built-ins as well.  But those only help when food is presented.  Problem: how to get food to be presented?  Well, if food is presented before hunger sets in, it’s not a homeostatic problem.  If not, homeostatic mechanisms switch the organism into “need-fuel mode”.  In “need-fuel” mode, organisms do things that tend to increase the likelihood that fuel will become available.  Babies fuss, and even cry, sometimes lots and loudly.

Pain is another place where internal homeostatic processes intersect with the external universe.  Pain is how we experience the process that is initiated when homeostatic sensors detect deviations from internal stability that arise from a physical process (heat, cold, puncture, etc.).  Again, evolution has sophisticated the process somewhat.  The pain process arises when a threshold condition is passed.  Pain does not wait for serious damage to take place, pain is triggered when it’s time to take action to prevent serious damage.   

Pain actually has to be a bit subtle, too.  Some pain may and should be ignored.  If fight is an alternative to flight, then fight arguably ups the threshold for debilitating pain. 

There are other obvious situations in which homeostatic considerations require some action with respect to the outside world.  Urination and defecation are two.  Similarly, vomiting (with its warning homeostatic signal, nausea). 

Our wanting, then, has its origin as the experience of a process that responds to some (serious or prospectively serious) homeostatic imbalance. 

As an aside, I want to propose that one of the characteristics that distinguishes reptiles from mammals is that when a reptile is in reasonable homeostatic equilibrium, it does nothing.  When a mammal is in the same state, it does something—explores its environment, plays, writes poetry, etc.  In the most general terms, it sets out to learn something.  This characteristic arguably confers at least a marginal advantage to animals that possess it, viz. it is possible that something learned in the absence (at the time) of any pressing need will turn out to be valuable in dealing with future situations in which there will be no opportunity to learn it.  So, the concept of homeostasis has to be broadly construed. 

My central point, however, is that ultimately our wants, wishes, desires, dislikes, disgusts, and delights all refer to internal homeostatic processes.  The fact that there are so many distinguishable variants of wanting suggests to me that the many shades of our experience reflect the many kinds of homeostatic processes that have been phylogenetically established in our brains and bodies, each presumably for the most part having proved advantageous over evolutionary time.

030728 – The simplest incomplete grammar

Monday, July 28th, 2003

030728 – The simplest incomplete grammar

If grammars are inherently incomplete, what is the simplest incomplete grammar?  Actually, the question should be to give an example, based on, say, English, of the simplest incomplete grammar.

Even if grammars are not inherently incomplete, one may argue that individuals acquire aspects of a grammar over time.  I vaguely recall that certain grammatical structures are in fact acquired at different ages as children learn languages.  Moreover, there are some built-in conflicts in the grammar of English (and probably just about any other language).  For example:

It’s me.  (Arguably based on the Norman French equivalent of modern French C’est moi).

It is I.  (Based on the rule that the verb to be takes the nominative case on both sides).

We’re truly unaccustomed to thinking about massively parallel computing.  Our approach to computing has been to create very fast single-threaded processors; and as an afterthought, ordinarily to take advantage of idle time, we have introduced multiprogramming.  I think it is fair to say that our excursions into the realm of massively parallel computing are still in their infancy.  Without having done a careful survey of the literature, I would say that the challenge of massively parallel computing (at least that which would be patterned after neural structures in the mammalian brain) is to be able to handle the large number of interconnections found in the brain as well as the large number of projections from place to place.  [However, it is emphatically not the case that in the brain everything is connected directly to everything else.  That would be impractical, and it’s hard to see what it would accomplish beyond confusion.]

To hazard a gross oversimplification of the computational architecture of the brain: the brain is composed of layers of neurons, each layer identified by its common synaptic distance from some source of input.  Layers are stacked like layers in a cake (giving rise to “columns”, identified by their association with predecessor and successor synapses).  To the extent the word “column” suggests a cylinder of roughly constant diameter, or even constant cross-section, it may be a bad choice of metaphor.  I imagine the diameter of a “column” to increase initially (as inputs pass deeper into the processor) and then to decrease (as signals that are to become outputs pass towards the effectors).  At various stages in the processing, intermediate outputs are transmitted to other areas (projections, via fiber bundles).  Depending on the stage of processing, a layer may receive synchronic input (that is, all inputs represent some class of inputs that originated at essentially the same moment in time, e.g., visual input from the retina) or it may receive diachronic input (that is, a set of samples over time that originated at essentially the same location).  Indeed, some layers may receive both synchronic and diachronic inputs.
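To keep the distinction straight for myself, here is a schematic sketch with invented data and placeholder transformations: a “synchronic” layer works on many locations at a single instant, while a “diachronic” layer works on one location over a window of recent instants.

```python
# Schematic sketch of synchronic vs. diachronic input; data and
# transformations are invented for illustration only.

from collections import deque

class SynchronicLayer:
    def receive(self, frame):
        # frame: readings from many sensors, all taken at the same moment
        return [x * 2 for x in frame]          # placeholder transformation

class DiachronicLayer:
    def __init__(self, window=3):
        self.history = deque(maxlen=window)

    def receive(self, sample):
        # sample: one sensor's reading; the layer works on the recent window
        self.history.append(sample)
        return sum(self.history) / len(self.history)  # placeholder transformation

sync = SynchronicLayer()
dia = DiachronicLayer()
print(sync.receive([0.1, 0.9, 0.4]))   # one instant, many locations
for sample in [0.1, 0.9, 0.4]:
    print(dia.receive(sample))         # one location, successive instants
```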

We don’t know much about how to think about the functions computed (computable) by such a system.  Not to mention that I don’t know much of anything about synaptic transmission.  Yeah, yeah, neurotransmitters pour into the synaptic gap.  Some of them are taken up by receptors on the far side of the gap, and if enough of them arrive, the receiving neuron fires.  But there are lots of different neurotransmitters.  Why?  How do the stellate glia affect the speed and nature of the pulses and slow potentials?  Do concentrations of neurotransmitters change globally?  Locally?

Somebody pointed out (Damasio?) that “homeostasis” is not really a good metaphor because the “set point” (my term) of the system changes depending on circumstances.  In some cases, it’s clear what goes on: Too much water in the system?  Excrete water.  But the other side of that: Too much salt in the system?  Conserve water?  Well, yes, but what needs to happen is the triggering of an appetitive state that leads to locating a source of water (in some form, e.g., a water tap, a pond, a peach) and taking the appropriate steps to make that water internally available (e.g., get a glass, open the tap, fill the glass, drink the water; stick face in water, slurp it up; eat the peach).

At its core, this is a sort of low-level optimizer.  Based on the readings of a set of enteroceptors (sensors), create an internal state that either modifies the internal environment directly or that “motivates” (“activates”) behaviors that will indirectly modify the internal state.
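To make that concrete for myself, here is a toy sketch in Python of such an optimizer.  The sensor names, set points, and behavior labels are all invented for illustration; the only point is the shape of the loop: read the interoceptors, compare against (adjustable) set points, correct internally where possible, and otherwise prime an externally directed behavior.

```python
# A minimal sketch of the "low-level optimizer" idea above.
# Variable names, set points, and behaviors are hypothetical placeholders.

def homeostatic_step(readings, set_points, tolerance=0.1):
    """Compare interoceptive readings to set points and return either
    direct internal corrections or primed ("motivated") behaviors."""
    actions = []
    for variable, value in readings.items():
        error = value - set_points[variable]
        if abs(error) <= tolerance:
            continue  # within bounds: no homeostatic problem
        if variable == "water" and error < 0:
            # cannot be fixed internally: prime externally directed behavior
            actions.append(("motivate", "seek_water", abs(error)))
        else:
            # can be fixed internally (e.g., excrete the excess)
            actions.append(("internal", variable, -error))
    return actions

# Example: too little water, slightly too much salt.
print(homeostatic_step({"water": 0.6, "salt": 1.2},
                       {"water": 1.0, "salt": 1.0}))
```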

It’s all very well to say that if one drips hypersaline solution into the CSF by the hypothalamus, the goat “gets thirsty and drinks lots of water,” but an awful lot has to happen on the way.

And it’s not very safe for the optimizer (governor?) to call for specific external behavior.  It can specify the goal state and monitor whether the organism is getting closer to or farther away from the goal state, but it’s not clear (with respect to thirst, say) how the information about getting closer to the goal can get to the optimizer at any time before an appropriate change in the internal environment is detected, e.g., the organism begins to ingest something that triggers the “incoming water” detectors.  Prior to that, it’s all promises.  Presumably, it goes something like this: behaviors “associated” with triggering the “incoming water” detectors are “primed”.  How?  Maybe by presentation of the feeling of thirst.  Priming of those behaviors triggers back-chained behaviors associated with the initiation of the “directly” primed behaviors.  And so on, like ripples in a pond.  The ever-widening circles of primed behaviors are looking for triggers that can be found in the current environment (more correctly, that can be found in the current internal environment as it represents the current external environment).

[Back-chaining seems related to abduction, the process of concocting hypotheses to account for observed circumstances.]
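Here is a toy sketch of that back-chained priming, with an invented behavior graph: activation starts at the behaviors most directly associated with the goal and spreads outward, weakening, to the behaviors that lead to them, like the ripples in the pond.

```python
# Sketch of back-chained priming over an invented behavior graph.
from collections import deque

def prime_behaviors(goal, leads_to, decay=0.5):
    """leads_to[b] is the set of behaviors whose completion enables b.
    Returns each behavior's priming strength, fading with distance from goal."""
    primed = {goal: 1.0}
    frontier = deque([goal])
    while frontier:
        current = frontier.popleft()
        for earlier in leads_to.get(current, ()):
            strength = primed[current] * decay
            if strength > primed.get(earlier, 0.0):
                primed[earlier] = strength
                frontier.append(earlier)
    return primed

leads_to = {
    "drink_water": {"fill_glass", "slurp_from_pond"},
    "fill_glass": {"get_glass", "open_tap"},
    "get_glass": {"walk_to_kitchen"},
}
print(prime_behaviors("drink_water", leads_to))
```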

I keep coming around to this pattern matching paradigm as an explanation of all behavior.  It’s really a variation of innate releasing mechanisms and fixed action patterns.

030708 – Computer consciousness

Tuesday, July 8th, 2003

030708 – Computer consciousness

I begin to understand the temptation to write papers that take the form of diatribes against another academic’s position.  I just found the abstract of a paper written by someone named Maurizio Tirassa in 1994.  In the abstract he states, “I take it for granted that computational systems cannot be conscious.”

Oh dear.  I just read a 1995 response to Tirassa’s paper by someone in the department of philosophy and the department of computer science at Rensselaer Polytechnic Institute who says we must remain agnostic toward dualism.  Note to myself: stay away from this kind of argument; it will just make me crazy.

For the record: I take it for granted that computational systems can be conscious.  I do not believe in dualism.  There is no Cartesian observer.

I do like what Rick Grush has to say in his 2002 article “An introduction to the main principles of emulation: motor control, imagery, and perception”.  He posits the existence of internal models that can be disconnected from effectors and used as predictors.

Grush distinguishes between simulation and emulation.  He states that, “The difference is that emulation theory claims that mere operation of the motor centers is not enough, that to produce imagery they must be driving an emulator of the body (the musculoskeletal system and relevant sensors).”  He contrasts what he calls a “motor plan” with “motor imagery”.  “Motor imagery is a sequence of faux proprioception.  The only way to get … [motor imagery] is to run the motor plans through something that maps motor plans to proprioception and the two candidates here are a) the body (which yields real proprioception), and b) a body emulator (yielding faux proprioception).”

What’s nice about this kind of approach is that its construction is evolutionarily plausible.  That is, the internal model is used both for the production of actual behavior and for the production of predictions of behavior.  Evolution seems to like to repurpose systems so long as the systems are reasonably modular.

Grush distinguishes between what he calls “modal” and “amodal” models.  “Modal” models are specific to a sensory modality (e.g., vision, audition, proprioception) and “amodal” models (although he writes as if there were only one) model the organism in the universe.  I do not much care for the terminology because I think it assumes facts not in evidence, to wit: that the principal distinguishing characteristic is the presence or absence of specificity to a sensory modality.  I also think it misleads in that it presumes (linguistically at least) to be an exhaustive categorization of model types.

That said, the most interesting thing in Grush for me is the observation that the same internal model can be used both to guide actual behavior and to provide imagery for “off-line” planning of behavior.  I had been thinking about the “on-line” and “off-line” uses of the language generation system.  When the system is “on-line”, physical speech is produced.  When the system is “off-line”, its outputs can be used to “talk to oneself” or to write.  Either way, it’s the same system.  It doesn’t make any sense for there to be more than one.

When a predator is crouched, waiting to spring as soon as the prey it has spotted comes into range, arguably it has determined how close the prey has to come for a pounce to be effective.  The action plan is primed, it’s a question of waiting for the triggering conditions (cognitively established by some internal mental model) to be satisfied.

It is at least plausible to suggest that if evolution developed modeling and used it to advantage in some circumstances, modeling will be used in other circumstances where it turns out to be beneficial.  I suppose this is a variant of Grush’s Kalman filter argument, which says that Kalman filters turn out to be a good solution to a problem that organisms have, and it would not be surprising to discover that evolution has hit upon a variant of Kalman filters to assist in dealing with that problem.
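For my own reference, here is a minimal one-dimensional Kalman filter, just to have a concrete instance of the emulator idea in front of me.  The same few lines can be run with a measurement (on-line, real feedback) or without one (off-line, pure prediction, i.e., imagery).  The parameters are arbitrary illustration values, not anything taken from Grush.

```python
# Minimal scalar Kalman filter: the same internal model serves both
# on-line estimation and off-line prediction.  Parameter values are
# arbitrary illustration values.

def kalman_step(x, p, u, z=None, a=1.0, b=1.0, q=0.01, h=1.0, r=0.1):
    # predict: run the internal model forward under command u
    x_pred = a * x + b * u
    p_pred = a * p * a + q
    if z is None:       # off-line: no measurement, the prediction is the estimate
        return x_pred, p_pred
    # update: correct the prediction with the actual measurement z
    k = p_pred * h / (h * p_pred * h + r)
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1 - k * h) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
x, p = kalman_step(x, p, u=0.5, z=0.45)   # on-line: use sensory feedback
x, p = kalman_step(x, p, u=0.5)           # off-line: imagery / pure prediction
print(x, p)
```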

It’s clear (I hope, and if not, I’ll make an argument as to why) that a mobile organism gains something by having some kind of model (however rudimentary) of its external environment.  In “higher” organisms, that model extends beyond the range of that which is immediately accessible to its senses.  It’s handy to have a rough idea of what is behind one without having to look around to find out.  It’s also handy to know where one lives when one goes for a walk out of sight of one’s home.

Okay, so we need an organism-centric model of the universe, that is, one that references things outside the organism to the organism itself.  But more interestingly, does this model include a model of the organism itself?

Certain models cannot be inborn (or at least the details cannot be).  What starts to be fun is when the things modeled have a mind of their own (so to speak).  It’s not just useful to humans to be able to model animals and other humans (to varying degrees of specificity and with varying degrees of success).  It would seem to be useful to lots of animals to be able to model animals and other conspecifics.

What is the intersection of “modeling” with “learning” and “meaning”?  How does “learning” (a sort of mental sum of experience) interact with ongoing sensations?  “Learning” takes place with respect to sensible (that is, capable of being sensed) events involving the organism, including things going on inside the organism that are sensible.  Without letting the concept get out of hand, I have said in other contexts that humans are voracious pattern-extractors.  “Pattern” in this context means a model of how things work.  That is, once a pattern is “identified” (established, learned), it tends to assert its conclusions.

This is not quite correct.  I seem to be using “pattern” in several different ways.  Let’s take it apart.  The kicker in just about every analysis of “self” and “consciousness” is the internal state of the organism.  Any analysis that fails to take into account the internal state of the organism at the time a stimulus is presented is not, in general, going to do well in predicting the organism’s response.  At the same time, I am perfectly willing to assert that the organism’s response—any organism’s response—is uniquely determined by the stimulus (broadly construed) and the organism’s state (also broadly construed).  Uniquely determined.  Goodbye free will.  [For the time being, I am going to leave it to philosophers to ponder the implications of this fact.  I am sorry to say that I don’t have a lot of faith that many of them will get them right, but some will.  This is just one of many red herrings that make it difficult to think about “self” and “consciousness”.]

Anyway, when I think about the process, I think of waves of data washing over and into the sensorium (a wonderfully content-free word).  In the sensorium are lots of brain elements (I’m not restricting this to neurons because there are at least ten times as many glia listening in and adding or subtracting their two cents) that have been immersed in this stream of information since they became active.  They have “seen” a lot of things.  There have been spatio-temporal-modal patterns in the stream, and post hoc ergo propter hoc many of these patterns have been “grooved”.  So, when data in the stream exhibit characteristics approximating some portion of a “grooved” pattern, other brain elements in the groove are activated to some extent, the extent depending on all sorts of things, like the “depth” of the “groove”, the “extent” of the match, etc.

In order to think about this more easily, remember that the sensorium does not work on just a single instantaneous set of data.  It takes some time for data to travel from neural element to neural element.  Data from “right now” enter the sensorium and begin their travel “right now”, hot on the heels of data from just before “right now”, and cool on the heels of data from a bit before “right now” and so on.  Who knows how long data that are already in the sensorium “right now” have been there.  [The question is, of course, rhetorical.  All the data that ever came into the sensorium are still there to the extent that they caused alterations in the characteristics of the neural elements there.  Presumably, they are not there in their original form, and more of some are there than of others.]  The point is that the sensorium “naturally” turns sequential data streams into simultaneous data snapshots.  In effect, the sensorium deals with pictures of history.
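A toy rendering of that “pictures of history” idea: a short chain of elements, each one step farther from the input, so that at any instant the most recent samples are all present simultaneously.  The depth and the inputs are invented for illustration.

```python
# Toy sketch: a chain of elements turns a sequential stream into a
# simultaneous snapshot of recent history.
from collections import deque

class Sensorium:
    def __init__(self, depth):
        self.chain = deque([None] * depth, maxlen=depth)

    def step(self, sample):
        self.chain.appendleft(sample)   # "right now" enters the first element
        return list(self.chain)         # the simultaneous picture of history

s = Sensorium(depth=4)
for t in range(6):
    print(s.step(t))   # ends with [5, 4, 3, 2]: the last four moments at once
```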

Now back to patterns.  A pattern may thus be static (as we commonly think of a pattern), and at the same time represent a temporal sequence.  In that sense, a pattern is a model of how things have happened in the past.  Now note that in this massively parallel sensorium, there is every reason to believe that at any instant many many patterns have been or are being activated to a greater or lesser extent and the superposition (I don’t know what else to call it) of these patterns gives rise to behavior in the following way.

Some patterns are effector patterns.  They are activated (“primed” is another term used here, meaning activated somewhat, but not enough to be “triggered”) by internal homeostatic requirements.  I’m not sure I am willing to state unequivocally that I believe all patterns have an effector component, but I’m at least willing to consider it.  Maybe not.  Maybe what I think is that data flows from sensors to effectors and the patterns I am referring to shape and redirect the data (which are ultimately brain element activity) into orders that are sent to effectors.

That’s existence.  That’s life.  I don’t know what in this process gives rise to a sense of self, but I think the description is fundamentally correct.  Maybe the next iteration through the process will provide some clues.  Or the next.  Or the next.

Hunger might act in the following way.  Brain elements determine biochemically and biorhythmically that it’s time to replenish the energy resources.  So data begin to flow associated with the need to replenish the energy resources.  That primes patterns associated with prior success replenishing the energy resources.  A little at first.  Maybe enough so if you see a meal you will eat it.  Not a lot can be hard-wired (built-in) in this process.  Maybe as a baby there’s a mechanism (a built-in pattern) that causes fretting in response to these data.  But basically, what are primed are patterns the organism has learned that ended up with food being consumed.  By adulthood, these patterns extend to patterns as complex as going to the store, buying food, preparing it, and finally consuming it.

This is not to say that the chain of determinism imposes rigid behaviors.  Indeed, what is triggered deterministically is a chain of opportunism.  Speaking of which, I have to go to the store to get fixings for dinner.  Bye.

030604 – Wants (more)

Wednesday, June 4th, 2003

030604 – Wants (more)

Could it be that the fundamental nature of wanting is IRMs (innate releasing mechanisms) and FAPs (fixed action patterns)?  Certainly IRMs and FAPs have a long and honorable evolutionary history.  There is certainly reason to say that lower animals are a soup of IRMs and FAPs.  Why not higher animals, too?  If I don’t know what I want until I see what I do, is that just a way of saying that I don’t have direct access to my IRMs?  Or is that just silly?

And what does it make sense for evolution to select as generic wants to be activated when there’s nothing pressing?  How about something like

–    Learn something new
–    Acquire a new skill (What’s a skill?  A complex perceptual motor pattern?)
–    Practice an acquired skill
–    Think about something interesting (What’s interesting?)
–    Stimulate yourself
–    Play with the external world (What’s play?)

You can’t have a theory of consciousness without including:

–    Wanting (approach)
–    Absence of wanting / indifference
–    Negatively directed wanting / wanting not (avoidance)
–    Learning
–    Skill acquisition (Perceptual / Motor Learning)
–    Imitation (human see, human do)
–    Pleasure / Satisfaction
–    Pain / Frustration
–    Salience / Interest
–    Metaphor

[Is this my own rediscovery of what Jerry Fodor (and presumably many others) call propositional attitudes?  Some of the items are, but others are not.]

If you stick out your tongue at a baby, from a very early age, the baby will imitate the action.  But the baby can’t see its tongue, so how does it know what to do?  It’s a visual stimulus, but the mirroring is not visual.  Now, it’s possible that a baby can see its tongue, if it sticks it out far enough, but unless the baby has spent time in front of a mirror, there’s no reason to believe the baby has ever seen its own face head-on (as it were).

Children want to do what they see their older siblings doing.  It seems to be innate.  It would seem to be rather peculiar to argue that children learn to want to imitate.  But how does a child (or anybody, for that matter) decide what it wants to imitate now?  There’s “What do I do now?”  “Imitate.”  And “What do I want to imitate?”

A “high performance skill” (Schneider 1985): more than 100 hours of specialist training required; substantial numbers of trainees fail to acquire proficiency; performance of adepts is qualitatively different (whatever that means) from that of non-adepts.  There are lots of examples of high performance skills.  People spend lots of time practicing sports, learning to work machinery, etc.  Why?  Improving a skill (developing a skill and further developing it) is satisfying.  Does general knowledge count as a skill?  Can we lump book learning with horsemanship?

What about Henry Molaison, whose perceptual motor skills improved even though he did not consciously recognize the testing apparatus?  Not really a problem.  There’s a sense in which the development of perceptual motor skills is precisely intended to create motor programs that don’t require problem solving on-the-fly.  Ha!  We can create our own FAPs!  [This is like blindsight.  Things that do not present themselves to the conscious-reporting system (e.g., Oh, yeah, I know how to do this pursuit rotor thing.) are available to be triggered as a consequence of consciously reportable intentions and states of mind (e.g., I’m doing this pursuit rotor thing.).  So part of what we learn to do consciously is learned and stored in non-reportable form (cf. Larry Squire’s papers on the topic).  But in the case of blindsight, some trace of detectability is present.]

But if we can create our own FAPs, we must also create our own IRMs.  That means we have to create structures (patterns) that stretch from perceptions to behaviors.  Presumably, they are all specializations.  We create shortcuts.  If shortcuts are faster (literally) then they will happen first.  In other words, the better you get at dealing with a particular pattern, the more likely that pattern will be able to get to the effectors (or to the next stage of processing) first.   Is that what lateral inhibition does?  It gives the shortcut enough precedence to keep interference from messing things up.  In other words, lateral inhibition helps resolve race conditions.  [“Race conditions” reminds me that synchronous firing in the nervous system proceeds faster than anything else.]
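A toy version of that race, with invented numbers: each competing pattern inhibits the others in proportion to its own activation, so a small head start for the shortcut turns into a clean win instead of endless interference.

```python
# Sketch of lateral inhibition resolving a race between competing
# patterns.  Pattern names and activation values are invented.

def lateral_inhibition(activations, inhibition=0.4, steps=10):
    acts = dict(activations)
    for _ in range(steps):
        total = sum(acts.values())
        acts = {
            name: max(0.0, a - inhibition * (total - a))
            for name, a in acts.items()
        }
    return acts

# The "shortcut" pattern starts only slightly ahead but ends up alone.
print(lateral_inhibition({"shortcut": 0.55, "rival_a": 0.50, "rival_b": 0.45}))
```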

Consciousness (whatever that means, still) is a tool for learning or for dealing with competing IRM/FAPs.  What do I mean “dealing with”?  Selecting among them, strengthening them or weakening them, refining them.  (There.  I got revising, which was close, but not quite correct.  I typed it and then I got refining, which was le mot juste (and it varies only in two consonants: /f/ for /v/, which is only unvoiced for voiced, and /n/ for /s/, which have no connection as far as I can tell)).  [Find research on tip-of-the-tongue (TOT) phenomena.]

TOT: “partial activation” model v. “interference” model.  It seems to me that these are the same thing in my model of shortcuts and races.

The problem of observational learning: assuming that human infants are primed to learn from observation (or is it that they are primed to imitate actions they perceive, particularly humanish actions?).  Suppose moreover that humans have a way of segmenting perceptions and associating the segments.  Be real careful here: Marr suggests that visual inputs get taken apart and pieces processed hither, thither, and yon.  They never need to get put together because there’s no Cartesian observer.  So associations between percepts and imitative action patterns are spread out (multi-dimensional, if you will) without the need to segment the patterns any more than they are naturally.

As Oliphant (1998? submitted to Cognitive Behavior, p.15) says, “Perhaps it is an inability to constrain the possible space of meanings that prevents animals from using learned systems of communication, even systems that are no more complicated than existing innate signaling systems.”

Oliphant also says (1998? submitted to Cognitive Behavior, p.15), “When children learn words, they seem to simplify the task of deciding what a word denotes through knowledge of the existence of taxonomic categories (Markman, 1989), awareness of pragmatic context (Tomasello, 1995), and reading the intent of the speaker (Bloom, 1997).”  [Are some or all of these consequences of the development of attractor basins?  Is part of the developmental / maturational process the refinement of the boundaries of attractor basins?  Surely.]

It begins to feel as if imitation is key.  Is the IRM human-see and the FAP human-do?  Refinement is also the name of the game: patterns (input and output) can be refined with shortcuts.  There are innate groundings.  The innate groundings are most likely body-centric, but then again, imitation has an external stimulus: the behavior to imitate.

I’ve been finding lots of AI articles about cognitive models that use neural networks.  Granting that they are by nature schematic oversimplifications, there is one thing that seems to characterize all of them, and it’s something that has bothered me about neural networks all along: they assume grandmother-detectors.  That is, they have a set of input nodes that fire if and only if a particular stimulus occurs.  The outputs are similarly specific: each output node fires to signal a specific response.  Of course, this is pretty much a description of the IRM / FAP paradigm and, following Oliphant (1998?), the interesting problems seem to be happening in the system before and after this kind of model.

There are two easy ways of initializing a neural network simulation: set all weights to zero or set the weights to random values.  But assuming that what goes on in the brain bears at least some resemblance to what goes on in a neural network simulation, it seems clear that evolution guarantees that neither of these initialization strategies is used ontogenetically.  Setting all connection strengths to zero gives you a vegetable, and setting connection strengths randomly gives you a mess.  Surely evolution has found a better starting point.  [Cf. research on ontogenetic self-organization.]
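A tiny illustration of why I think neither starting point is plausible, using an invented two-node “layer”: zero weights give the vegetable (every input produces the same null output), random weights give the mess, and some schematic built-in structure at least gives learning something to refine.

```python
# Comparing three starting points for a toy layer's weights.
# Network size, inputs, and the "structured" weights are invented.
import random

def layer_output(weights, inputs):
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

inputs = [1.0, 0.0, 0.5]

zero_weights = [[0.0] * 3 for _ in range(2)]
random.seed(1)
random_weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
# a schematically "innate" starting point: weak, orderly connections
structured_weights = [[0.3, 0.0, 0.0], [0.0, 0.3, 0.3]]

print("zero:      ", layer_output(zero_weights, inputs))    # always [0.0, 0.0]
print("random:    ", layer_output(random_weights, inputs))  # arbitrary values
print("structured:", layer_output(structured_weights, inputs))
```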

One researcher’s baby is another researcher’s bathwater.  Hmmm.  Ain’t thinking grand?

Given that there aren’t grandmother detectors [although there are some experiments that claim Raquel Welch detectors, I think] and that there are not similarly specific effectors, we are back to Lashley’s problem of serial behavior.  What keeps the pandemonium from just thrashing?  I keep coming back to a substrate of plastic (i.e., tunable, mutable, modifiable, subsettable, short-cuttable) IRMs and FAPs.  Babies don’t get “doggie” all at once.  There seems to be a sort of bootstrap process involved.  Babies have to have enough built in to get the process started.  From that point on, it’s successive refinement.

I wrote “invisible figre” then stopped.  My intention had been to write “invisible fingers”.  I had been reading French.   I don’t know for [shure] sure how the ‘n’ got lost, but the “gre” would have been a Frenchified spelling and “figre” would not have had the nasalized consonant that would have (if pronounced in French) produced “fingres”.

All these little sensory and motor homunculi in the cortex—maybe what they are telling us is pretty much what Lakoff was saying, namely that our conception of the universe is body-centric.  Makes good sense.  That’s where the external universe impinges upon us and that’s where we impinge on the external universe.  I couldn’t think of a better reference system.

Chalmers (The Conscious Mind, 1996) believes that zombies are logically possible because he can imagine them.  He believes that a reductionist explanation of consciousness is impossible.  It is certainly true that it is a long jump from the physics of the atom to the dynamics of Earth’s atmosphere that give rise to meteorological phenomena, but we don’t for that reason argue that a reductionist explanation is impossible.  Yes, it’s a hard problem, but it requires poking one hell of a big hole in our understanding of physics to believe that a scientific explanation is impossible and therefore consciousness must be supernatural.  I don’t think I want to read his book now.  I feel it will be like reading a religious tract arguing that evolution is impossible.  As my Spanish literature professor Juan Marichal once observed, apropos of a book written by a Mexican author who had conceived a virulent hatred for Cortez (from a vantage point 400 years after the conquest of Mexico), it is possible to learn something even from works written by people who have peculiar axes to grind.  So maybe sometime I’ll revisit Chalmers, but not now.

Antonio Damasio (1999, The Feeling of What Happens: Body and Emotion in the Making of Consciousness).  The trouble with neural nets is often that they have no memory other than the connection weights acquired during training.  A new set of data erases or modifies the existing weights rather than taking into account what had been learned thus far.  Learning from experience means that there is some record of past experience to learn from.  Of course, that may just be the answer: memory systems serve to counterbalance the tendency to oscillate or to go with the latest fad.  If a new pattern has some association with what has gone before, then what has gone before will shape the way in which the new pattern is incorporated.  If there is a long-term record of an old pattern, it will still be available at some processing stage even if the new pattern becomes dominant at some other processing stage.  So, it may not be necessary to solve in a single stage of processing the problem of new data causing forgetfulness.
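A toy demonstration of the complaint, using a two-weight linear learner and invented data: weights trained on one pattern set and then on another largely forget the first, while a separate frozen record of the earlier learning remains usable.

```python
# Toy demonstration of new data overwriting old learning, and of a
# second, slower store preserving it.  Data and learning rate are
# arbitrary illustration values.

def train(weights, data, lr=0.5, epochs=50):
    for _ in range(epochs):
        for x, target in data:
            y = sum(w * xi for w, xi in zip(weights, x))
            err = target - y
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
    return weights

def error(weights, data):
    return sum((t - sum(w * xi for w, xi in zip(weights, x))) ** 2
               for x, t in data)

task_a = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]
task_b = [([1.0, 0.0], 0.0), ([0.0, 1.0], 1.0)]

w = train([0.0, 0.0], task_a)
long_term_record = list(w)     # a frozen record of the earlier learning
w = train(w, task_b)           # new data overwrite the shared weights

print("error on A after learning B:", error(w, task_a))                    # large
print("error on A from the old record:", error(long_term_record, task_a))  # small
```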

Learning has to be going on at multiple levels simultaneously.  Alternatively, there are nested (layered? as in cortical layers) structures that feed information forward, so some structures learn from direct inputs and subsequent structures learn from the outputs of the structures that get direct inputs and so on.

Antonio Damasio (1999) has given me the idea that will, I think, account for wanting.  Homeostasis.  The argument goes like this.  In unicellular organisms, homeostasis doesn’t have a lot of ways to operate.  When an organism becomes mobile, homeostatic processes can trigger behaviors that with better than chance probability (from an evolutionary standpoint) result in internal state changes that serve to maintain homeostasis.  In effect, evolution favors behaviors that can be triggered to achieve homeostatic goals.

In complex organisms, there are homeostatic mechanisms that work on the internal environment directly, but there are some internal environment changes for which it is not possible to compensate adequately by modifying the internal environment directly.  Thence, hunger.  Hunger is how we experience the process that is initiated when homeostatic mechanisms detect an insufficiency of fuel.  (Actually, it’s probably more sophisticated than that—more like detection of a condition in which the reserve of fuel drops below a particular threshold—and maybe there are multiple thresholds, but the broad outline is clear.)

All organisms have phylogenetically established (built-in) boot processes for incorporating food.  In mammals, there is a rooting reflex and a suckle reflex.  Chewing (which starts out as gumming, but who’s worrying?) and swallowing are built-ins as well.  But those only help when food is presented.  Problem: how to get food to be presented?  Well, if food is presented before hunger sets in, it’s not a homeostatic problem.  If not, homeostatic mechanisms switch the organism into “need-fuel mode”.  In “need-fuel” mode, organisms do things that tend to increase the likelihood that fuel will become available.  Babies fuss, and even cry, sometimes lots and loudly.

Pain is another place where internal homeostatic processes intersect with the external universe.  Pain is how we experience the process that is initiated when homeostatic sensors detect deviations from internal stability that arise from a physical process (heat, cold, puncture, etc.).  Again, evolution has sophisticated the process somewhat.  The pain process arises when a threshold condition is passed.  Pain does not wait for serious damage to take place, pain is triggered when it’s time to take action to prevent serious damage.

Pain actually has to be a bit subtle, too.  Some pain may and should be ignored.  If fight is an alternative to flight, then fight arguably ups the threshold for debilitating pain.

There are other obvious situations in which homeostatic considerations require some action with respect to the outside world.  Urination and defecation are two.  Similarly, vomiting (with its warning homeostatic signal, nausea).

Our wanting, then, has its origin as the experience of a process that responds to some (serious or prospectively serious) homeostatic imbalance.

As an aside, I want to propose that one of the characteristics that distinguishes reptiles from mammals is that when a reptile is in reasonable homeostatic equilibrium, it does nothing.  When a mammal is in the same state, it does something—explores its environment, plays, writes poetry, etc.  In the most general terms, it sets out to learn something.  This characteristic arguably confers at least a marginal advantage to animals that possess it, viz. it is possible that something learned in the absence (at the time) of any pressing need will turn out to be valuable in dealing with future situations in which there will be no opportunity to learn it.  So, the concept of homeostasis has to be broadly construed.

My central point, however, is that ultimately our wants, wishes, desires, dislikes, disgusts, and delights all refer to internal homeostatic processes.  The fact that there are so many distinguishable variants of wanting suggests to me that the many shades of our experience reflect the many kinds of homeostatic processes that have been phylogenetically established in our brains and bodies, each presumably for the most part having proved advantageous over evolutionary time.