030715

July 15th, 2003

Hauser, Chomsky, and Fitch in their Science review article (2002) indicate that “comparative studies of chimpanzees and human infants suggest that only the latter read intentionality into action, and thus extract unobserved rational intent.”  This goes along with my own conviction that internal models are significant in the phenomenon of human consciousness and self-awareness.

Hauser, Chomsky, and Fitch argue that “the computational mechanism of recursion,” which is critical to language ability, “is recently evolved and unique to our species.”  I am well aware that many have died attempting to oppose Chomsky and his insistence that practical limitations have no place in the description of language capabilities.  I am reminded of Dennett’s discussion of the question of whether zebra is a precise term, that is, whether there exists anything that can be correctly called a zebra.  It seems fairly clear that Chomsky assumes that language exists in the abstract (much the way we naively assume that zebras exist in the abstract) and then proceeds to draw conclusions based on that assumption.  The alternative is that language, like zebras, is in the mind of the beholder, but that when language is placed under the microscope it becomes fuzzy at the boundaries precisely because it is implemented in the human brain and not in a comprehensive design document.

Uncritical acceptance of the idea that our abstract understanding of the computational mechanism of recursion is anything other than a convenient crutch for understanding the way language is implemented in human beings is misguided.  In this I vote with David Marr (1982) who believed that neither computational iteration nor computational recursion is implemented in the nervous system.

On the other hand, it is interesting that a facility which is at least a first approximation to the computational mechanism of recursion exists in human beings.  Perhaps the value of the mechanism from an evolutionary standpoint is that it does make possible the extraction of intentionality from the observed behavior of others.  I think I want to turn that around.  It seems reasonable to believe that the ability to extract intentionality from observed behavior would confer an evolutionary advantage.  In order to do that, it is necessary to have or create an internal model of the other in order to get access to the surmised state of the other.

Once such a model is available it can be used on-line to surmise intentionality and it can be used off-line for introspection, that is, it can be used as a model of the self.  Building from Grush’s idea that mental imagery is the result of running a model in off-line mode, we may ask what kind of imagery would result from running a model of a human being off-line.  Does it create an image of a self?

Alternatively, since all of the other models proposed by Grush are models of some aspect of the organism itself, it might be more reasonable to suppose that a model of the complete self could arise as a relatively simple generalization of the mechanism used in pre-existing models of aspects of the organism.

If one has a built-in model of one’s self in the same way one has a built-in model of the musculoskeletal system, then language learning may become less of a problem.  Here’s how it would work.  At birth, the built-in model is rudimentary and needs to be fine-tuned to bring it into closer correspondence with the system it models.  An infant is only capable of modeling the behavior of another infant.  Adults attempting to teach language skills to infants use their internal model to surmise what the infant is attending to and then name it for the child.  To the extent that the adult has correctly modeled the infant and the infant has correctly modeled the adult (who has tried to make it easy to be modeled), the problem of establishing what it is that a word refers to becomes more tractable.

030714

July 14th, 2003

Here’s what’s wrong with Dennett’s homunculus exception.  It’s a bit misleading to discuss a flow chart for a massively parallel system.  We’re accustomed to high bandwidth interfaces between modules where high bandwidth is implemented as a high rate of transmission through a narrow pipe.  In the brain, high bandwidth is implemented as a leisurely rate of transmission through the Mississippi river delta.

030711

July 11th, 2003

The body is hierarchical in at least one obvious way.  In order to make a voluntary movement, a particular muscle is targeted, but the specific muscle cell doesn’t matter.  What matters is the strength of the contraction.  In assessing the result of the contraction, what matters is the change in position of the joint controlled by the muscle, not the change in position of specific muscle cells.  Thus, an internal model needs only to work with intensity of effort, predicted outcome, and perceived outcome.  This kind of model is something that computer “neural networks” can shed some light on.  Certainly, there are more parameters, like “anticipated resistance” but there are probably not an overwhelming number of them.

The point of this is that, as Grush (2002) points out, the internal model has to be updateable in order to enable the organism to handle changes to its own capabilities over time.  At least at this level, Hebbian learning (as if I knew exactly what that denoted) seems sufficient.
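
To make this concrete for myself, here is a toy sketch (in Python, entirely my own invention, not something from Grush or the literature) of an updateable internal model of a single joint.  The gain parameter and the error-driven update rule are placeholders standing in for whatever Hebbian-style mechanism actually does the bookkeeping.

```python
# Toy sketch: an internal model of one joint that predicts the outcome of an
# effort, compares it with the perceived outcome, and updates itself.
# The "gain" parameter and the learning rule are stand-ins of my own devising.

class JointModel:
    def __init__(self, gain=1.0, learning_rate=0.1):
        self.gain = gain                # how much joint movement per unit of effort
        self.learning_rate = learning_rate

    def predict(self, effort):
        """Predicted change in joint angle for a given intensity of effort."""
        return self.gain * effort

    def update(self, effort, perceived_outcome):
        """Nudge the model toward agreement with what actually happened."""
        error = perceived_outcome - self.predict(effort)
        self.gain += self.learning_rate * error * effort


def actual_joint(effort):
    # The real musculoskeletal system, which the model only approximates.
    # Its true gain here is 1.5, and the model has to discover that.
    return 1.5 * effort


model = JointModel()
for _ in range(50):
    effort = 0.8
    model.update(effort, actual_joint(effort))

print(round(model.gain, 2))   # converges toward 1.5
```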

030710

July 10th, 2003

Here’s what Daniel Dennett has to say (passage said to be cited in Pinker 1997, How the Mind Works, p. 79) about homunculi:

Homunculi are bogeymen only if they duplicate entire the talents they are rung in to explain….  If one can get a team or committee of relatively ignorant, narrow-minded, blind, homunculi to produce the intelligent behavior of the whole, this is progress.  A flowchart is typically the organizational chart of a committee of homunculi (investigators, librarians, accountants, executives); each box specifies the homunculus by prescribing a function without saying how it is accomplished (one says, in effect: put a little man in there to do the job).  If we look closer at the individual boxes we see that the function in each is accomplished by subdividing it via another flowchart into still smaller, more stupid homunculi.  Eventually this nesting of boxes within boxes lands you with homunculi so stupid (all they have to do is remember whether to say yes or no when asked) that they can be, as one says, “replaced by a machine.”  One discharges fancy homunculi from one’s scheme by organizing armies of idiots to do the work.

030709

July 9th, 2003

I think of the sensorium as being something like an extremely resonant bell.  Sensory inputs or rather the processed remnants of sensory inputs, that is to say the effects of sensory inputs, prime various patterns in the sensorium and alter the way that subsequent sensory inputs are processed.  This process manifests itself as what we call short-term memory, intermediate-term memory, long-term memory, as well as some form of learning.  Because we think of memory as the acquisition of factual information and not the development in the sensorium of patterns or the recognition of patterns, we’re not accustomed to thinking of short-term memory as learning.

Assuming that there is some sort of automatic recirculating mechanism, I wonder if the fact that in short-term (and long-term) memory experiments there is a clear effect favoring recall of the first item of a list is simply an artifact that results because generally the first item of a list is preceded by silence or by some irrelevant stimulus.  I wonder if short-term memory is some kind of more or less fixed time constant.  One might think of an initial stage of processing in which inputs are recirculated after some time.  This raises the question of whether observed limits on the number of memory chunks that can be stored in short-term memory are a result of the amount of time it takes for each chunk to be entered.  No, it’s probably much more complicated than that.  There is already an interaction between sensory inputs and pre-existing patterns from the get-go.  That’s why zillions of short-term memory experiments use nonsense syllables.

Rather than thinking of attention as adding processing power to particular sensory inputs it may make more sense to think of attention as a way of suppressing, or at least reducing, the strength of competing sensory inputs.  That of course makes more sense than thinking that the brain has excess processing capacity just lying around waiting to be called into action for the purpose of attention.
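
A crude way to picture the difference, with numbers I have simply made up:

```python
# Two caricatures of attention over the same sensory inputs (arbitrary units).
inputs = {"voice": 0.6, "traffic": 0.5, "itch": 0.4}
attended = "voice"

# (a) attention as extra processing power poured onto the attended input
boosted = {k: v * (2.0 if k == attended else 1.0) for k, v in inputs.items()}

# (b) attention as suppression of the competing inputs
suppressed = {k: v * (1.0 if k == attended else 0.3) for k, v in inputs.items()}

print(boosted)     # {'voice': 1.2, 'traffic': 0.5, 'itch': 0.4}
print(suppressed)  # {'voice': 0.6, 'traffic': 0.15, 'itch': 0.12}
# Either way the attended input wins; (b) does it without assuming spare capacity.
```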

How long is “now”?  I don’t think the question really has an answer.  I think the heterophenomenological experience of “now” depends on the contents of the recirculating short-term memory buffer.  When I talk about the recirculating short-term memory buffer I mean that at a certain point in the processing of incoming sensory inputs, the processed inputs are fed back to an earlier point in the processing and somehow combined with the current incoming sensory inputs.  At the same time, the processed inputs continue to be further processed.
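
A toy version of what I mean by recirculation, as a sketch of my own (the decay constant is arbitrary; the only point is the feedback of processed inputs into the stream):

```python
# Toy sketch of a recirculating buffer: at each step the new input is combined
# with a decayed copy of what was already circulating.

def recirculate(inputs, decay=0.7):
    trace = 0.0
    history = []
    for x in inputs:
        trace = x + decay * trace   # current input combined with the fed-back trace
        history.append(round(trace, 3))
    return history

# A single brief event keeps echoing for a while after the input has stopped.
print(recirculate([1, 0, 0, 0, 0]))   # [1, 0.7, 0.49, 0.343, 0.24]
```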

As I think more about “now” I realize that there are a number of different “now”s depending on the sensory modality.  Well, even that’s not right.  We know from various tachistoscopic experiments that there is a short-term visual buffer with a very short time constant, which suggests that there is a very short visual “now”.  I can’t think of any good evolutionary reason why each modality’s “now” should have the same time constant.

I see that I’ve written “the” recirculating short-term memory buffer.  I certainly don’t know that there’s only one, and I don’t know that any of my conclusions depend on there being only one.  Indeed I think that patterns recirculate with differing time constants depending in some way on the nature (whatever that means) of each pattern.

030708 – Computer consciousness

July 8th, 2003

I begin to understand the temptation to write papers that take the form of diatribes against another academic’s position.  I just found the abstract of a paper written by someone named Maurizio Tirassa in 1994.  In the abstract he states, “I take it for granted that computational systems cannot be conscious.”

Oh dear.  I just read a 1995 response to Tirassa’s paper by someone in the department of philosophy and the department of computer science at Rensselaer Polytechnic Institute who says we must remain agnostic toward dualism.  Note to myself: stay away from this kind of argument; it will just make me crazy.

For the record: I take it for granted that computational systems can be conscious.  I do not believe in dualism.  There is no Cartesian observer.

I do like what Rick Grush has to say in his 2002 article “An introduction to the main principles of emulation: motor control, imagery, and perception”.  He posits the existence of internal models that can be disconnected from effectors and used as predictors.

Grush distinguishes between simulation and emulation.  He states that, “The difference is that emulation theory claims that mere operation of the motor centers is not enough, that to produce imagery they must be driving an emulator of the body (the musculoskeletal system and relevant sensors).”  He contrasts what he calls a “motor plan” with “motor imagery”.  “Motor imagery is a sequence of faux proprioception.  The only way to get … [motor imagery] is to run the motor plans through something that maps motor plans to proprioception and the two candidates here are a) the body (which yields real proprioception), and b) a body emulator (yielding faux proprioception).”
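
Here is how I picture the two routes, as a toy sketch of my own (the “dynamics” are nothing but addition, and a real emulator would only approximate the body; none of this is Grush’s code):

```python
# Toy version of the distinction as I read it: the same motor plan can be sent
# to the body (yielding real proprioception) or to an emulator of the body
# (yielding faux proprioception).  Here the two mappings are identical to keep
# the sketch short.

def body(motor_plan, position=0):
    """The actual musculoskeletal system: it moves, then reports where it is."""
    for command in motor_plan:
        position += command
    return position                  # real proprioceptive report

def emulator(motor_plan, position=0):
    """An internal model of the body: same mapping, run off-line, nothing moves."""
    for command in motor_plan:
        position += command          # predicted consequence of each command
    return position                  # faux proprioceptive report (motor imagery)

plan = [2, 2, -1]
print(body(plan))       # 3 -- on-line: actual behavior plus real proprioception
print(emulator(plan))   # 3 -- off-line: motor imagery, no movement
```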

What’s nice about this kind of approach is that its construction is evolutionarily plausible.  That is, the internal model is used both for the production of actual behavior and for the production of predictions of behavior.  Evolution seems to like to repurpose systems so long as the systems are reasonably modular.

Grush distinguishes between what he calls “modal” and “amodal” models.  “Modal” models are specific to a sensory modality (e.g., vision, audition, proprioception) and “amodal” models (although he writes as if there were only one) model the organism in the universe.  I do not much care for the terminology because I think it assumes facts not in evidence, to wit: that the principal distinguishing characteristic is the presence or absence of specificity to a sensory modality.  I also think it misleads in that it presumes (linguistically at least) to be an exhaustive categorization of model types.

That said, the most interesting thing in Grush for me is the observation that the same internal model can be used both to guide actual behavior and to provide imagery for “off-line” planning of behavior.  I had been thinking about the “on-line” and “off-line” uses of the language generation system.  When the system is “on-line”, physical speech is produced.  When the system is “off-line”, its outputs can be used to “talk to oneself” or to write.  Either way, it’s the same system.  It doesn’t make any sense for there to be more than one.

When a predator is crouched, waiting to spring as soon as the prey it has spotted comes into range, arguably it has determined how close the prey has to come for a pounce to be effective.  The action plan is primed, it’s a question of waiting for the triggering conditions (cognitively established by some internal mental model) to be satisfied.

It is at least plausible to suggest that if evolution developed modeling and used it to advantage in some circumstances, modeling will be used in other circumstances where it turns out to be beneficial.  I suppose this is a variant of Grush’s Kalman filters argument, which says that Kalman filters turn out to be a good solution to a problem that organisms have and it would not be surprising to discover that evolution has hit upon a variant of Kalman filters to assist in dealing with that problem.
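
For my own reference, the simplest possible version of the kind of estimator at issue: a one-dimensional Kalman filter.  The noise figures and observations are invented; this is a sketch of the textbook recursion, not a claim about neural implementation.

```python
# Minimal one-dimensional Kalman filter: combine a prediction from an internal
# model with a noisy observation, weighting each by how uncertain it is.

def kalman_step(estimate, variance, observation,
                process_noise=0.01, observation_noise=0.25):
    # Predict: the model says the state stays put, but uncertainty grows.
    variance += process_noise
    # Update: blend prediction and observation according to their reliability.
    gain = variance / (variance + observation_noise)
    estimate += gain * (observation - estimate)
    variance *= (1 - gain)
    return estimate, variance

estimate, variance = 0.0, 1.0
for obs in [1.2, 0.9, 1.1, 1.0, 0.95]:   # noisy observations of a value near 1.0
    estimate, variance = kalman_step(estimate, variance, obs)
    print(round(estimate, 3), round(variance, 3))
```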

It’s clear (I hope, and if not, I’ll make an argument as to why) that a mobile organism gains something by having some kind of model (however rudimentary) of its external environment.  In “higher” organisms, that model extends beyond the range of that which is immediately accessible to its senses.  It’s handy to have a rough idea of what is behind one without having to look around to find out.  It’s also handy to know where one lives when one goes for a walk out of sight of one’s home.

Okay, so we need an organism-centric model of the universe, that is, one that references things outside the organism to the organism itself.  But more interestingly, does this model include a model of the organism itself?

Certain models cannot be inborn (or at least the details cannot be).  What starts to be fun is when the things modeled have a mind of their own (so to speak).  It’s not just useful to humans to be able to model animals and other humans (to varying degrees of specificity and with varying degrees of success).  It would seem to be useful to lots of animals to be able to model animals and other conspecifics.

What is the intersection of “modeling” with “learning” and “meaning”?  How does “learning” (a sort of mental sum of experience) interact with ongoing sensations?  “Learning” takes place with respect to sensible (that is capable of being sensed) events involving the organism, including things going on inside the organism that are sensible.  Without letting the concept get out of hand, I have said in other contexts that humans are voracious pattern-extractors.  “Pattern” in this context means a model of how things work.  That is, once a pattern is “identified” (established, learned), it tends to assert its conclusions.

This is not quite correct.  I seem to be using “pattern” in several different ways.  Let’s take it apart.  The kicker in just about every analysis of “self” and “consciousness” is the internal state of the organism.  Any analysis that fails to take into account the internal state of the organism at the time a stimulus is presented is not, in general, going to do well in predicting the organism’s response.  At the same time, I am perfectly willing to assert that the organism’s response—any organism’s response—is uniquely determined by the stimulus (broadly construed) and the organism’s state (also broadly construed).  Uniquely determined.  Goodbye free will.  [For the time being, I am going to leave it to philosophers to ponder the implications of this fact.  I am sorry to say that I don’t have a lot of faith that many of them will get them right, but some will.  This is just one of many red herrings that make it difficult to think about “self” and “consciousness”.]

Anyway, when I think about the process, I think of waves of data washing over and into the sensorium (a wonderfully content-free word).  In the sensorium are lots of brain elements (I’m not restricting this to neurons because there are at least ten times as many glia listening in and adding or subtracting their two cents) that have been immersed in this stream of information since they became active.  They have “seen” a lot of things.  There have been spatio-temporal-modal patterns in the stream, and post hoc ergo propter hoc many of these patterns have been “grooved”.  So, when data in the stream exhibit characteristics approximating some portion of a “grooved” pattern, other brain elements in the groove are activated to some extent, the extent depending on all sorts of things, like the “depth” of the “groove”, the “extent” of the match, etc.

In order to think about this more easily, remember that the sensorium does not work on just a single instantaneous set of data.  It takes some time for data to travel from neural element to neural element.  Data from “right now” enter the sensorium and begin their travel “right now”, hot on the heels of data from just before “right now”, and cool on the heels of data from a bit before “right now” and so on.  Who knows how long data that are already in the sensorium “right now” have been there.  [The question is, of course, rhetorical.  All the data that ever came into the sensorium are still there to the extent that they caused alterations in the characteristics of the neural elements there.  Presumably, they are not there in their original form, and more of some are there than of others.]  The point is that the sensorium “naturally” turns sequential data streams into simultaneous data snapshots.  In effect, the sensorium deals with pictures of history.
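
A toy sketch of how travel time turns a sequence into a snapshot: a chain of stages, each holding what arrived one step earlier.  The number of stages is arbitrary, and the chain is my own cartoon, not a model of any particular neural circuit.

```python
# At any instant the chain as a whole is a simultaneous picture of recent history.

from collections import deque

def run_delay_line(stream, stages=4):
    line = deque([None] * stages, maxlen=stages)
    snapshots = []
    for item in stream:
        line.appendleft(item)            # new data enter "right now"
        snapshots.append(list(line))     # the simultaneous picture of history
    return snapshots

for snap in run_delay_line(["a", "b", "c", "d", "e"]):
    print(snap)
# ['a', None, None, None]
# ['b', 'a', None, None]
# ...
# ['e', 'd', 'c', 'b']   <- a sequence, available all at once
```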

Now back to patterns.  A pattern may thus be static (as we commonly think of a pattern), and at the same time represent a temporal sequence.  In that sense, a pattern is a model of how things have happened in the past.  Now note that in this massively parallel sensorium, there is every reason to believe that at any instant many, many patterns have been or are being activated to a greater or lesser extent and the superposition (I don’t know what else to call it) of these patterns gives rise to behavior in the following way.

Some patterns are effector patterns.  They are activated (“primed” is another term used here, meaning activated somewhat, but not enough to be “triggered”) by internal homeostatic requirements.  I’m not sure I am willing to state unequivocally that I believe all patterns have an effector component, but I’m at least willing to consider it.  Maybe not.  Maybe what I think is that data flows from sensors to effectors and the patterns I am referring to shape and redirect the data (which are ultimately brain element activity) into orders that are sent to effectors.

That’s existence.  That’s life.  I don’t know what in this process gives rise to a sense of self, but I think the description is fundamentally correct.  Maybe the next iteration through the process will provide some clues.  Or the next.  Or the next.

Hunger might act in the following way.  Brain elements determine biochemically and biorhythmically that it’s time to replenish the energy resources.  So data begin to flow associated with the need to replenish the energy resources.  That primes patterns associated with prior success replenishing the energy resources.  A little at first.  Maybe enough so if you see a meal you will eat it.  Not a lot can be hard-wired (built-in) in this process.  Maybe as a baby there’s a mechanism (a built-in pattern) that causes fretting in response to these data.  But basically, what are primed are patterns the organism has learned that ended up with food being consumed.  By adulthood, these patterns extend to patterns as complex as going to the store, buying food, preparing it, and finally consuming it.

This is not to say that the chain of determinism imposes rigid behaviors.  Indeed, what is triggered deterministically is a chain of opportunism.  Speaking of which, I have to go to the store to get fixings for dinner.  Bye.

030615 – (Heterophenomenological) Consciousness

June 15th, 2003

It’s dreary and raining and that may make people a bit depressed.  That, in turn, may make it harder for people to find a satisfactory solution to their problems.   Realizing that, I feel a bit better.  It is sometimes useful to bring something into consciousness so one can look at it.

Although we may not have access to the underlying stimulus events (constellations) that directly determine our feelings, we can learn about ourselves just as we learn about other things and other people.  We can then shine the spotlight of consciousness on our inner state and try to glean what clues we can by careful attention.

When I say we can learn about ourselves, that is to say that we can create an internal model of ourselves and use the predictions of that model to feed back into our decision-making process.  Such feedback has the result of modifying our behavior (as a feedback system does).

The interesting thing about the internal model is that it not only models external behavior, but also models internal state.

Interesting aside: consciousness can be switched on and off.  We can be awake or asleep.  We can be “unconscious”.

What are the design criteria for human beings such that consciousness is an appropriate engineering solution?

Goals:

  • Exist in world.
  • Basic provisioning.  Homeostasis. Obtain fuel.
  • Reproduction.  Mate.  Ensure survival of offspring.

Capabilities Required to Attain Goals:

  • Locomotion.
  • Navigation.
  • Manipulation.

Functions Required to Implement Required Capabilities

  • Identification of things relevant to implementation of goals.
  • Acquisition of skills relevant to implementation of goals (note that skills may be physical or cognitive).

Capabilities Required to Support Required Functions

  • Observation.  Primary exterosensors.
  • Memory.
  • Ability to manipulate things mentally (saves energy).  This includes the ability to manipulate the self mentally.
  • Ability to reduce power consumption during times when it is diseconomic to be active (e.g., sleep at night).

Damasio (1999, p.260) says:

“Homeostatic regulation, which includes emotion, requires periods of wakefulness (for energy gathering); periods of sleep (presumably for restoration of depleted chemicals necessary for neuronal activity); attention (for proper interaction with the environment); and consciousness (so that a high level of planning or responses concerned with the individual organism can eventually take place). The body-relatedness of all these functions and the anatomical intimacy of the nuclei subserving them are quite apparent.”

Well, I have an alternative theory of the utility of sleep, but Damasio’s is certainly plausible and has been around for a while in the form of the “cleanup” hypothesis: that there is something that is generated or exhausted over a period of wakefulness that needs to be cleaned up or replenished and sleep is when that gets done.  It raises the question of whether sleep is an essential part of consciousness and self-awareness or whether it is a consequence of the physical characteristics of the equipment in which consciousness and self-awareness are implemented.

One talks to oneself by inhibiting (or is it failing to activate) the effectors that would turn ready-to-speak utterances into actual utterances.  In talking to oneself, ready-to-speak utterances are fed back into the speech understanding system.  This is only a slight variation of the process of careful (e.g., public speaking) speech or the process used in writing.  In writing, the speech utterance effectors are not activated and the ready-to-speak stuff is fed into the writing system.

But does it always pass through the speech understanding system?  IOW is it possible to speak without knowing what you are going to say?  Possibly.  Specific evidence: on occasion one thinks one has said one thing and has in fact said something else.  Sometimes one catches it oneself.  Sometimes somebody says you said X, don’t you mean Y and you say oh, did I say X, I meant Y.

Nonetheless, I don’t think it’s necessary to talk to oneself to be conscious.  There are times when the internal voice is silent.  OTOH language is the primary i/o system for humans.  One might argue that language enhances consciousness.  As an aside, people who are deaf probably have an internal “voice” that “talks” to them.  Does talking to yourself help you to work things out?  Does the “voice” “speak” in unexpressed signs?  When a deaf person does something dumb, does he/she sign “dumb” to him/herself?

Is there something in the way pattern matching takes place that is critical to the emergence of consciousness?  The more I think about consciousness, the less certain I am that I know what I am talking about.  I don’t think that is bad.  It means that I am recognizing facets of the concept that I had not recognized before.  That seems to be what happened to Dennett and to Damasio.  They each had to invent terminology to express differences they had discovered.

Ultimately, we need an operational definition of whatever it is that I’m talking about here.  That is the case because at the level I am trying to construct a theory, there is no such thing as consciousness.  If there were, we’d just be back in the Cartesian theatre.  Is the question: How does it happen that human beings behave as if they have a sense of self?  I’m arriving at Dennett’s heterophenomenology.  (1991, p.96) “You are not authoritative about what is happening in you, but only about what seems to be happening in you….”

To approach the question of how heterophenomenological consciousness emerges, it is essential to think “massively parallel”.  What is the calculus of the brain?  A + B = ?  A & B?  A | B?  A followed by B?  Thinking massive parallelism, the answer could be: All of the above.  It must be the case that serial inputs are cumulatively deserialized.  There’s an ongoing accumulation of history at successively higher levels of abstraction (well, that’s one story, or one way of putting it).  Understanding language seems to work by a process of successive refinement.  Instinctively it’s like A & B in a Venn diagram, but that feels too sharp.

The system doesn’t take “red cow” to mean the intersection of red things with cow things.  The modifier adds specificity to an otherwise unspecified (default) attribute.  So the combination of activation of “red” and the activation of “cow” in “red cow”  leads to a new constellation of activation which is itself available for further modification (generalization or restriction or whatever).  This probably goes on all the time in non-linguistic processing as well.  A pattern that is activated at one point gets modified (refined) as additional information becomes available.  Sounds like a description of the process of perception.
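
A toy version of what I mean by refinement rather than intersection; the frames and their default attributes are made up for illustration:

```python
# "Red cow" as attribute refinement rather than set intersection: the word
# "cow" activates a frame with default attributes, and "red" overrides the
# default value of one otherwise-unspecified slot.

cow = {"kind": "cow", "color": "brown", "size": "large", "sound": "moo"}
red = {"color": "red"}

def refine(frame, modifier):
    """Return a new constellation: the base frame with defaults overridden."""
    return {**frame, **modifier}

red_cow = refine(cow, red)
print(red_cow)
# {'kind': 'cow', 'color': 'red', 'size': 'large', 'sound': 'moo'}
# ...and the result is itself available for further refinement:
print(refine(red_cow, {"size": "small"}))
```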

Massively parallel, always evolving.  It doesn’t help to start an analysis when the organism wakes up, because the wake-up state is derived from (is an evolution of) the organism’s previous life.  Learning seems to be closely tied to consciousness.  Is it the case that the “degree” of consciousness of an organism is a function of the “amount of learning” previously accumulated by the organism?

We know how to design an entity that responds to its environment.  An example is called a PC (Personal Computer).

There’s learning (accumulation of information) and there’s self-programming (modification of processing algorithms).  Are these distinguishable in “higher” biological entities?  Does learning in, say, mammals necessarily involve self-programming?  Is a distinction between learning and self-programming just a conceptual convenience for dealing with von Neumann computers?

There’s “association” and there’s “analysis”.

There is learning and there’s self-programming.  Lots of things happen automatically.  Association and analysis.  Segmentation is important: chunking is a common mechanism.  Chunking is a way of parallelizing the processing of serial inputs.  Outputs of parallel processors may move along as chunks.  Given that there’s no Cartesian observer, every input is being processed for its output consequences.  And every input is being shadow processed to model its consequences and the model consequences are fed back or fed along.  Associations are also fed back or fed along.  In effect there is an ongoing assessment of what Don Norman called “affordances”, e.g., what can be done in the current context?  The model projects alternate futures.  The alternate futures coexist with the current inputs.  The alternate futures are tagged with valences.  Are these Dennett’s “multiple drafts”?  I still don’t like his terminology.  Are the alternate futures available to consciousness?  Clearly sometimes.  What does that mean?  It is certainly possible for a system to do load balancing and prioritization if there is additional processing power available or if processing power can be reassigned to a particular problem.  Somehow, I don’t think it works that way.  Maybe some analyses are dropped or, more likely, details are dropped as a large freight train comes roaring through.  Tracking details isn’t much of a problem because of the constant stream of new inputs coming in.  Lost details are indeed lost, but most of the time, so what?
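
A trivial sketch of chunking as parallelization; the chunk size is arbitrary and the example is mine:

```python
# Chunking: group the incoming serial stream into chunks, and let the chunks
# move along as units that downstream processors can handle in parallel.

def chunk(stream, size=3):
    return [stream[i:i + size] for i in range(0, len(stream), size)]

digits = list("4155551212")
print(chunk(digits))
# [['4', '1', '5'], ['5', '5', '5'], ['1', '2', '1'], ['2']]
# Downstream processing can now work with whole chunks rather than single digits.
```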

Language output requires serialization, as do certain motor skills.  The trick is to string together a series of sayings that are themselves composed of ordered (or at least coordinated) series of sayings.  Coordination is a generalization of serialization because it entails multiple parallel processors.  Certainly, serial behavior is a challenge for a parallel organism, but so are all types of coordinated behavior.  Actions can be overlaid (to a certain extent, for example: walk and chew gum; ride a horse and shoot; drive and talk; etc.)  We can program computers in a way that evolution cannot hardwire organisms.  On the other hand, evolution has made the human organism programmable (and even self-programmable).  Not only that, we are programmable in languages that we learn and we are programmable in perceptual motor skills that we practice and learn.  Is there some (any) reason to think that language is not a perceptual motor skill (possibly writ large)?

Suppose we believe that learning involves modifications of synaptic behavior.  What do we make of the dozen or so neurotransmitters?  Is there a hormonal biasing system that influences which transmitters are most active?  Is that what changes mode beyond just neural activity in homeostatic systems?  Otherwise, does the nature of neuronal responses change depending on the transmitter mix, and can information about that mix be communicated across the synaptic gap?  These are really not questions that need to be answered in order to create a model of consciousness (even though they are interesting questions) but they do serve as a reminder that the system on which consciousness is based is only weakly understood and probably much more complicated even than we think (and we think it’s pretty complicated).

I seem to have an image — well, a paradigm — in mind involving constraints and feature slots, but I don’t quite see how to describe it as an algorithm.  This is a pipelined architecture, but with literally millions of pipelines that interact locally and globally.  The answer to “what’s there?” or “what’s happening?” is not a list, but a coruscating array of facets.  It is not necessary to extract “the meaning” or even “a meaning” to appreciate what is going on.  A lot of the time, nothing is “going on”; things are what they are and are not changing rapidly.

Awareness and attention seem to be part of consciousness.  One can be aware of something and not pay attention to it.  Attention seems central — the ability to select or emphasize certain input (and/or output) streams.  What is “now”?  It seems possible to recirculate the current state of things.  Or just let them pass by.  Problem: possible how?  What “lets” things pass by?  The Cartesian observer is so seductive.  We think we exist and watch our own private movie, but it cannot happen that way.  What is it that creates the impression of “me”?  Yes, it’s all stimulus-response, but the hyphen is where all the state information is stored.  What might give the impression of “me”?  I keep thinking it has something to do with the Watzlawick et al. [(1969?) The Pragmatics of Human Communication] idea of multiple models.  This is the way I see you.  This is the way I see you seeing me.  This is the way I see you seeing me seeing you.  And then nothing.  Embedding works easily once: “The girl the squirrel bit cried.”  But “The girl the squirrel the boy saw bit cried” is pathological.

As a practical matter, if we want to create an artificial mind, we probably want to have some sort of analog to the homunculus map in order to avoid the problem of having to infer absolutely everything from experience.  That is, being able to refer stimuli to an organism-centric and gravity-aware coordinate system goes a long way towards establishing a lot of basic concepts: up-down, above-below, top-bottom, towards-away, left-right, front-back.  Add an organism/world boundary and you get inside-outside.  I see that towards-away actually cheats in that it implies motion.  Not a problem, because motion is change of position over time and with multiple temporal snapshots (naturally produced as responses to stimuli propagate through neural fields), motion can be pretty easily identified.  So that gets things like fast-slow, into-out of, before-after.  We can even get to “around” once the organism has a finite extent to get around.
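
As a sketch of how much falls out of an organism-centric, gravity-aware frame plus a pair of temporal snapshots; the coordinates, units, and thresholds are invented:

```python
# Body-centric basics: given two egocentric positions of a thing
# (x to the right, y ahead, z up), several of the concepts above fall out of
# simple comparisons between successive snapshots.

def describe(previous, current, dt=1.0):
    px, py, pz = previous
    cx, cy, cz = current
    facts = []
    facts.append("right" if cx > 0 else "left")
    facts.append("front" if cy > 0 else "back")
    facts.append("above" if cz > 0 else "below")
    dist_prev = (px**2 + py**2 + pz**2) ** 0.5
    dist_curr = (cx**2 + cy**2 + cz**2) ** 0.5
    facts.append("towards" if dist_curr < dist_prev else "away")
    speed = abs(dist_curr - dist_prev) / dt
    facts.append("fast" if speed > 1.0 else "slow")
    return facts

print(describe(previous=(2.0, 5.0, 0.5), current=(1.5, 3.0, 0.5)))
# ['right', 'front', 'above', 'towards', 'fast']
```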

What would we expect of an artificial mind?  We would like its heterophenomenology to be recognizably human.  What does that mean?  Consider the Turing test.  Much is made of the fact that certain programs have fooled human examiners over some period of time.  Is it then the case that the Turing test is somehow inadequate in principle?  Probably not.  At least I’m not convinced yet that it’s not adequate.  I think the problem may be that we are in the process of learning what aspects of human behavior can be (relatively) easily simulated.  People have believed that it is easy to detect machines by attempting to engage them in conversation about abstract things.  But it seems that things like learning and visualization are essential to the human mind.  Has anyone tried things like: imagine a capital A.  Now, in your imagination, remove the horizontal stroke and turn the resulting shape upside down.  What letter does it look like?

Learning still remains an intransigent problem.  We don’t know how it takes place.  Recall is equally dicey.  We really don’t seem to know any more about learning skills than we do about learning information.  We’re not even very clear about memorizing nonsense syllables, for all the thousands of psychological experiments involving them.  Is learning essential to mind?  Well, maybe not.  Henry can’t learn any conscious facts, and he clearly has a mind (no one I know of has suggested otherwise).  Okay, so there could be a steady state of learning.  The ability to learn facts of the kind Henry Molaison couldn’t learn isn’t necessary for a mind to exist.  We don’t know whether the capacity for perceptual motor learning is necessary for a mind to exist.  Does a baby have a mind?  Is this a sensical question?  If not, when does it get one?  If so, when did it develop?  How?

It begins to feel like the problem is to figure out what the question should be.  “Consciousness” seems not to be enough.  “Mind” seems ill-defined.  “Self awareness” has some appeal, though I struggle to pin down what it denotes: clearly “awareness” of one’s “self”, but then what’s a “self” and what does “awareness” mean?  Surely self-awareness means 1) there is something that is “aware” (whatever “aware” means), 2) that thing has a “self” (whatever “self” means), and 3) that thing can be and is “aware” of its “self”.  A person could go crazy.

Is this a linguistic question — or rather a meta-linguistic question: what does “I” mean?  What is “me”?  In languages that distinguish a “first person” it would appear that these questions can be asked.  And by the way, what difference does it make if the language doesn’t have appropriate pronouns and resorts to things like “this miserable wretch begs forgiveness”?  Who’s doing the begging?  No.  That’s not the question.  What’s doing the begging?  Heterophenomenologically, it doesn’t matter if I say it referring to myself or referring to another person.  Except that it has for me a special meaning when it refers to “my self” and that special meaning is appreciated, that is, understood, by others hearing “me” say it.

I don’t know anything about children learning what “I” and “me” refer to.  I remember reading something about a child (autistic, I think) who referred to himself in the third person, for example: “he’s thirsty.”

Consciousness seems to require inputs.  That is, one cannot just “be conscious” rather one must “be conscious of” things.  That sounds a bit forced, but not if it is precisely the inputs that give rise to consciousness.  No inputs, no consciousness.  Something in the processing of inputs gives rise to the heterophenomenological feeling of being conscious.

Does self-awareness have to do with internal models?  Does the organism have an internal model of the universe in which it exists?  Does that model include, among the entities modeled, the organism itself?  And is it necessary that the model of the organism include a model of the internal model of the universe and its component model of the organism?  It may not be an infinite series.  In fact it can’t be.  The brain (or any physical computer) has finite capacity.

But doesn’t a model imply someone or something that makes use of the model?  We keep coming back to metaphors that encourage the Cartesian fallacy.

Let’s think computer systems design.  Hell, let’s go all the way, let’s think robot design.  The robot exists in a universe.  The robot’s program receives inputs from its exteroceptors about the state of the universe and its inputs, suitably processed, are abstracted into a set of signals representing the inputs — in fact representing the inputs over a period of time.  The same thing is happening with samples representing the interoceptors monitoring the robot’s internal mechanical state: position of limbs, orientation, inertial state (falling, turning, whatever), battery/power level, structural integrity.

On the goals side, based on the internal state, the robot has not action triggers but propensity triggers.  For example: when the internal power level or the internal power reserves fall below a particular threshold, the goal of increasing power reserves is given increased priority.  But we do not assume that the robot has a program that specifies exactly what to do in this state.  The state should trigger increased salience (whatever that means) and attention to things in the current environment that are (or have been in the learned past) associated with successful replenishing of power reserves.
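
A toy version of a propensity trigger, as opposed to an action trigger; the threshold, weights, and associations are made up:

```python
# When reserves fall below a threshold, nothing specific is commanded; instead,
# the salience of anything associated with past refueling goes up.

LOW_POWER_THRESHOLD = 0.3

def salience(percepts, power_level, refuel_associations):
    weights = {p: 1.0 for p in percepts}
    if power_level < LOW_POWER_THRESHOLD:
        for p in percepts:
            if p in refuel_associations:
                weights[p] *= 3.0      # these things now stand out
    return weights

percepts = ["charging station", "doorway", "window"]
print(salience(percepts, power_level=0.2,
               refuel_associations={"charging station"}))
# {'charging station': 3.0, 'doorway': 1.0, 'window': 1.0}
```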

At all times, the important question is: “what do I do now?”  The answer to this question helps determine what needs “attention” and what doesn’t need “attention”.  As a first approximation, things not “associated” with current priority goals are not attended to.  Well, it’s not quite as simple as that.  Things that don’t need attention, even though associated with an ongoing task (like walking or driving), don’t get attention processing.  Attention is the assignment of additional processing power to something.  Additional processing power can boost the signal level to above the consciousness threshold and can reduce the decay rate of attended signals.

No one has succeeded in explaining why heterophenomenological evidence indicates that people feel “conscious” sometimes and when they don’t feel “conscious” they don’t “feel” anything and they shift back and forth.  It’s a processing thing.  If I close my eyes and lie quietly, I’m not asleep.  I still hear things.  I can still think about things.  So consciousness can be turned on and off in the normal organism.  What’s going on here?  Understanding the neural connections won’t do it.  We would need to know what the connections “do”; how they “work”.

Sleep.  In effect, the organism can “power down” into a standby state (for whatever evolutionary reason).  If the threshold for external events is set high, most of them won’t make an impact (have an effect).  It’s like a stabilized image on the retina.  It disappears — well, it fades.  No change equals no signal.  If there’s nothing to react to, the organism, well, doesn’t react.

If outside inputs are suppressed, where do daydream inputs come from?  Not a critical question, but an interesting one.  Somebody pointed out that so-called “dream paralysis” is a good thing in that it keeps us from harming ourselves or others in reaction to dream threats or situations.

030604 – Wants (more)

June 4th, 2003

Could it be that the fundamental nature of wanting is IRMs (innate releasing mechanisms) and FAPs (fixed action patterns)?  Certainly IRMs and FAPs have a long and honorable evolutionary history.  There is certainly reason to say that lower animals are a soup of IRMs and FAPs.  Why not higher animals, too?  If I don’t know what I want until I see what I do, is that just a way of saying that I don’t have direct access to my IRMs?  Or is that just silly?

And what does it make sense for evolution to select as generic wants to be activated when there’s nothing pressing?  How about something like

–    Learn something new
–    Acquire a new skill (What’s a skill?  A complex perceptual motor pattern?)
–    Practice an acquired skill
–    Think about something interesting (What’s interesting?)
–    Stimulate yourself
–    Play with the external world (What’s play?)

You can’t have a theory of consciousness without including:

–    Wanting (approach)
–    Absence of wanting / indifference
–    Negatively directed wanting / wanting not (avoidance)
–    Learning
–    Skill acquisition (Perceptual / Motor Learning)
–    Imitation (human see, human do)
–    Pleasure / Satisfaction
–    Pain / Frustration
–    Salience / Interest
–    Metaphor

[Is this my own rediscovery of what Jerry Fodor (and presumably many others) call propositional attitudes?  Some of the items are, but others are not.]

If you stick out your tongue at a baby, from a very early age, the baby will imitate the action.  But the baby can’t see its tongue, so how does it know what to do?  It’s a visual stimulus, but the mirroring is not visual.  Now, it’s possible that a baby can see its tongue, if it sticks it out far enough, but unless the baby has spent time in front of a mirror, there’s no reason to believe the baby has ever seen its own face head-on (as it were).

Children want to do what they see their older siblings doing.  It seems to be innate.  It would seem to be rather peculiar to argue that children learn to want to imitate.  But how does a child (or anybody, for that matter) decide what it wants to imitate now?  There’s “What do I do now?”  “Imitate.”  And “What do I want to imitate?”

A “high performance skill” (Schneider 1985): more than 100 hours of specialist training required; substantial numbers of trainees fail to acquire proficiency; performance of adepts is qualitatively different (whatever that means) from that of non-adepts.  There are lots of examples of high performance skills.  People spend lots of time practicing sports, learning to work machinery, etc.  Why?  Improving a skill (developing a skill and further developing it) is satisfying.  Does general knowledge count as a skill?  Can we lump book learning with horsemanship?

What about Henry Molaison, whose perceptual motor skills improved even though he did not consciously recognize the testing apparatus?  Not really a problem.  There’s a sense in which the development of perceptual motor skills is precisely intended to create motor programs that don’t require problem solving on-the-fly.  Ha!  We can create our own FAPs!  [This is like blindsight.  Things that do not present themselves to the conscious-reporting system (e.g., Oh, yeah, I know how to do this pursuit rotor thing.) are available to be triggered as a consequence of consciously reportable intentions and states of mind (e.g., I’m doing this pursuit rotor thing.).  So part of what we learn to do consciously is learned and stored in non-reportable form (cf. Larry Squire’s papers on the topic).  But in the case of blindsight, some trace of detectability is present.]

But if we can create our own FAPs, we must also create our own IRMs.  That means we have to create structures (patterns) that stretch from perceptions to behaviors.  Presumably, they are all specializations.  We create shortcuts.  If shortcuts are faster (literally) then they will happen first.  In other words, the better you get at dealing with a particular pattern, the more likely that pattern will be able to get to the effectors (or to the next stage of processing) first.   Is that what lateral inhibition does?  It gives the shortcut enough precedence to keep interference from messing things up.  In other words, lateral inhibition helps resolve race conditions.  [“Race conditions” reminds me that synchronous firing in the nervous system proceeds faster than anything else.]
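
A toy sketch of lateral inhibition settling a race; the inhibition constant, number of rounds, and starting activations are arbitrary:

```python
# Lateral inhibition as a race-settler: each candidate pattern suppresses its
# competitors in proportion to its own activation, so a small head start
# becomes a decisive lead.

def compete(activations, inhibition=0.3, rounds=10):
    acts = dict(activations)
    for _ in range(rounds):
        total = sum(acts.values())
        acts = {k: max(0.0, v - inhibition * (total - v))
                for k, v in acts.items()}
    return {k: round(v, 3) for k, v in acts.items()}

# "refining" starts with a slight edge over "revising"...
print(compete({"refining": 1.05, "revising": 1.0}))
# ...and ends with all the remaining activation; "revising" is driven to zero.
```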

Consciousness (whatever that means, still) is a tool for learning or for dealing with competing IRM/FAPs.  What do I mean “dealing with”?  Selecting among them, strengthening them or weakening them, refining them.  (There.  I got “revising”, which was close, but not quite correct.  I typed it and then I got “refining”, which was le mot juste (and it varies only in two consonants: /f/ for /v/, which is only unvoiced for voiced, and /s/ for /n/, which have no connection as far as I can tell).)  [Find research on tip-of-the-tongue (TOT) phenomena.]

TOT: “partial activation” model v. “interference” model.  It seems to me that these are the same thing in my model of shortcuts and races.

The problem of observational learning: assuming that human infants are primed to learn from observation (or is it that they are primed to imitate actions they perceive, particularly humanish actions?).  Suppose moreover that humans have a way of segmenting perceptions and associating the segments.  Be real careful here: Marr suggests that visual inputs get taken apart and pieces processed hither, thither, and yon.  They never need to get put together because there’s no Cartesian observer.  So associations between percepts and imitative action patterns are spread out (multi-dimensional, if you will) without the need to segment the patterns any more than they are naturally.

As Oliphant (1998? submitted to Cognitive Behavior, p.15) says, “Perhaps it is an inability to constrain the possible space of meanings that prevents animals from using learned systems of communication, even systems that are no more complicated than existing innate signaling systems.”

Oliphant also says (1998? submitted to Cognitive Behavior, p.15), “When children learn words, they seem to simplify the task of deciding what a word denotes through knowledge of the existence of taxonomic categories (Markman, 1989), awareness of pragmatic context (Tomasello, 1995), and reading the intent of the speaker (Bloom, 1997).”  [Are some or all of these consequences of the development of attractor basins?  Is part of the developmental / maturational process the refinement of the boundaries of attractor basins?  Surely.]

It begins to feel as if imitation is key.  Is the IRM human-see and the FAP human-do?  Refinement is also the name of the game: patterns (input and output) can be refined with shortcuts.  There are innate groundings.  The innate groundings are most likely body-centric, but then again, imitation has an external stimulus: the behavior to imitate.

I’ve been finding lots of AI articles about cognitive models that use neural networks.  Granting that they are by nature schematic oversimplifications, there is one thing that seems to characterize all of them, and it’s something that has bothered me about neural networks all along: they assume grandmother-detectors.  That is, they have a set of input nodes that fire if and only if a particular stimulus occurs.  The outputs are similarly specific: each output node fires to signal a specific response.  Of course, this is pretty much a description of the IRM / FAP paradigm and, following Oliphant (1998?), the interesting problems seem to be happening in the system before and after this kind of model.

There are two easy ways of initializing a neural network simulation: set all weights to zero or set the weights to random values.  But assuming that what goes on in the brain bears at least some resemblance to what goes on in a neural network simulation, it seems clear that evolution guarantees that neither of these initialization strategies is used ontogenetically.  Setting all connection strengths to zero gives you a vegetable, and setting connection strengths randomly gives you a mess.  Surely evolution has found a better starting point.  [Cf. research on ontogenetic self-organization.]
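
The three starting points, sketched; the shapes, scales, and the “innate pattern” are all arbitrary illustration:

```python
# All-zero units are indistinguishable (the "vegetable"); purely random weights
# start as noise (the "mess"); the third option seeds the weights with some
# built-in structure and adds only a little jitter.

import random

def zero_init(rows, cols):
    return [[0.0] * cols for _ in range(rows)]

def random_init(rows, cols, scale=1.0):
    return [[random.uniform(-scale, scale) for _ in range(cols)]
            for _ in range(rows)]

def structured_init(prior, jitter=0.05):
    """Start from an innate wiring pattern, perturbed only slightly."""
    return [[w + random.uniform(-jitter, jitter) for w in row] for row in prior]

innate_pattern = [[1.0, 0.0, 0.0],   # e.g., a crude built-in mapping
                  [0.0, 1.0, 0.0]]

print(zero_init(2, 3))
print(random_init(2, 3))
print(structured_init(innate_pattern))
```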

One researcher’s baby is another researcher’s bathwater.  Hmmm.  Ain’t thinking grand?

Given that there aren’t grandmother detectors [although there are some experiments that claim Raquel Welch detectors, I think] and that there are not similarly specific effectors, we are back to Lashley’s problem of serial behavior.  What keeps the pandemonium from just thrashing?  I keep coming back to a substrate of plastic (i.e., tunable, mutable, modifiable, subsettable, short-cuttable) IRMs and FAPs.  Babies don’t get “doggie” all at once.  There seems to be a sort of bootstrap process involved.  Babies have to have enough built in to get the process started.  From that point on, it’s successive refinement.

I wrote “invisible figre” then stopped.  My intention had been to write “invisible fingers”.  I had been reading French.   I don’t know for [shure] sure how the ‘n’ got lost, but the “gre” would have been a Frenchified spelling and “figre” would not have had the nasalized consonant that would have (if pronounced in French) produced “fingres”.

All these little sensory and motor homunculi in the cortex—maybe what they are telling us is pretty much what Lakoff was saying, namely that our conception of the universe is body-centric.  Makes good sense.  That’s where the external universe impinges upon us and that’s where we impinge on the external universe.  I couldn’t think of a better reference system.

Chalmers (The Conscious Mind, 1996) believes that zombies are logically possible because he can imagine them.  He believes that a reductionist explanation of consciousness is impossible.  It is certainly true that it is a long jump from the physics of the atom to the dynamics of Earth’s atmosphere that give rise to meteorological phenomena, but we don’t for that reason argue that a reductionist explanation is impossible.  Yes, it’s a hard problem, but it requires poking one hell of a big hole in our understanding of physics to believe that a scientific explanation is impossible and therefore consciousness must be supernatural.  I don’t think I want to read his book now.  I feel it will be like reading a religious tract arguing that evolution is impossible.  As my Spanish literature professor Juan Marichal once observed, apropos of a book written by a Mexican author who had conceived a virulent hatred for Cortez (from a vantage point 400 years after the conquest of Mexico), it is possible to learn something even from works written by people who have peculiar axes to grind.  So maybe sometime I’ll revisit Chalmers, but not now.

Antonio Damasio (1999, The Feeling of What Happens: Body and Emotion in the Making of Consciousness.)  The trouble with neural nets is often that they have no memory other than the connection weights acquired during training.  A new set of data erases or modifies the existing weights rather than taking into account what had been learned thus far.  Learning from experience means that there is some record of past experience to learn from.  Of course, that may just be the answer: memory systems serve to counterbalance the tendency to oscillate or to go with the latest fad.  If a new pattern has some association with what has gone before, then what has gone before will shape the way in which the new pattern is incorporated.  If there is a long-term record of an old pattern, it will still be available at some processing stage even if the new pattern becomes dominant at some other processing stage.  So, it may not be necessary to solve in a single stage of processing the problem of new data causing forgetfulness.
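
A toy version of the counterbalancing idea; the two-store arrangement, the rates, and the blending are my own invention, not Damasio’s:

```python
# A fast store chases the latest pattern, a slow store keeps a long-term
# record, and the usable memory can draw on both.

def learn(patterns, fast_rate=0.9, slow_rate=0.05):
    fast, slow = 0.0, 0.0
    for p in patterns:
        fast += fast_rate * (p - fast)   # quickly dominated by new data
        slow += slow_rate * (p - slow)   # still shaped by everything so far
    return round(fast, 3), round(slow, 3)

# Long exposure to pattern "1", then a burst of pattern "0":
fast, slow = learn([1] * 40 + [0] * 3)
print(fast, slow)   # roughly 0.001 0.747 -- fast has all but forgotten;
                    # slow still carries the old pattern
```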

Learning has to be going on at multiple levels simultaneously.  Alternatively, there are nested (layered? as in cortical layers) structures that feed information forward, so some structures learn from direct inputs and subsequent structures learn from the outputs of the structures that get direct inputs and so on.

Antonio Damasio (1999) has given me the idea that will, I think, account for wanting.  Homeostasis.  The argument goes like this.  In unicellular organisms, homeostasis doesn’t have a lot of ways to operate.  When an organism becomes mobile, homeostatic processes can trigger behaviors that with better than chance probability (from an evolutionary standpoint) result in internal state changes that serve to maintain homeostasis.  In effect, evolution favors behaviors that can be triggered to achieve homeostatic goals.

In complex organisms, there are homeostatic mechanisms that work on the internal environment directly, but there are some internal environment changes for which it is not possible to compensate adequately by modifying the internal environment directly.  Thence, hunger.  Hunger is how we experience the process that is initiated when homeostatic mechanisms detect an insufficiency of fuel.  (Actually, it’s probably more sophisticated than that—more like detection of a condition in which the reserve of fuel drops below a particular threshold—and maybe there are multiple thresholds, but the broad outline is clear.)

All organisms have phylogenetically established (built-in) boot processes for incorporating food.  In mammals, there is a rooting reflex and a suckle reflex.  Chewing (which starts out as gumming, but who’s worrying?) and swallowing are built-ins as well.  But those only help when food is presented.  Problem: how to get food to be presented?  Well, if food is presented before hunger sets in, it’s not a homeostatic problem.  If not, homeostatic mechanisms switch the organism into “need-fuel mode”.  In “need-fuel” mode, organisms do things that tend to increase the likelihood that fuel will become available.  Babies fuss, and even cry, sometimes lots and loudly.

Pain is another place where internal homeostatic processes intersect with the external universe.  Pain is how we experience the process that is initiated when homeostatic sensors detect damage, or deviations from internal stability, arising from a physical process (heat, cold, puncture, etc.).  Again, evolution has sophisticated the process somewhat.  The pain process arises when a threshold condition is passed.  Pain does not wait for serious damage to take place; pain is triggered when it’s time to take action to prevent serious damage.

Pain actually has to be a bit subtle, too.  Some pain may and should be ignored.  If fight is an alternative to flight, then fight arguably ups the threshold for debilitating pain.

There are other obvious situations in which homeostatic considerations require some action with respect to the outside world.  Urination and defecation are two.  Similarly, vomiting (with its warning homeostatic signal, nausea).

Our wanting, then, has its origin as the experience of a process that responds to some (serious or prospectively serious) homeostatic imbalance.

As an aside, I want to propose that one of the characteristics that distinguishes reptiles from mammals is that when a reptile is in reasonable homeostatic equilibrium, it does nothing.  When a mammal is in the same state, it does something—explores its environment, plays, writes poetry, etc.  In the most general terms, it sets out to learn something.  This characteristic arguably confers at least a marginal advantage to animals that possess it, viz. it is possible that something learned in the absence (at the time) of any pressing need will turn out to be valuable in dealing with future situations in which there will be no opportunity to learn it.  So, the concept of homeostasis has to be broadly construed.

My central point, however, is that ultimately our wants, wishes, desires, dislikes, disgusts, and delights all refer to internal homeostatic processes.  The fact that there are so many distinguishable variants of wanting suggests to me that the many shades of our experience reflect the many kinds of homeostatic processes that have been phylogenetically established in our brains and bodies, each presumably for the most part having proved advantageous over evolutionary time.

030515 – Perceptual(?) oddities

May 15th, 2003

030515 – Perceptual(?) oddities

In 1969, I remarked to Jerry Fodor, who was then my research advisor at M.I.T., that I had recently had an experience in which I glanced at a scene out the window of a bus, glanced away, and then realized that I had seen some word that seemed unlikely to have appeared in the scene.  I had no idea where the word had appeared, but I felt confident that I had indeed seen it.  I returned my gaze to the scene, scanning it as systematically as I could, looking for the word.  After perhaps five or more seconds, I finally located it.  It was on a sign on a building with many signs.

What struck me as notable was that I knew I had seen the word, but I had no idea where in the scene it had appeared, whether the word was written in large or small letters, or what color the letters were.  All I had was the experience of having seen the word, but beyond knowing what the word was, apparently nothing.  Fodor's comment to me was that clearly the brain's system for reading does not mark words it reads with their location in space.  (Although note that students often remember where a certain piece of information was located on a page.)

Since then, I have noticed the same phenomenon: reading a word and not knowing where in the visual field it appeared, short of mounting a conscious search.  Is it right to call this perception without awareness, or is it actually awareness without perception?

This morning, while I was listening to the news on the radio, the phone chirruped briefly, as if someone had dialed an incorrect number and almost immediately realized it and hung up.  I noticed that I had no idea whether the sound of the phone had occurred before, during, or after whatever was being said on the radio at the time; that is, my uncertainty about the temporal relationship between the sound of the phone and the sound of the voice on the radio extended over a range of two seconds or more.  This does not worry me.  I can't think of a reason I would need to be able to make a finer distinction (except in a psycho-acoustic experimental paradigm).

There is, of course, a line of psycholinguistic experiments that explore variants of this phenomenon.  The subject listens to a sentence and at some point in the sentence there is a beep or a click.  The subject is then asked to identify where in the sentence the sound occurred.  The perceived temporal position of the sound can be manipulated by systematically varying the grammatical structure of the sentence.

030502 – Lakoff, metaphor and metonymy

May 2nd, 2003

We observe displacement activities in animals.  When an animal is frustrated in an attempt to attain a goal, it sometimes exhibits activity characteristic of an attempt to reach a different goal.  We characterize this as displacement.  Such behavior is sufficiently widespread to conclude that it is at least not detrimental to survival, and an argument may perhaps be made that it is favorable to survival.  Arguably, the majority of human time is spent in such displacement activities.

If we grant a certain universality of displacement, at least among animals we regard as closely related to us on the evolutionary scale (e.g., mammals), can we identify patterns of displacement that are analogous to what Lakoff describes as metaphor and metonymy?  In other words, is the human capacity (facility, propensity) to use these patterns (processes) evident in other animals about which it may be easier to think?  Metaphor is, after all, a redirection of attention from one pattern to another pattern which is in some significant way different from and in some significant way similar to the original pattern.

Lakoff insists on a distinction between metaphor, where one thing stands for another, and metonymy, where a part of one thing stands for the whole.  I guess I see metonymy as a special case of metaphor, and I would change its definition slightly to say that in metonymy a part of one thing is used as a metaphor for the whole.

I guess I think in terms of activation of patterns, where a pattern is like a Prolog [a programming language] predicate: it has slots that may or may not be bound to particular instances of things which may themselves be things or patterns.
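
Roughly what I have in mind, sketched in Python rather than Prolog (the structure and the names are my own, purely illustrative): a pattern has named slots that may be unbound, bound to things, or bound to further patterns:

    # A pattern as a predicate-like frame with slots that may or may not be bound.
    from dataclasses import dataclass, field

    UNBOUND = object()                     # marker for an empty slot

    @dataclass
    class Pattern:
        name: str
        slots: dict = field(default_factory=dict)

        def bind(self, slot, value):
            self.slots[slot] = value       # value may be a thing or another Pattern

        def is_complete(self):
            return all(v is not UNBOUND for v in self.slots.values())

    give = Pattern("give", {"giver": UNBOUND, "gift": UNBOUND, "recipient": UNBOUND})
    give.bind("giver", "Mary")
    give.bind("gift", Pattern("book", {"title": UNBOUND}))   # a nested pattern
    print(give.is_complete())              # False: "recipient" is still open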

But more than that, I see humans as pattern-extraction machines.   It may be that the way we understand our environment is that our perceptions and experiences activate patterns that we know about (that we know how to deal with).  After all, that’s pretty much what the visual system does.  It finds patterns and provides gestalts.  A gestalt is a percept.  It comes to us as a whole (a gestalt), but we can inspect its components.

Indeed, gestalt is the prototypical part / whole pattern.

Short-cuts are an essential part of behavior.  There seems to be a race for the effectors or a race to create an effector plan.  First one to show up with a complete plan or plan fragment wins.  Example: I can write the word 'of' using the 'o' and the 'f' I was taught to use when learning to write in script; but I have also acquired a digraph 'of' glyph that I can write faster.  Most of the time, I write the digraph for the word 'of', but use the two individual glyphs when the letter 'o' is followed by 'f' in any other context.  Similarly, I have variant forms for 's', 't', and 'r'.  Whichever I produce, I rarely, if ever, end up creating a monster glyph that is some amalgam of the two forms or a doubling of the two forms (which one might expect to happen if multiple motor plans ran at once).
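
The race can be caricatured like this (a sketch of my own; the generators and their steps are invented for illustration): several plan builders work in parallel, the first to deliver a complete plan wins outright, and the losers are simply discarded rather than blended into the winner:

    def race(generators):
        # Each generator contributes one step per cycle; the first to finish its
        # plan captures the effectors and the others are abandoned.
        plans = {name: [] for name in generators}
        while True:
            for name, steps in generators.items():
                plans[name].append(steps[len(plans[name])])
                if len(plans[name]) == len(steps):
                    return name, plans[name]

    generators = {
        "digraph glyph": ["write the single 'of' stroke"],
        "letter glyphs": ["write 'o'", "write 'f'"],
    }
    print(race(generators))
    # ('digraph glyph', ["write the single 'of' stroke"]) -- the shorter plan wins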

This was part of Lashley’s serial order in behavior paper, although he did not, to my recollection, characterize the process as a race, but that was the gist of it.  Serving as the staging area for serial behavior plans may be one of the functions of short-term memory that David Marr was alluding to when he said, “I expect that there are several ‘intellectual reflexes’ that operate on items held there about which nothing is yet known and which will eventually be held to be the crucial things about short-term memory.”  (Marr 1982, p.348)

Preconscious processing of ambiguous words and phrases slows down when two alternative interpretations are more balanced in likelihood and the ambiguity is more difficult to resolve.  (Paraphrased from Baars 1994, which cites MacKay 1966, "To End Ambiguous Sentences," as the source.  See also Paula M. Niedenthal 2007, "Embodying Emotion," Science, 316: 1002-1005.)

(Lakoff 1987, p452) “Any adequate psychological account of the learning of, and memory for, the human lexicon will have to take account of the phenomenon of folk etymology—that is, it will have to include an account of why expressions with motivating links are easier to learn and remember than random pairings.”

"'Much in language is a matter of degree.'  This section states most strongly Langacker's conviction that most of the psychological grounding of language uses mechanisms which work by approximation rather than by any type of formal logic.  He believes that the basic nature of categorization is well stated by the prototype model and that most distinctions are based on gradients rather than dichotomies."  (From a review at www.lloydrice.com, retrieved 030502, of Langacker 1987, Foundations of Cognitive Grammar, Vol. I.)