Archive for the ‘modeling’ Category

030828 – Are human beings rational?

Thursday, August 28th, 2003

030828 – Are human beings rational?

My wife asked an interesting question: do I think that human beings are inherently rational? I think the answer is emphatically no. Human beings have the ability to learn procedures. One of the procedures that human beings have discovered, found useful, and passed along culturally is the procedure of logical analysis or logical thinking. The fact that in many cases logic enables us to find good solutions to certain classes of significant problems ensures that logical analysis will be one of the procedures activated as a candidate for execution in a broad range of external circumstances and internal states.

What strikes me is that evolution, by selecting organisms with greater and greater ability to learn and apply procedural patterns, has produced an organism that is capable of learning to simulate serial computations, at least on a limited scale. Certainly it was Dennett who put this idea into my mind, but I do not believe that he arrived at this conclusion by the same path that I did.

This raises an interesting question: what kind of pattern and procedural learning capabilities are required in order to be able to simulate serial computations or, more precisely, to be able to learn and execute a logical thinking pattern? Human beings certainly aren’t much in the way of serial computers. We’re not fast. We’re not computationally adept. We don’t have a lot of dynamic memory. Our push-down stack for recursion seems to be limited to one level. (The fact that we must use the logical thinking pattern to analyze pathological sentences like, “The pearl the squirrel the girl hit bit split,” rather than the (unconscious) language understanding pattern simply underlines this limitation on our capability for recursion.)

So, is human language ability the result of the evolution of ever more sophisticated procedural pattern learning capabilities? Is the driving force behind the evolution of such enhanced procedural pattern learning the advantage obtained by the organisms that best understand their conspecifics? Is this evolution’s de facto recognition that, brawn being equal, better brains confer a reproductive advantage? Now, if better understanding of one’s conspecifics is the goal, language ability may just fall out automatically, because if one has a mechanism that can build a model of others, it becomes a lot easier to figure out what the other intends or is responding to.

Clearly, since the ability to take the viewpoint of another person does not manifest itself in children until some time after they have acquired at least the rudiments of language, that ability cannot be a prerequisite for acquiring those rudiments. There seems to be a subtle distinction to be made here: when daddy says “hudie” (the Chinese equivalent of “butterfly”) and looks at, or taps, or points to a butterfly or a representation of a butterfly, something has to help the child attend to both the butterfly instance and the sound. That something may be the emerging model of the other. Or maybe it’s the other way around, as I suggested earlier: the trick is for the parent to take advantage of his or her own model of the child in order to intuitively construct, or take advantage of, a situation in which both the butterfly and the sound of the word will be salient to the child.

Still, I keep coming back to the idea that the internal model of the other is somehow crucial, and even more crucial is the idea that the internal model of the other contains the other’s model of others. As I think about it, though, it seems to me that creating an internal pattern, that is to say learning a pattern, based on experience and observation of the behavior of another organism is not a capability that is uniquely human. It would seem to be a valuable ability to have. What seems to be special about the patterns we humans develop of other people is that we attribute to the other a self. An ordinary animal can get a long way without attributing a self (whatever that means) to other creatures with which it interacts.

030723 – Limits of consciousness

Wednesday, July 23rd, 2003

030723 – Limits of consciousness

It is important to note that we’re not conscious of absolutely everything that goes on in our bodies. We’re not conscious of the normal functioning of our lymphatic system. We’re not conscious of the normal functioning of the stomach, the liver, the pancreas, etc. We’re not conscious of changes in the iris of the eye. With respect to these functions, we’re zombies.

We’re not ordinarily conscious of breathing, although we have the ability to take deep breaths or to hold our breaths.  Breathing is sometimes conscious, sometimes not.

I wouldn’t say we’re very good at imagining smells or tastes, but I can’t speak to the abilities of a skilled smeller or taster.  Still, we can recognize specific tastes and smells (new Coca-Cola didn’t taste like “Classic” Coca-Cola and people didn’t need a side-by-side comparison to know that).

I think I vote with Lakoff on the idea that our model of just about anything is ultimately based on our model of our self. Or at least our models ultimately refer “metaphorically” to built-in [well, maybe not built-in, but acquired in the course of post-natal (and possibly some pre-natal) experience] “concepts” relating in some way to perceptual experience, often kinesthetic. It is certainly the case that some of our knowledge is factual, e.g., the Battle of Hastings was fought in 1066. Other knowledge is procedural, I would say “model-based”. Model-based knowledge is of necessity based on metaphor. That is, the behavior of something is referenced mutatis mutandis to the behavior of something else already understood or at least already modeled.

An important model is our internal model of another person. It is not clear to me whether the origin of this model is self-observation or observation of others. Is there an internal model of the self and an internal model of another person? Or are they one and the same, available to be applied equally to oneself or another? Certainly, a key element of our model of another is projection of our own understanding onto the other. Now comes the fun part. By “introspection” it is clear that because I have a model of another person, my model of another person should include a model of that person’s model of yet another person. So from these models, I now have available my own behavior (whether actual or under consideration), my anticipation of the behavior of another, and my anticipation of the other’s understanding of my behavior [and so on, but not infinitely because of (literally) memory limitations].

030721 – Consciousness and (philosophical) zombies

Monday, July 21st, 2003

[Added 040426]

Is consciousness an expert system that can answer questions about the behavior of the organism?  That is, does SHRDLU have all the consciousness there is?  Does consciousness arise from the need to have a better i/o interface?  Maybe the answer to the zombie problem is that there are nothing but zombies, so it’s not a problem.

In effect, everything happens automatically.  The i/o system is available to request clarification if the input is ambiguous and is available to announce the result of the computations as an output report.

030721 – Consciousness and zombies

The reason the zombie problem and the Chinese room problem are significant is that they are both stand-ins for the physicalism/dualism problem.  That being the case, it seems pointless to continue arguing about zombies and Chinese rooms absent a convincing explanation of how self-awareness can arise in a physical system.  That is the explanation I am looking for.

Ted Honderich (2000) observes that, “Something does go out of existence when I lose consciousness.”  From a systems point of view, loss of consciousness entails loss of the ability (the faculty?) to respond to ordinary stimuli and to initiate ordinary activities.  Loss of consciousness is characterized by inactivity and unresponsiveness.  Loss of consciousness is distinguished from death in that certain homeostatic functions necessary to the continued biological existence of the organism, but not generally accessible to consciousness, are preserved.

In sleep, the most commonly occurring loss of consciousness, these ongoing homeostatic functions have the ability to “reanimate” consciousness in response to internal or external stimuli.

Honderich observes that “consciousness can be both effect and cause of physical things.”  This is consistent with my sense that consciousness is an emergent property of the continuous flow of stimuli into the organism and the equally continuous flow of behaviors emanating from the organism.  I’m not real happy about “emergent property”, but it’s the best I can do at the moment.

Honderich identifies three kinds of consciousness: perceptual consciousness, which “contains only what we have without inference;” reflective consciousness, which “roughly speaking is thinking without perceiving;” and affective consciousness, “which has to do with desire, emotion and so on.”

Aaron Sloman (“The Evolution of What?”  1998) notes that in performing a systems analysis of consciousness, we need to consider “what sorts of information the system has access to…, how it has access to this information (e.g., via some sort of inference, or via something more like sensory perception), [and] in what form it has the information (e.g., in linguistic form or pictorial form or diagrammatic form or something else).”

Sloman also identifies a problem that I had independently identified, one that makes it impossible, in the general case, for one to predict what one will do in any given situation.  “In any system, no matter how sophisticated, self-monitoring will always be limited by the available access mechanisms and the information structures used to record the results.  The only alternative to limited self-monitoring is an infinite explosion of monitoring of monitoring of monitoring…  A corollary of limited self-monitoring is that whatever an agent believes about itself on the basis only of introspection is likely to be incomplete or possibly even wrong.”

Sloman (and others), in discussing what I would call levels of models or types of models, identifies “a reactive layer, a deliberative layer, and a meta-management (or self-monitoring) layer.”

030718 – Self-Reporting

Friday, July 18th, 2003

030718 – Self-Reporting

Is there any advantage to an organism to be able to report its own internal state to another organism?  For that is one of the things that human beings are able to do.  Is there any advantage to an organism to be able to use language internally without actually producing an utterance?

Winograd’s SHRDLU program had the ability to answer questions about what it was doing.  Many expert system programs have the ability to answer questions about the way they reached their conclusions.  In both cases, the ability to answer questions is implemented separately from the part of the program that “does the work” so to speak.  However, in order to be able to answer questions about its own behavior, the question answering portion of the program must have access to the information required to answer the questions.  That is, the expertise required to perform the task is different from the expertise required to answer questions about the performance of the task.

In order to answer questions about a process that has been completed, there must be a record of, or a way to reconstruct, the steps in the process.  Actually, it is not sufficient simply to be able to reconstruct the steps in the process.  At the very least, there must be some record that enables the organism to identify the process to be reconstructed.

Not all questions posed to SHRDLU require memory.  For example, one can ask SHRDLU, “What is on the red block?”  To answer a question like this, SHRDLU need only observe the current state of its universe and report the requested information.  However, to answer a question like, “Why did you remove the pyramid from the red block?”  SHRDLU must examine the record of its recent actions and the “motivations” for its recent actions to come up with an answer such as, “In order to make room for the blue cylinder.”

Not all questions that require memory require information about motivation as, for example, “When was the blue cylinder placed on the red cube?”

Is SHRDLU self-aware?  I don’t think anyone would say so.  Is an expert system that can answer questions about its reasoning self-aware?  I don’t think anyone would say so.  Still, the fact remains that it is possible to perform a task without being able to answer questions about the way the task was performed.  Answering questions is an entirely different task.
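To make the separation concrete, here is a minimal sketch (in Python, and emphatically not SHRDLU itself) of the arrangement described above: the module that does the work records each action together with its motivation, and a separate question-answering routine consults that record rather than the world. All of the names here (ActionRecord, BlocksWorld, why) are hypothetical.

```python
# A minimal sketch, not SHRDLU: acting and answering questions about acting
# are separate tasks, linked only by a record of actions and motivations.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ActionRecord:
    action: str            # e.g. "remove pyramid from red block"
    reason: Optional[str]  # the goal this action served, if any


class BlocksWorld:
    def __init__(self):
        self.history = []  # a list of ActionRecord entries

    def act(self, action: str, reason: Optional[str] = None) -> None:
        # Doing the work and recording it are distinct steps; only the
        # record makes later "why" questions answerable.
        self.history.append(ActionRecord(action, reason))

    def why(self, action_fragment: str) -> str:
        # Question answering operates on the record, not on the world.
        for rec in reversed(self.history):
            if action_fragment in rec.action:
                return rec.reason or "No recorded motivation."
        return "No such action on record."


world = BlocksWorld()
world.act("remove pyramid from red block",
          reason="to make room for the blue cylinder")
world.act("place blue cylinder on red block")
print(world.why("remove pyramid"))  # -> to make room for the blue cylinder
```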

030716

Wednesday, July 16th, 2003

030716

Babies are born with reflexes (IRM-FAPs: innate releasing mechanisms paired with fixed action patterns).  I wonder if the corresponding models mirror the reflexes.  It’s certainly a better place to start than a) all connection weights set to zero or b) connection weights set to random values.

How do babies do imitation?  How does the organism make the connection between what is seen and its own body?  Is the basic rule for babies: imitate unless homeostatic needs dictate otherwise?

“No” is an active response.  Absence of “no” seems to indicate “no objection”.

With respect to internal models, updating the model is not the trick.  The trick is turning off the Plant (effectors) for the purpose of thinking about actions.  Being able to talk to oneself is an outgrowth of being able to think about actions without acting.  The model can only be updated when action is taken, because that’s the only time the model can get an error signal.  Well, that’s true when the model models an internal process.  It’s an interesting question to consider when a model of an external process gets updated.

An appeal to parsimony would suggest that a model of an external process gets updated when the model is being used, shall I say, unconsciously.  That is, if we assume a model of an external process is some kind of generalization of a model of an internal process, then the circumstances under which a model of an external process is updated will be some kind of generalization of the circumstances under which a model of an internal process is updated.
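Here is a toy sketch of the claim that a model of an internal process can only be corrected when action is actually taken, because only then is there an error signal. The plant, its gain, the learning rate, and the update rule are all invented for illustration; nothing here is meant as a claim about how the nervous system actually does it.

```python
# Toy illustration: the model is corrected only when the real plant runs,
# because only then does an observed outcome exist to compare against.
def plant(command: float) -> float:
    """The real effector (unknown to the model): outcome = 1.8 * command."""
    return 1.8 * command


class ForwardModel:
    def __init__(self):
        self.gain = 1.0  # the model's current guess at the plant's gain

    def predict(self, command: float) -> float:
        return self.gain * command

    def update(self, command: float, observed: float, lr: float = 0.1) -> None:
        # Error-driven correction: only possible when acting produced an outcome.
        error = observed - self.predict(command)
        self.gain += lr * error * command


model = ForwardModel()

imagined = model.predict(2.0)  # off line: a prediction, but nothing to learn from

for _ in range(50):            # on line: act, observe, and correct the model
    command = 2.0
    observed = plant(command)
    model.update(command, observed)

print(round(model.gain, 2))    # approaches 1.8 only because real actions were taken
```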

As an off-the-wall aside, this might account for the difficulty humans experience in psychotherapeutic circumstances.  Simply thinking about one’s worldview and recognizing that it should be changed is, by my model, not going to change one’s worldview.  In effect, change to an unconscious process can only take place unconsciously.

Margaret Foerster (personal communication) indicates that in her therapeutic experience, change begins when a patient is confronted with a highly specific example of his/her maladaptive behavior.  Not all highly specific examples have the effect of initiating change, but examples that do are recognizable by the therapist from the reaction of the patient, who also recognizes the significance of the example at a “gut” level.  That is, the example creates a state in which the predictions of the patient’s internal model do not match the actual results.  To the extent that the internal model was invoked automatically rather than used analytically, the mismatch triggers (by my hypothesis) the (automatic) model correction (learning) process.

Foerster observes that in the sequel to such a significant therapeutic intervention, the patient experiences (and reports) additional related mismatches.  I don’t know that my model has anything to say about the fact that such mismatches are experienced consciously.  Nonetheless, I would be surprised to find that an unconscious model would change in a major way in response to a single mismatch.  I would rather expect gradual change based on accumulating evidence of consistently erroneous predictions.  On the other hand, I would expect the model to respond fairly rapidly to correct itself.  Notice that I say “correct itself”.  That is my way of indicating that the process is unconscious and not directly accessible, although significant change will manifest itself in the form of a recognizably (to both patient and therapist) different “way of thinking”.

Actually, I don’t think I have to worry about the fact that the mismatches Foerster describes are experienced consciously.  On reflection, I think mismatches are experienced consciously.  For example, when one is not paying attention and steps off a curb, the mismatch between expectation (no curb) and reality (sudden drop in the level of the ground) is most assuredly experienced consciously.

But back to the double life of models: it is all very well to say that a model can be used off line and that the experience of so doing is a mental image of some sort, but aside from the question of how a model is placed on line or off line, there remains the question of how inputs to the off-line model are created.  Not to mention, of course, the question of why we “experience” anything.  So far, it would seem that there is nothing in a description of human behavior from the outside (for example, as seen by a Martian) that would lead one to posit “experience”, aside, that is, from our heterophenomenological reports of “experience”.  That’s still a stumper.

Query: do heterophenomenological reports of “experience” require the faculty of language?  Without the faculty of language, how could one obtain a heterophenomenological report?  How could one interpret such a report?  Is it the case that the only way a Martian can understand a heterophenomenological report is to learn the language in which the report is made?  How much of the language?

Would it be sufficient for a Martian who understood only some form of pidgin like “me happy feelings now”?  The point seems to be that somehow English speakers generally come to understand what the word “experience” means and can use it in appropriate heterophenomenological contexts.  What would be necessary for a Martian to understand what “experience” means?

030715

Tuesday, July 15th, 2003

030715

Hauser, Chomsky, and Fitch in their Science review article (2002) indicate that “comparative studies of chimpanzees and human infants suggest that only the latter read intentionality into action, and thus extract unobserved rational intent.”  This goes along with my own conviction that internal models are significant in the phenomenon of human self-awareness.

Hauser, Chomsky, and Fitch argue that “the computational mechanism of recursion” is critical to language ability and is “recently evolved and unique to our species.”  I am well aware that many have died attempting to oppose Chomsky and his insistence that practical limitations have no place in the description of language capabilities.  I am reminded of Dennett’s discussion of the question of whether zebra is a precise term, that is, whether there exists anything that can be correctly called a zebra.  It seems fairly clear that Chomsky assumes that language exists in the abstract (much the way we naively assume that zebras exist in the abstract) and then proceeds to draw conclusions based on that assumption.  The alternative is that language, like zebras, is in the mind of the beholder, but that when language is placed under the microscope it becomes fuzzy at the boundaries precisely because it is implemented in the human brain and not in a comprehensive design document.

Uncritical acceptance of the idea that our abstract understanding of the computational mechanism of recursion is anything other than a convenient crutch for understanding the way language is implemented in human beings is misguided.  In this I vote with David Marr (1982) who believed that neither computational iteration nor computational recursion is implemented in the nervous system.

On the other hand, it is interesting that a facility which is at least a first approximation to the computational mechanism of recursion exists in human beings.  Perhaps the value of the mechanism from an evolutionary standpoint is that it does make possible the extraction of intentionality from the observed behavior of others.  I think I want to turn that around.  It seems reasonable to believe that the ability to extract intentionality from observed behavior would confer an evolutionary advantage.  In order to do that, it is necessary to have or create an internal model of the other in order to get access to the surmised state of the other.

Once such a model is available it can be used online to surmise intentionality and it can be used off line for introspection, that is, it can be used as a model of the self.  Building from Grush’s idea that mental imagery is the result of running a model in off line mode, we may ask what kind of imagery would result from running a model of a human being off line.  Does it create an image of a self?

Alternatively, since all of the other models proposed by Grush are models of some aspect of the organism itself, it might be more reasonable to suppose that a model of the complete self could arise as a relatively simple generalization of the mechanism used in pre-existing models of aspects of the organism.

If one has a built-in model of one’s self in the same way one has a built-in model of the musculoskeletal system, then language learning may become less of a problem.  Here’s how it would work.  At birth, the built-in model is rudimentary and needs to be fine-tuned to bring it into closer correspondence with the system it models.  An infant is only capable of modeling the behavior of another infant.  Adults attempting to teach language skills to infants use their internal model to surmise what the infant is attending to and then name it for the child.  To the extent that the adult has correctly modeled the infant and the infant has correctly modeled the adult (who has tried to make it easy to be modeled), the problem of establishing what it is that a word refers to becomes less problematical.

030714

Monday, July 14th, 2003

030714

Here’s what’s wrong with Dennett’s homunculus exception.  It’s a bit misleading to discuss a flow chart for a massively parallel system.  We’re accustomed to high bandwidth interfaces between modules where high bandwidth is implemented as a high rate of transmission through a narrow pipe.  In the brain, high bandwidth is implemented as a leisurely rate of transmission through the Mississippi river delta.

030711

Friday, July 11th, 2003

030711

The body is hierarchical in at least one obvious way.  In order to make a voluntary movement, a particular muscle is targeted, but the specific muscle cell doesn’t matter.  What matters is the strength of the contraction.  In assessing the result of the contraction, what matters is the change in position of the joint controlled by the muscle, not the change in position of specific muscle cells.  Thus, an internal model needs only to work with intensity of effort, predicted outcome, and perceived outcome.  This kind of model is something that computer “neural networks” can shed some light on.  Certainly, there are more parameters, like “anticipated resistance” but there are probably not an overwhelming number of them.

The point of this is that, as Grush (2002) points out, the internal model has to be updateable in order to enable the organism to handle changes to its own capabilities over time.  At least at this level, Hebbian learning (as if I knew exactly what that denoted) seems sufficient.
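For concreteness, here is a minimal sketch of what “Hebbian learning” usually denotes: a connection strengthens when the activity on its two sides is correlated. Plain Hebbian growth is unbounded, so the sketch uses Oja’s normalization to keep the weights stable; that choice, and all the numbers, are mine for illustration, not a claim about how the musculoskeletal model is actually maintained.

```python
# Hebbian-style learning sketch: weights grow with correlated pre/post activity.
# Oja's rule adds a decay term so the weight vector stays bounded.
import random


def oja_step(w, x, lr=0.02):
    y = sum(wi * xi for wi, xi in zip(w, x))   # post-synaptic activity
    return [wi + lr * (y * xi - y * y * wi)    # Hebb term plus Oja decay term
            for wi, xi in zip(w, x)]


random.seed(0)
w = [0.1, 0.1]
for _ in range(2000):
    s = random.gauss(0, 1)
    # two inputs carrying the same underlying signal plus a little noise
    x = [s + random.gauss(0, 0.1), s + random.gauss(0, 0.1)]
    w = oja_step(w, x)

print([round(wi, 2) for wi in w])  # both weights settle near 0.7: the
                                   # correlation between the inputs has been learned
```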

030710

Thursday, July 10th, 2003

030710

Here’s what Daniel Dennett has to say (passage said to be cited in Pinker 1997, How the Mind Works, p. 79) about homunculi:

Homunculi are bogeymen only if they duplicate entire the talents they are rung in to explain….  If one can get a team or committee of relatively ignorant, narrow-minded, blind homunculi to produce the intelligent behavior of the whole, this is progress.  A flowchart is typically the organizational chart of a committee of homunculi (investigators, librarians, accountants, executives); each box specifies the homunculus by prescribing a function without saying how it is accomplished (one says, in effect: put a little man in there to do the job).  If we look closer at the individual boxes we see that the function in each is accomplished by subdividing it via another flowchart into still smaller, more stupid homunculi.  Eventually this nesting of boxes within boxes lands you with homunculi so stupid (all they have to do is remember whether to say yes or no when asked) that they can be, as one says, “replaced by a machine.”  One discharges fancy homunculi from one’s scheme by organizing armies of idiots to do the work.

030708 – Computer consciousness

Tuesday, July 8th, 2003

030708 – Computer consciousness

I begin to understand the temptation to write papers that take the form of diatribes against another academic’s position.  I just found the abstract of a paper written by someone named Maurizio Tirassa in 1994.  In the abstract he states, “I take it for granted that computational systems cannot be conscious.”

Oh dear.  I just read a 1995 response to Tirassa’s paper by someone in the department of philosophy and the department of computer science at Rensselaer Polytechnic Institute who says we must remain agnostic toward dualism.  Note to myself: stay away from this kind of argument; it will just make me crazy.

For the record: I take it for granted that computational systems can be conscious.  I do not believe in dualism.  There is no Cartesian observer.

I do like what Rick Grush has to say in his 2002 article “An introduction to the main principles of emulation: motor control, imagery, and perception”.  He posits the existence of internal models that can be disconnected from effectors and used as predictors.

Grush distinguishes between simulation and emulation.  He states that, “The difference is that emulation theory claims that mere operation of the motor centers is not enough, that to produce imagery they must be driving an emulator of the body (the musculoskeletal system and relevant sensors).”  He contrasts what he calls a “motor plan” with “motor imagery”.  “Motor imagery is a sequence of faux proprioception.  The only way to get … [motor imagery] is to run the motor plans through something that maps motor plans to proprioception and the two candidates here are a) the body (which yields real proprioception), and b) a body emulator (yielding faux proprioception).”
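A minimal sketch of the distinction, under the assumption that a motor plan is just a sequence of commands: the same plan can be routed either to the body, yielding real proprioception, or to a body emulator, yielding faux proprioception, i.e., motor imagery. The class and function names are mine, not Grush’s.

```python
# Same motor plan, two destinations: the body (real proprioception) or an
# emulator of the body (faux proprioception, i.e., motor imagery).
class Body:
    """The actual musculoskeletal system: a command really moves the joint."""
    def __init__(self):
        self.joint_angle = 0.0

    def execute(self, command: float) -> float:
        self.joint_angle += command      # the limb actually moves
        return self.joint_angle          # real proprioception


class BodyEmulator:
    """An internal model of the body with its own private state."""
    def __init__(self):
        self.predicted_angle = 0.0

    def execute(self, command: float) -> float:
        self.predicted_angle += command  # nothing moves
        return self.predicted_angle      # faux proprioception (imagery)


def run_plan(plan, plant):
    return [plant.execute(cmd) for cmd in plan]


motor_plan = [0.2, 0.2, -0.1]
acted = run_plan(motor_plan, Body())             # overt movement, real feedback
imagined = run_plan(motor_plan, BodyEmulator())  # imagery, no movement
print(acted, imagined)  # identical trajectories; only one involved the body
```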

What’s nice about this kind of approach is that its construction is evolutionarily plausible.  That is, the internal model is used both for the production of actual behavior and for the production of predictions of behavior.  Evolution seems to like to repurpose systems so long as the systems are reasonably modular.

Grush distinguishes between what he calls “modal” and “amodal” models.  “Modal” models are specific to a sensory modality (e.g., vision, audition, proprioception) and “amodal” models (although he writes as if there were only one) model the organism in the universe.  I do not much care for the terminology because I think it assumes facts not in evidence, to wit: that the principal distinguishing characteristic is the presence or absence of specificity to a sensory modality.  I also think it misleads in that it presumes (linguistically at least) to be an exhaustive categorization of model types.

That said, the most interesting thing in Grush for me is the observation that the same internal model can be used both to guide actual behavior and to provide imagery for “off-line” planning of behavior.  I had been thinking about the “on-line” and “off-line” uses of the language generation system.  When the system is “on-line”, physical speech is produced.  When the system is “off-line”, its outputs can be used to “talk to oneself” or to write.  Either way, it’s the same system.  It doesn’t make any sense for there to be more than one.

When a predator is crouched, waiting to spring as soon as the prey it has spotted comes into range, arguably it has determined how close the prey has to come for a pounce to be effective.  The action plan is primed, it’s a question of waiting for the triggering conditions (cognitively established by some internal mental model) to be satisfied.

It is at least plausible to suggest that if evolution developed modeling and used it to advantage in some circumstances, modeling will be used in other circumstances where it turns out to be beneficial.  I suppose this is a variant of Grush’s Kalman filter argument, which says that Kalman filters turn out to be a good solution to a problem that organisms have, and it would not be surprising to discover that evolution has hit upon a variant of Kalman filters to assist in dealing with that problem.
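For readers who have not met one, here is a one-dimensional Kalman filter, included only to make the argument concrete: the filter maintains an internal estimate (a tiny model of the world), predicts it forward, and corrects the prediction against noisy observations. The parameters are arbitrary illustration, not a model of any real organism.

```python
# A one-dimensional Kalman filter tracking a scalar that is assumed
# to be roughly constant, observed through noise.
import random


def kalman_1d(observations, q=0.001, r=0.5):
    """q: process-noise variance, r: measurement-noise variance."""
    x, p = 0.0, 1.0                  # initial estimate and its uncertainty
    estimates = []
    for z in observations:
        p = p + q                    # predict: uncertainty grows over time
        k = p / (p + r)              # Kalman gain: how much to trust the data
        x = x + k * (z - x)          # update: blend prediction and observation
        p = (1 - k) * p
        estimates.append(x)
    return estimates


random.seed(1)
true_position = 3.0
noisy = [true_position + random.gauss(0, 0.7) for _ in range(40)]
print(round(kalman_1d(noisy)[-1], 2))  # ends up close to the true value of 3.0,
                                       # much closer than a typical single reading
```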

It’s clear (I hope, and if not, I’ll make an argument as to why) that a mobile organism gains something by having some kind of model (however rudimentary) of its external environment.  In “higher” organisms, that model extends beyond the range of that which is immediately accessible to its senses.  It’s handy to have a rough idea of what is behind one without having to look around to find out.  It’s also handy to know where one lives when one goes for a walk out of sight of one’s home.

Okay, so we need an organism-centric model of the universe, that is, one that references things outside the organism to the organism itself.  But more interestingly, does this model include a model of the organism itself?

Certain models cannot be inborn (or at least the details cannot be).  What starts to be fun is when the things modeled have a mind of their own (so to speak).  It’s not just useful to humans to be able to model animals and other humans (to varying degrees of specificity and with varying degrees of success).  It would seem to be useful to lots of animals to be able to model animals and other conspecifics.

What is the intersection of “modeling” with “learning” and “meaning”?  How does “learning” (a sort of mental sum of experience) interact with ongoing sensations?  “Learning” takes place with respect to sensible (that is, capable of being sensed) events involving the organism, including things going on inside the organism that are sensible.  Without letting the concept get out of hand, I have said in other contexts that humans are voracious pattern-extractors.  “Pattern” in this context means a model of how things work.  That is, once a pattern is “identified” (established, learned), it tends to assert its conclusions.

This is not quite correct.  I seem to be using “pattern” in several different ways.  Let’s take it apart.  The kicker in just about every analysis of “self” and “consciousness” is the internal state of the organism.  Any analysis that fails to take into account the internal state of the organism at the time a stimulus is presented is not, in general, going to do well in predicting the organism’s response.  At the same time, I am perfectly willing to assert that the organism’s response—any organism’s response—is uniquely determined by the stimulus (broadly construed) and the organism’s state (also broadly construed).  Uniquely determined.  Goodbye free will.  [For the time being, I am going to leave it to philosophers to ponder the implications of this fact.  I am sorry to say that I don’t have a lot of faith that many of them will get them right, but some will.  This is just one of many red herrings that make it difficult to think about “self” and “consciousness”.]

Anyway, when I think about the process, I think of waves of data washing over and into the sensorium (a wonderfully content-free word).  In the sensorium are lots of brain elements (I’m not restricting this to neurons because there are at least ten times as many glia listening in and adding or subtracting their two cents) that have been immersed in this stream of information since they became active.  They have “seen” a lot of things.  There have been spatio-temporal-modal patterns in the stream, and post hoc ergo propter hoc many of these patterns have been “grooved”.  So, when data in the stream exhibit characteristics approximating some portion of a “grooved” pattern, other brain elements in the groove are activated to some extent, the extent depending on all sorts of things, like the “depth” of the “groove”, the “extent” of the match, etc.

In order to think about this more easily, remember that the sensorium does not work on just a single instantaneous set of data.  It takes some time for data to travel from neural element to neural element.  Data from “right now” enter the sensorium and begin their travel “right now”, hot on the heels of data from just before “right now”, and cool on the heels of data from a bit before “right now” and so on.  Who knows how long data that are already in the sensorium “right now” have been there.  [The question is, of course, rhetorical.  All the data that ever came into the sensorium are still there to the extent that they caused alterations in the characteristics of the neural elements there.  Presumably, they are not there in their original form, and more of some are there than of others.]  The point is that the sensorium “naturally” turns sequential data streams into simultaneous data snapshots.  In effect, the sensorium deals with pictures of history.
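A sketch of that observation under one simple assumption: if signals take time to propagate, then any read-out taken “right now” contains data from several different past moments at once. A tapped delay line is the simplest such arrangement; the brain is obviously not literally this, but it makes the sequential-to-simultaneous conversion concrete.

```python
# A tapped delay line: a single simultaneous snapshot whose elements
# come from different moments in the past.
from collections import deque


class DelayLine:
    """Holds the last n inputs so one read-out sees a stretch of history."""
    def __init__(self, n_taps: int):
        self.taps = deque([0.0] * n_taps, maxlen=n_taps)

    def push(self, value: float) -> None:
        self.taps.appendleft(value)   # newest value enters, oldest falls off

    def snapshot(self) -> list:
        return list(self.taps)        # a "picture of history", newest first


line = DelayLine(n_taps=5)
for value in [0.1, 0.4, 0.9, 0.3, 0.7, 0.2]:
    line.push(value)

print(line.snapshot())  # [0.2, 0.7, 0.3, 0.9, 0.4] -- six inputs arrived in
                        # sequence; the five most recent are present all at once
```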

Now back to patterns.  A pattern may thus be static (as we commonly think of a pattern), and at the same time represent a temporal sequence.  In that sense, a pattern is a model of how things have happened in the past.  Now note that in this massively parallel sensorium, there is every reason to believe that at any instant many many patterns have been or are being activated to a greater or lesser extent and the superposition (I don’t know what else to call it) of these patterns gives rise to behavior in the following way.

Some patterns are effector patterns.  They are activated (“primed” is another term used here, meaning activated somewhat, but not enough to be “triggered”) by internal homeostatic requirements.  I’m not sure I am willing to state unequivocally that I believe all patterns have an effector component, but I’m at least willing to consider it.  Maybe not.  Maybe what I think is that data flows from sensors to effectors and the patterns I am referring to shape and redirect the data (which are ultimately brain element activity) into orders that are sent to effectors.

That’s existence.  That’s life.  I don’t know what in this process gives rise to a sense of self, but I think the description is fundamentally correct.  Maybe the next iteration through the process will provide some clues.  Or the next.  Or the next.

Hunger might act in the following way.  Brain elements determine biochemically and biorhythmically that it’s time to replenish the energy resources.  So data begin to flow associated with the need to replenish the energy resources.  That primes patterns associated with prior success in replenishing the energy resources.  A little at first.  Maybe enough so that if you see a meal you will eat it.  Not a lot can be hard-wired (built-in) in this process.  Maybe as a baby there’s a mechanism (a built-in pattern) that causes fretting in response to these data.  But basically, what are primed are patterns the organism has learned that ended up with food being consumed.  By adulthood, these patterns extend to patterns as complex as going to the store, buying food, preparing it, and finally consuming it.
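A sketch of the priming/triggering distinction as I have described it, with hunger as the running example: a pattern accumulates activation from several sources and only produces its output when the activation crosses a trigger threshold. The numbers and names are illustrative only.

```python
# Priming raises a pattern's activation; triggering happens only when the
# accumulated activation crosses a threshold.
class Pattern:
    def __init__(self, name: str, threshold: float = 1.0):
        self.name = name
        self.threshold = threshold
        self.activation = 0.0

    def prime(self, amount: float) -> None:
        """Raise activation without necessarily reaching the trigger point."""
        self.activation += amount

    def triggered(self) -> bool:
        return self.activation >= self.threshold


go_eat = Pattern("go to the kitchen and eat")

go_eat.prime(0.4)          # internal state: a mild hunger signal
print(go_eat.triggered())  # False -- primed, but not triggered

go_eat.prime(0.7)          # external stimulus: the sight of food
print(go_eat.triggered())  # True -- combined activation crosses the threshold
```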

This is not to say that the chain of determinism imposes rigid behaviors.  Indeed, what is triggered deterministically is a chain of opportunism.  Speaking of which, I have to go to the store to get fixings for dinner.  Bye.