Archive for July, 2003

030710

Thursday, July 10th, 2003

Here’s what Daniel Dennett has to say (passage said to be cited in Pinker 1997, How the Mind Works, p. 79) about homunculi:

Homunculi are bogeymen only if they duplicate entire the talents they are rung in to explain….  If one can get a team or committee of relatively ignorant, narrow-minded, blind homunculi to produce the intelligent behavior of the whole, this is progress.  A flowchart is typically the organizational chart of a committee of homunculi (investigators, librarians, accountants, executives); each box specifies the homunculus by prescribing a function without saying how it is accomplished (one says, in effect: put a little man in there to do the job).  If we look closer at the individual boxes we see that the function of each is accomplished by subdividing it via another flowchart into still smaller, more stupid homunculi.  Eventually this nesting of boxes within boxes lands you with homunculi so stupid (all they have to do is remember whether to say yes or no when asked) that they can be, as one says, “replaced by a machine.”  One discharges fancy homunculi from one’s scheme by organizing armies of idiots to do the work.
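Dennett’s “armies of idiots” can be made literal in a few lines of code.  In this toy sketch (mine, not Dennett’s, and in Python purely for convenience), the only homunculus is a NAND gate that answers yes or no; committees of them add binary numbers without any member knowing what a number is.

```python
# Each "homunculus" is maximally stupid: asked a yes/no question, it answers
# yes or no.  A NAND gate says no only when asked "are both inputs yes?"
def nand(a, b):
    return not (a and b)

# Committees of idiots form slightly smarter committees.
def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def and_(a, b):
    return not nand(a, b)

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def full_adder(a, b, carry_in):
    s1 = xor(a, b)
    return xor(s1, carry_in), or_(and_(a, b), and_(s1, carry_in))

def add(x, y, bits=8):
    """The "intelligent" behavior -- binary addition -- emerges from an
    army of NAND homunculi, none of which knows what a number is."""
    carry, result = False, 0
    for i in range(bits):
        bit, carry = full_adder(bool(x >> i & 1), bool(y >> i & 1), carry)
        result |= bit << i
    return result

print(add(57, 85))  # 142
```

Each box in the flowchart really is “replaced by a machine” at the bottom; no single gate duplicates the talent (arithmetic) that the whole committee exhibits.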

030709

Wednesday, July 9th, 2003

I think of the sensorium as being something like an extremely resonant bell.  Sensory inputs, or rather the processed remnants of sensory inputs (that is to say, the effects of sensory inputs), prime various patterns in the sensorium and alter the way that subsequent sensory inputs are processed.  This process manifests itself as what we call short-term memory, intermediate-term memory, and long-term memory, as well as some form of learning.  Because we think of memory as the acquisition of factual information and not as the development in the sensorium of patterns or the recognition of patterns, we’re not accustomed to thinking of short-term memory as learning.

Assuming that there is some sort of automatic recirculating mechanism, I wonder if the clear effect favoring recall of the first item of a list in short-term (and long-term) memory experiments is simply an artifact that arises because the first item of a list is generally preceded by silence or by some irrelevant stimulus.  I wonder if short-term memory operates on some kind of more or less fixed time constant.  One might think of an initial stage of processing in which inputs are recirculated after some delay.  This raises the question of whether the observed limit on the number of memory chunks that can be stored in short-term memory is a result of the amount of time it takes for each chunk to be entered.  No, it’s probably much more complicated than that.  There is already an interaction between sensory inputs and pre-existing patterns from the get-go.  That’s why zillions of short-term memory experiments use nonsense syllables.

Rather than thinking of attention as adding processing power to particular sensory inputs it may make more sense to think of attention as a way of suppressing, or at least reducing, the strength of competing sensory inputs.  That of course makes more sense than thinking that the brain has excess processing capacity just lying around waiting to be called into action for the purpose of attention.
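A toy rendering of the idea (the channel names and the suppression factor are invented): attention leaves the attended channel alone and attenuates its competitors, so no extra processing capacity has to be lying around.

```python
def attend(channels, focus, suppression=0.5):
    """Attention as suppression: rather than boosting the attended channel,
    every competing channel is attenuated."""
    return {name: gain if name == focus else gain * (1 - suppression)
            for name, gain in channels.items()}

senses = {"vision": 1.0, "audition": 1.0, "touch": 1.0}
print(attend(senses, "audition"))  # {'vision': 0.5, 'audition': 1.0, 'touch': 0.5}
```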

How long is “now”?  I don’t think the question really has an answer.  I think the heterophenomenological experience of “now” depends on the contents of the recirculating short-term memory buffer.  When I talk about the recirculating short-term memory buffer I mean that at a certain point in the processing of incoming sensory inputs, the processed inputs are fed back to an earlier point in the processing and somehow combined with the current incoming sensory inputs.  At the same time, the processed inputs continue to be further processed.
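Here is one way to sketch what I mean, with placeholder numbers that make no claims about actual neural time constants:

```python
def recirculate(inputs, delay=5, decay=0.6):
    """Toy recirculating buffer: the processed signal from `delay` ticks ago
    is attenuated and mixed back in with the current raw input, while the
    combined signal also continues downstream."""
    ring = [0.0] * delay
    out = []
    for t, x in enumerate(inputs):
        combined = x + decay * ring[t % delay]
        ring[t % delay] = combined
        out.append(combined)
    return out

# A single impulse keeps echoing, ever fainter -- a crude "resonant bell".
print(recirculate([1.0] + [0.0] * 10))
```

On this picture, “now” is however much of the input history is still audible in the ring.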

As I think more about “now” I realize that there are a number of different “now”s depending on the sensory modality.  Well, even that’s not right.  We know from various tachistoscopic experiments that there is a short-term visual buffer with a very short time constant, which suggests that there is a very short visual “now”.  I can’t think of any good evolutionary reason why each modality’s “now” should have the same time constant.

I see that I’ve written “the” recirculating short-term memory buffer.  I certainly don’t know that there’s only one, and I don’t know that any of my conclusions depend on there being only one.  Indeed I think that patterns recirculate with differing time constants depending in some way on the nature (whatever that means) of each pattern.

030708 – Computer consciousness

Tuesday, July 8th, 2003

I begin to understand the temptation to write papers that take the form of diatribes against another academic’s position.  I just found the abstract of a 1994 paper by Maurizio Tirassa.  In the abstract he states, “I take it for granted that computational systems cannot be conscious.”

Oh dear.  I just read a 1995 response to Tirassa’s paper by someone in the department of philosophy and the department of computer science at Rensselaer Polytechnic Institute who says we must remain agnostic toward dualism.  Note to myself: stay away from this kind of argument; it will just make me crazy.

For the record: I take it for granted that computational systems can be conscious.  I do not believe in dualism.  There is no Cartesian observer.

I do like what Rick Grush has to say in his 2002 article “An introduction to the main principles of emulation: motor control, imagery, and perception”.  He posits the existence of internal models that can be disconnected from effectors and used as predictors.

Grush distinguishes between simulation and emulation.  He states that, “The difference is that emulation theory claims that mere operation of the motor centers is not enough, that to produce imagery they must be driving an emulator of the body (the musculoskeletal system and relevant sensors).”  He contrasts what he calls a “motor plan” with “motor imagery”.  “Motor imagery is a sequence of faux proprioception.  The only way to get … [motor imagery] is to run the motor plans through something that maps motor plans to proprioception and the two candidates here are a) the body (which yields real proprioception), and b) a body emulator (yielding faux proprioception).”
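A minimal sketch of the distinction, with invented first-order dynamics standing in for the musculoskeletal system (nothing here is from Grush’s paper):

```python
class BodyEmulator:
    """A forward model: maps motor commands to predicted ("faux")
    proprioception.  The dynamics are placeholders for illustration."""
    def __init__(self):
        self.position, self.velocity = 0.0, 0.0

    def step(self, command, dt=0.1, damping=0.8):
        self.velocity = damping * self.velocity + command * dt
        self.position += self.velocity * dt
        return self.position          # faux proprioception

def motor_imagery(plan):
    """Run a motor plan through the emulator with the effectors
    disconnected: the output sequence is motor imagery, per Grush's
    distinction.  Routing the same plan to the body instead would
    yield real proprioception."""
    emulator = BodyEmulator()
    return [emulator.step(cmd) for cmd in plan]
```

The motor plan alone is just a list of commands; only running it through something body-shaped turns it into imagery.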

What’s nice about this kind of approach is that its construction is evolutionarily plausible.  That is, the internal model is used both for the production of actual behavior and for the production of predictions of behavior.  Evolution seems to like repurposing systems so long as the systems are reasonably modular.

Grush distinguishes between what he calls “modal” and “amodal” models.  “Modal” models are specific to a sensory modality (e.g., vision, audition, proprioception) and “amodal” models (although he writes as if there were only one) model the organism in the universe.  I do not much care for the terminology because I think it assumes facts not in evidence, to wit: that the principal distinguishing characteristic is the presence or absence of specificity to a sensory modality.  I also think it misleads in that it presumes (linguistically at least) to be an exhaustive categorization of model types.

That said, the most interesting thing in Grush for me is the observation that the same internal model can be used both to guide actual behavior and to provide imagery for “off-line” planning of behavior.  I had been thinking about the “on-line” and “off-line” uses of the language generation system.  When the system is “on-line”, physical speech is produced.  When the system is “off-line”, its outputs can be used to “talk to oneself” or to write.  Either way, it’s the same system.  It doesn’t make any sense for there to be more than one.

When a predator is crouched, waiting to spring as soon as the prey it has spotted comes into range, arguably it has determined how close the prey has to come for a pounce to be effective.  The action plan is primed; it’s a question of waiting for the triggering conditions (cognitively established by some internal mental model) to be satisfied.

It is at least plausible to suggest that if evolution developed modeling and used it to advantage in some circumstances, modeling will be used in other circumstances where it turns out to be beneficial.  I suppose this is a variant of Grush’s Kalman filter argument, which says that Kalman filters turn out to be a good solution to a problem that organisms have, so it would not be surprising to discover that evolution has hit upon a variant of Kalman filters to assist in dealing with that problem.
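Grush’s point can be made concrete with the simplest possible case.  In a scalar Kalman filter the predict step runs whether or not a measurement arrives; withholding measurements (None below) leaves the filter running open loop, like an emulator with the senses disconnected.  The noise values are arbitrary.

```python
def kalman_1d(measurements, process_var=1e-3, meas_var=0.25):
    """Minimal scalar Kalman filter tracking a constant quantity."""
    estimate, error = 0.0, 1.0
    history = []
    for z in measurements:
        error += process_var                 # predict (always runs)
        if z is not None:                    # update (only when sensing)
            gain = error / (error + meas_var)
            estimate += gain * (z - estimate)
            error *= 1 - gain
        history.append(estimate)
    return history
```

With steady measurements the estimate converges; with measurements cut off, the estimate coasts on prediction alone, which is exactly the emulator-as-predictor idea.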

It’s clear (I hope, and if not, I’ll make an argument as to why) that a mobile organism gains something by having some kind of model (however rudimentary) of its external environment.  In “higher” organisms, that model extends beyond the range of that which is immediately accessible to its senses.  It’s handy to have a rough idea of what is behind one without having to look around to find out.  It’s also handy to know where one lives when one goes for a walk out of sight of one’s home.

Okay, so we need an organism-centric model of the universe, that is, one that references things outside the organism to the organism itself.  But more interestingly, does this model include a model of the organism itself?

Certain models cannot be inborn (or at least the details cannot be).  What starts to be fun is when the things modeled have a mind of their own (so to speak).  It’s not just useful to humans to be able to model animals and other humans (to varying degrees of specificity and with varying degrees of success).  It would seem to be useful to lots of animals to be able to model animals and other conspecifics.

What is the intersection of “modeling” with “learning” and “meaning”?  How does “learning” (a sort of mental sum of experience) interact with ongoing sensations?  “Learning” takes place with respect to sensible (that is, capable of being sensed) events involving the organism, including things going on inside the organism that are sensible.  Without letting the concept get out of hand, I have said in other contexts that humans are voracious pattern-extractors.  “Pattern” in this context means a model of how things work.  That is, once a pattern is “identified” (established, learned), it tends to assert its conclusions.

This is not quite correct.  I seem to be using “pattern” in several different ways.  Let’s take it apart.  The kicker in just about every analysis of “self” and “consciousness” is the internal state of the organism.  Any analysis that fails to take into account the internal state of the organism at the time a stimulus is presented is not, in general, going to do well in predicting the organism’s response.  At the same time, I am perfectly willing to assert that the organism’s response—any organism’s response—is uniquely determined by the stimulus (broadly construed) and the organism’s state (also broadly construed).  Uniquely determined.  Goodbye free will.  [For the time being, I am going to leave it to philosophers to ponder the implications of this fact.  I am sorry to say that I don’t have a lot of faith that many of them will get them right, but some will.  This is just one of many red herrings that make it difficult to think about “self” and “consciousness”.]

Anyway, when I think about the process, I think of waves of data washing over and into the sensorium (a wonderfully content-free word).  In the sensorium are lots of brain elements (I’m not restricting this to neurons because there are at least ten times as many glia listening in and adding or subtracting their two cents) that have been immersed in this stream of information since they became active.  They have “seen” a lot of things.  There have been spatio-temporal-modal patterns in the stream, and post hoc ergo propter hoc many of these patterns have been “grooved”.  So, when data in the stream exhibit characteristics approximating some portion of a “grooved” pattern, other brain elements in the groove are activated to some extent, the extent depending on all sorts of things, like the “depth” of the “groove”, the “extent” of the match, etc.
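A toy formalization of grooves (nothing here is claimed to be neurally realistic; “depth” is a bare number and “extent of the match” is just cosine similarity):

```python
def match(a, b):
    """Extent of match between a data vector and a stored pattern."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

class Sensorium:
    def __init__(self):
        self.grooves = []                 # [pattern, depth] pairs

    def present(self, data, rate=0.2):
        """Repetition deepens the best-matching groove; sufficiently
        novel data start a shallow groove of their own."""
        best = max(self.grooves, key=lambda g: match(data, g[0]), default=None)
        if best is not None and match(data, best[0]) > 0.9:
            best[1] += rate
        else:
            self.grooves.append([list(data), rate])

    def activations(self, data):
        """Activation scales with both groove depth and extent of match."""
        return [depth * max(match(data, pattern), 0.0)
                for pattern, depth in self.grooves]
```

Presenting the same pattern repeatedly “grooves” it, and later data activate each groove in proportion to depth and match, which is all the post hoc ergo propter hoc machinery I am assuming.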

In order to think about this more easily, remember that the sensorium does not work on just a single instantaneous set of data.  It takes some time for data to travel from neural element to neural element.  Data from “right now” enter the sensorium and begin their travel “right now”, hot on the heels of data from just before “right now”, and cool on the heels of data from a bit before “right now” and so on.  Who knows how long data that are already in the sensorium “right now” have been there.  [The question is, of course, rhetorical.  All the data that ever came into the sensorium are still there to the extent that they caused alterations in the characteristics of the neural elements there.  Presumably, they are not there in their original form, and more of some are there than of others.]  The point is that the sensorium “naturally” turns sequential data streams into simultaneous data snapshots.  In effect, the sensorium deals with pictures of history.
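The stream-to-snapshot idea is easy to caricature as a tapped delay line, where each output is the current datum together with progressively older, fainter traces (depth and fade are arbitrary, and real traces presumably decay far less uniformly):

```python
from collections import deque

def snapshots(stream, depth=4, fade=0.5):
    """Tapped delay line: each output is a simultaneous "picture of
    history" -- the newest datum first, older data attenuated behind it."""
    window = deque([0.0] * depth, maxlen=depth)
    for x in stream:
        window.appendleft(x)
        yield [v * fade ** i for i, v in enumerate(window)]
```

Sequential data in, simultaneous snapshots out: each snapshot contains its own past.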

Now back to patterns.  A pattern may thus be static (as we commonly think of a pattern), and at the same time represent a temporal sequence.  In that sense, a pattern is a model of how things have happened in the past.  Now note that in this massively parallel sensorium, there is every reason to believe that at any instant many, many patterns have been or are being activated to a greater or lesser extent, and the superposition (I don’t know what else to call it) of these patterns gives rise to behavior in the following way.

Some patterns are effector patterns.  They are activated (“primed” is another term used here, meaning activated somewhat, but not enough to be “triggered”) by internal homeostatic requirements.  I’m not sure I am willing to state unequivocally that I believe all patterns have an effector component, but I’m at least willing to consider it.  Maybe not.  Maybe what I think is that data flows from sensors to effectors and the patterns I am referring to shape and redirect the data (which are ultimately brain element activity) into orders that are sent to effectors.

That’s existence.  That’s life.  I don’t know what in this process gives rise to a sense of self, but I think the description is fundamentally correct.  Maybe the next iteration through the process will provide some clues.  Or the next.  Or the next.

Hunger might act in the following way.  Brain elements determine biochemically and biorhythmically that it’s time to replenish the energy resources.  So data begin to flow associated with the need to replenish the energy resources.  That primes patterns associated with prior success replenishing the energy resources.  A little at first.  Maybe enough so that if you see a meal you will eat it.  Not a lot can be hard-wired (built-in) in this process.  Maybe as a baby there’s a mechanism (a built-in pattern) that causes fretting in response to these data.  But basically, what are primed are patterns the organism has learned that ended up with food being consumed.  By adulthood, these patterns extend to patterns as complex as going to the store, buying food, preparing it, and finally consuming it.
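The priming-then-triggering story, reduced to a cartoon with invented numbers:

```python
def hunger_day(prime_per_hour=0.15, meal_cue=0.3, threshold=1.0, hours=6):
    """Toy priming/triggering sketch (every number is invented):
    homeostatic data slowly prime the eating pattern; a sensory cue
    can then push it over threshold, triggering the behavior."""
    activation = prime_per_hour * hours      # priming from hunger data
    primed_only = activation >= threshold    # not yet triggered...
    activation += meal_cue                   # ...until a meal comes into view
    return primed_only, activation >= threshold
```

Priming alone leaves the pattern below threshold; priming plus the cue trips it, which is the difference between being willing to eat and actually eating.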

This is not to say that the chain of determinism imposes rigid behaviors.  Indeed, what is triggered deterministically is a chain of opportunism.  Speaking of which, I have to go to the store to get fixings for dinner.  Bye.