
030721 – Consciousness and (philosophical) zombies

Monday, July 21st, 2003

[Added 040426]

Is consciousness an expert system that can answer questions about the behavior of the organism?  That is, does SHRDLU have all the consciousness there is?  Does consciousness arise from the need to have a better i/o interface?  Maybe the answer to the zombie problem is that there are only zombies, so it’s not a problem.

In effect, everything happens automatically.  The i/o system is available to request clarification when the input is ambiguous and to announce the result of the computation as an output report.
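To make that picture concrete, here is a toy sketch in Python (the names and the tiny example are mine, purely hypothetical): an automatic core does all the work, while a thin i/o layer only asks for clarification when the input is ambiguous and announces the result when the core is done.

```python
# Toy sketch (hypothetical names) of the picture above: the core runs
# automatically; a thin i/o layer asks for clarification on ambiguous
# input and reports the result.

def core_process(request: str) -> str:
    """Stand-in for the automatic, unconscious computation."""
    return f"handled '{request}'"

def possible_readings(request: str) -> list[str]:
    """Stand-in for parsing: every reading the request could have."""
    ambiguous = {"pick up the block": ["pick up the red block",
                                       "pick up the green block"]}
    return ambiguous.get(request, [request])

def io_layer(request: str, ask_user) -> str:
    readings = possible_readings(request)
    if len(readings) > 1:                 # ambiguous input: ask which was meant
        request = ask_user(f"Which do you mean? {readings}")
    return core_process(request)          # otherwise just report the result

answer = io_layer("pick up the block",
                  ask_user=lambda question: "pick up the red block")
print(answer)   # -> handled 'pick up the red block'
```

On this caricature, the “conscious” part is nothing more than the clarification-and-reporting layer; everything of substance happens in the core.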

030721 – Consciousness and zombies

The reason the zombie problem and the Chinese room problem are significant is that they are both stand-ins for the physicalism/dualism problem.  That being the case, it seems pointless to continue arguing about zombies and Chinese rooms absent a convincing explanation of how self-awareness can arise in a physical system.  That is the explanation I am looking for.

Ted Honderich (2000) observes that, “Something does go out of existence when I lose consciousness.”  From a systems point of view, loss of consciousness entails loss of the ability (the faculty?) to respond to ordinary stimuli and to initiate ordinary activities; it is characterized by inactivity and unresponsiveness.  Loss of consciousness is distinguished from death in that certain homeostatic functions necessary to the organism’s continued biological existence, but not generally accessible to consciousness, are preserved.

In sleep, the most common form of loss of consciousness, these ongoing homeostatic functions retain the ability to “reanimate” consciousness in response to internal or external stimuli.

Honderich observes that “consciousness can be both effect and cause of physical things.”  This is consistent with my sense that consciousness is an emergent property of the continuous flow of stimuli into the organism and the equally continuous flow of behaviors emanating from it.  I’m not really happy about “emergent property”, but it’s the best I can do at the moment.

Honderich identifies three kinds of consciousness: perceptual consciousness, which “contains only what we have without inference;” reflective consciousness, which “roughly speaking is thinking without perceiving;” and affective consciousness, “which has to do with desire, emotion and so on.”

Aaron Sloman (“The Evolution of What?”  1998) notes that in performing a systems analysis of consciousness, we need to consider “what sorts of information the system has access to…, how it has access to this information (e.g., via some sort of inference, or via something more like sensory perception), [and] in what form it has the information (e.g., in linguistic form or pictorial form or diagrammatic form or something else).”

Sloman also identifies a problem I had noticed independently: in the general case, it is impossible for one to predict what one will do in any given situation.  “In any system, no matter how sophisticated, self-monitoring will always be limited by the available access mechanisms and the information structures used to record the results.  The only alternative to limited self-monitoring is an infinite explosion of monitoring of monitoring of monitoring…  A corollary of limited self-monitoring is that whatever an agent believes about itself on the basis only of introspection is likely to be incomplete or possibly even wrong.”
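Sloman’s point can be put in toy code (hypothetical names, my own illustration): each layer of self-monitoring records only a bounded summary of the layer below it, so whatever the top layer “introspects” has already lost most of the detail.

```python
# Toy sketch (hypothetical names): each monitoring layer keeps only a
# bounded summary of the layer below, so introspective reports are
# necessarily incomplete.

def run_task() -> dict:
    """The actual computation, with far more internal state than any report keeps."""
    return {"steps": 10_000, "dead_ends": 37, "result": 42}

def monitor(trace: dict) -> dict:
    """First-level self-monitoring: a coarse summary of the trace."""
    return {"succeeded": "result" in trace,
            "effort": "high" if trace["steps"] > 1_000 else "low"}

def meta_monitor(report: dict) -> str:
    """Monitoring of the monitoring: coarser still; the detail is gone for good."""
    return "it went fine" if report["succeeded"] else "something went wrong"

trace = run_task()
report = monitor(trace)        # the 37 dead ends are already invisible here
print(meta_monitor(report))    # the introspective report: "it went fine"
```

The only way to avoid the loss would be a monitor as detailed as the task itself, and then a monitor of that monitor, and so on without end.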

Sloman (and others), in discussing what I would call levels of models or types of models, identifies “a reactive layer, a deliberative layer, and a meta management (or self-monitoring) layer.”
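As a rough illustration (hypothetical names, not Sloman’s code), the three layers might be caricatured like this: a reactive layer maps stimuli straight to responses, a deliberative layer plans when no reflex applies, and a meta-management layer watches what the other two did.

```python
# Toy caricature (hypothetical names) of the three layers: reactive,
# deliberative, and meta-management (self-monitoring).

from typing import Optional

class ReactiveLayer:
    def respond(self, stimulus: str) -> Optional[str]:
        reflexes = {"loud noise": "startle"}
        return reflexes.get(stimulus)          # immediate, no planning

class DeliberativeLayer:
    def plan(self, stimulus: str) -> str:
        return f"weigh options for '{stimulus}' and pick one"

class MetaManagementLayer:
    def review(self, stimulus: str, action: str) -> None:
        print(f"noted: responded to '{stimulus}' with '{action}'")

def act(stimulus: str) -> str:
    reactive, deliberative, meta = ReactiveLayer(), DeliberativeLayer(), MetaManagementLayer()
    action = reactive.respond(stimulus) or deliberative.plan(stimulus)
    meta.review(stimulus, action)              # self-monitoring of the episode
    return action

act("loud noise")        # handled by the reactive layer
act("chess position")    # falls through to deliberation
```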