Archive for the ‘computation’ Category

090226 (originally 081224) – Is all computation epiphenomenal?

Thursday, February 26th, 2009

081224

Is all computation epiphenomenal?

[090226]

Is COMPUTATION a concept with no extension?  In other words, does computation always require an intensional context?  Maybe this is what Searle is getting at when he insists that computation is in the mind of the beholder.  It would seem that there are quite a few such concepts, e.g., METAPHYSICS, CAUSE, TRUTH, ONTOLOGY, EPISTEMOLOGY, FREEDOM, HAPPINESS, etc.  Is it the case that only concepts whose content is physical actually have an extension?  Is even that consolation ephemeral?  Does the unending cycle that is the external environment acting upon the internal environment acting upon the external environment acting upon the internal environment … ad infinitum necessarily entail only probabilistic (Bayesian) certainties?  Or does it entail only intensional certainties (whatever that may mean)?

Fodor 2008 (n.18) says that ‘it’s probable that …’ is extensional and unable to reconstruct intensionality in any form.  “An intensional context is one in which the substitution of coextensive expressions is not valid.”  (n.1)  But isn’t it the case that ‘it’s probable that …’ becomes intensional if ‘…’ is replaced by an intensional attribute, as, for example, if Oedipus were to say, “It’s probable that my mother dwells many leagues hence”?
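
Fodor’s substitution test is easy to dramatize in code.  The sketch below is mine, not Fodor’s (the referent table and the belief set are invented for the example), but it shows substitution of coextensive terms preserving truth in an extensional context and failing in an intensional one:

# Hypothetical illustration: substitution of coextensive terms is
# valid in extensional contexts, invalid inside a belief context.
referent = {"Jocasta": "person_42", "Oedipus's mother": "person_42"}

# Oedipus's belief box stores sentences, not referents.
oedipus_believes = {"it's probable that Jocasta dwells many leagues hence"}

s1 = "it's probable that Jocasta dwells many leagues hence"
s2 = "it's probable that Oedipus's mother dwells many leagues hence"

# Extensional context: substitution preserves truth.
print(referent["Jocasta"] == referent["Oedipus's mother"])  # True

# Intensional context: the belief is sensitive to the sentence itself,
# not to what the sentence's terms denote.
print(s1 in oedipus_believes)  # True
print(s2 in oedipus_believes)  # False, though the terms are coextensive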

Intensionality is about invariants and irrelevancies, about fixed and free parameters that map via a characteristic transduction process to and from an environmental state that is extensional content (where coextensive expressions are indistinguishable).  Intensionality develops in both evolutionary and epigenetic time.  It is easy to get confused about what goes on here.  That seems to be what the idea of ‘rigid designators’ is about.

In the context of computer science, the programmer has intentions, but the program has only intensions (with an s).  Or at least that is the way things seem now.[1]  The fact that we are willing to accept in this statement the attribution of intentionality to the programmer is significant, because it suggests that the boundary between intentionality and intensionality (with an s) may shift about depending on the context of the polemic.  This is a puzzling thought.  Shouldn’t the intentionality / intensionality distinction be essential?  It might be, for example, that Oedipus the programmer writes an algorithm to determine, based on some kind of survey data, which women are mothers and of whom.  The (incorrect) program he writes looks like the following (rendered here as runnable Python):

def mother_of(w):
    # Oedipus's (incorrect) classifier: decide whom w is the mother of.
    if not w.married:
        # The bug lives here: an unmarried woman is never counted as
        # a mother, even when the data show that she has children.
        return None
    if w.children:
        return w.children   # w is the mother of these children
    return None             # married but childless: not a mother

It’s not that Oedipus thinks that an unmarried woman with children is not a mother; he just writes the program incorrectly.  So, the extension of the world-at-large’s intensional concept of MOTHER-OF[2] differs from the extension of Oedipus’s intensional concept of MOTHER-OF, which differs from the extension of the intensional concept MOTHER-OF that his program implements.  This just goes to show that the wise child knows his own mother and that one person’s extension may be another’s intension.
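
To make the three-way divergence concrete, here is a sketch continuing the function above (the Woman record and the survey data are invented for the illustration); feed the classifier an unmarried woman who has children:

from dataclasses import dataclass, field

@dataclass
class Woman:
    name: str
    married: bool
    children: list = field(default_factory=list)

# By the world-at-large's concept, Jocasta is the mother of Oedipus;
# suppose the survey data record her as unmarried (a widow).
jocasta = Woman("Jocasta", married=False, children=["Oedipus"])

print(mother_of(jocasta))  # None: the program denies she is a mother

The program’s MOTHER-OF thus excludes Jocasta, while the world-at-large’s MOTHER-OF (note [2]) includes her.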

‘WATER is H2O’ and ‘it is probable that WATER is H2O’

This is an epistemological problem.  Epistemological WATER is epistemological H2O only insofar as platonic WATER and platonic H2O (if such there be) have interacted in the context of the history of the universe, a history that includes the evolution of human beings capable of framing the concepts and doing the science necessary to connect the two.  But the problem is the same as the one Fodor 2008 raises in relation to evolution and the essential difference between selection and selection for.  Muddy water and water containing other impurities aside, H2O arguably isn’t a natural kind, since there are three naturally occurring isotopes of hydrogen and three of oxygen, and all of those can be in distinct quantum states that can, given appropriate laboratory equipment, be physically distinguished from one another.

As Fodor 2008 observes in a slightly different context, “what’s selected underdetermines what’s selected for because actual outcomes always underdetermine intentions.”  (p.6)  This is as true when doing science as it is when doing evolution: what’s observed underdetermines what happened, because actual observations always underdetermine total postdiction of experimental conditions.  You can refine a bit, but you can’t pin down, especially when you try to pin things down so precisely that you are in the realm of Heisenberg uncertainty and quantum mechanical indeterminacy.  So precision as we commonly understand it is a platonic ideal without a real-world correlate and, more to the point, an intensional process that doesn’t have an extension.

Fodor 2008 further observes (p.9) that “who wins a t1 versus t2 competition is massively context sensitive.”  Ditto, whether WATER is H2O or XYZ or both or neither.

===================== Notes =====================

[1]  This is the nature of many a program bug.  The programmatic identification of content from transduced data (the type that the code assigns to those data) may not accurately track the programmer’s intended identification of that content even if the transduction is accurate and the transduced data are sufficient to make the determination.  If the programmer errs in writing the type determination code, the type determination the program makes will err (from the programmer’s standpoint), but no inconsistency will be detectable within the program.
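
A minimal sketch of the situation (the sensor readings and thresholds are hypothetical, invented for this note), in which the type determination is wrong by the programmer’s lights yet internally consistent:

# The programmer INTENDS readings above 100.0 kPa to be typed "high",
# but mistypes the threshold.  The program stays internally consistent:
# every module agrees on the (wrong) extension of "high".
HIGH_PRESSURE_CUTOFF = 1000.0   # bug: the programmer intended 100.0

def classify(reading_kpa):
    # Accurate transduction, sufficient data: the error is purely
    # in the type-determination code.
    return "high" if reading_kpa > HIGH_PRESSURE_CUTOFF else "normal"

def alarm(reading_kpa):
    # Downstream code trusts the type; no inconsistency is detectable.
    return classify(reading_kpa) == "high"

print(classify(250.0))  # "normal" -- wrong by the programmer's intension
print(alarm(250.0))     # False    -- consistently wrong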

[2] Which includes that Jocasta is the mother of Oedipus.

030715

Tuesday, July 15th, 2003

030715

Hauser, Chomsky, and Fitch, in their Science review article (2002), indicate that “comparative studies of chimpanzees and human infants suggest that only the latter read intentionality into action, and thus extract unobserved rational intent.”  This goes along with my own conviction that internal models are significant in the phenomena of human awareness and self-awareness.

Hauser, Chomsky, and Fitch argue that “the computational mechanism of recursion” is critical to language ability and “is recently evolved and unique to our species.”  I am well aware that many have died attempting to oppose Chomsky and his insistence that practical limitations have no place in the description of language capabilities.  I am reminded of Dennett’s discussion of the question of whether zebra is a precise term, that is, whether there exists anything that can be correctly called a zebra.  It seems fairly clear that Chomsky assumes that language exists in the abstract (much the way we naively assume that zebras exist in the abstract) and then proceeds to draw conclusions based on that assumption.  The alternative is that language, like zebras, is in the mind of the beholder, and that when language is placed under the microscope it becomes fuzzy at the boundaries precisely because it is implemented in the human brain and not in a comprehensive design document.

It is misguided to accept uncritically the idea that our abstract understanding of the computational mechanism of recursion is anything other than a convenient crutch for understanding the way language is implemented in human beings.  In this I vote with David Marr (1982), who believed that neither computational iteration nor computational recursion is implemented in the nervous system.
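
A toy illustration of the gap, my sketch rather than Marr’s or Chomsky’s (the language a^n b^n stands in for center-embedding): an idealized recursive recognizer agrees with a resource-bounded implementation only up to a fixed depth.

def matches_unbounded(s):
    # Idealized competence: accept a^n b^n for any n, recursively.
    if s == "":
        return True
    return s[0] == "a" and s[-1] == "b" and matches_unbounded(s[1:-1])

def matches_bounded(s, depth_limit=3):
    # Performance-limited version: give up past a fixed embedding depth.
    if s == "":
        return True
    if depth_limit == 0:
        return False  # the 'practical limitation'
    return (s[0] == "a" and s[-1] == "b"
            and matches_bounded(s[1:-1], depth_limit - 1))

print(matches_unbounded("aaaaabbbbb"))  # True: competence has no bound
print(matches_bounded("aaaaabbbbb"))    # False: performance gives out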

On the other hand, it is interesting that a facility that is at least a first approximation to the computational mechanism of recursion exists in human beings.  Perhaps the value of the mechanism from an evolutionary standpoint is that it makes possible the extraction of intentionality from the observed behavior of others.  I think I want to turn that around.  It seems reasonable to believe that the ability to extract intentionality from observed behavior would confer an evolutionary advantage.  To do that, it is necessary to have or create an internal model of the other in order to get access to the other’s surmised state.

Once such a model is available, it can be used online to surmise intentionality, and it can be used off-line for introspection; that is, it can be used as a model of the self.  Building from Grush’s idea that mental imagery is the result of running a model in off-line mode, we may ask what kind of imagery would result from running a model of a human being off-line.  Does it create an image of a self?
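
A crude sketch of the online / off-line distinction, my gloss rather than anything from Grush (the one-dimensional plant model and the gains are invented): the same emulator is corrected by the senses when online and free-wheels on efferent copies when off-line.

def emulator_step(state, command):
    # Hypothetical forward model: predict the next state of the plant.
    return 0.9 * state + command

def run(commands, sense=None):
    # Online if sense supplies measurements; off-line if sense is None.
    state = 0.0
    trajectory = []
    for t, u in enumerate(commands):
        state = emulator_step(state, u)
        if sense is not None:
            # Online mode: blend the prediction with sensory feedback.
            state += 0.5 * (sense(t) - state)
        trajectory.append(state)
    return trajectory

# Off-line run: the emulator free-wheels on motor commands alone;
# its uncorrected output plays the role of imagery.
imagery = run([1.0, 0.0, 0.0, 0.0])
print(imagery)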

Alternatively, since all of the other models proposed by Grush are models of some aspect of the organism itself, it might be more reasonable to suppose that a model of the complete self could arise as a relatively simple generalization of the mechanism used in pre-existing models of aspects of the organism.

If one has a built-in model of one’s self in the same way one has a built-in model of the musculoskeletal system, then language learning may become less of a problem.  Here’s how it would work.  At birth, the built-in model is rudimentary and needs to be fine-tuned to bring it into closer correspondence with the system it models.  An infant is only capable of modeling the behavior of another infant.  Adults attempting to teach language skills to infants use their internal model to surmise what the infant is attending to and then name it for the child.  To the extent that the adult has correctly modeled the infant and the infant has correctly modeled the adult (who has tried to make it easy to be modeled), the problem of establishing what it is that a word refers to becomes less problematic.
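
A toy version of that story (everything here, the objects, the accuracy parameter, the tallying, is invented for illustration), in which reference is fixed exactly to the extent that the adult’s model of the infant’s attention is accurate:

import random

objects = ["ball", "dog", "cup"]

def adult_surmise(true_focus, accuracy=0.9):
    # The adult's internal model guesses the infant's focus of attention.
    if random.random() < accuracy:
        return true_focus
    return random.choice(objects)

# Each trial: the infant attends to something; the adult names what the
# adult's model says the infant is attending to; the infant pairs the
# heard word with its own focus.
pairings = []
for _ in range(1000):
    focus = random.choice(objects)
    word = adult_surmise(focus)   # the adult names the surmised object
    pairings.append((word, focus))

# Reference succeeds when the heard word matches the attended object.
hits = sum(1 for word, focus in pairings if word == focus)
print(hits / len(pairings))  # ~0.93: model accuracy plus lucky guesses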