Archive for April, 2008

Computations Need Not Respect Content

Wednesday, April 2nd, 2008

(Fodor 1998, Concepts: Where Cognitive Science Went Wrong, p. 10) “In a nutshell: token mental representations are symbols.  Tokens of symbols are physical objects with semantic properties.”  

(p. 11) “[I]f computation is just causation that preserves semantic values, then the thesis that thought is computation requires of mental representations only that they have semantic values and causal powers that preserve them….  [F]ollowing Turing, I’ve introduced the notion of computation by reference to such semantic notions as content and representation; a computation is some kind of content-respecting causal relation among symbols.  However, this order of explication is OK only if the notion of a symbol doesn’t itself presuppose the notion of a computation.  In particular, it’s OK only if you don’t need the notion of a computation to explain what it is for something to have semantic properties.” 

Fodor’s order of explication, from semantic notions (content and representation) to computation, seems to be backwards.  It is semantics that is to be derived, and it is deterministic physical processes (computation, though not under the weird definition Fodor proposes) from which semantics is to be derived.

My notion of a symbol presupposes the notion of a computation, so both my terminology and the structure of my argument diverge from Fodor’s.  A symbol is a syntactic structure that controls a computation.  As far as I can see, syntax and syntactically controlled computation are all there is to the brain; i.e., mental representations and processes are purely syntactic.  In effect, a symbol is an algorithm whose execution has semantic value via the instantiation of the basis functions over which the algorithm is defined.  That is, the implementation of the basis functions, together with the implementation of the process that governs an algorithm’s execution, is what gives the algorithm semantic value.  Semantics arises from the physical characteristics of the instantiation of the syntactic processor (the brain).  However abstract the algorithmic processes that describe the functioning of the mind, the semantics of those processes depends absolutely on the physical characteristics of the device (in the instant case, the brain) that instantiates them.  In short, syntax is semantics when it comes to the brain.
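To make that concrete, here is a minimal sketch (my illustration; the function names and examples are mine, not Fodor’s): a single abstract algorithm whose “meaning” is fixed entirely by the basis functions supplied to instantiate it.

# A hedged sketch: one abstract algorithm, two instantiations. The algorithm
# itself is pure syntax; whatever semantic value it has comes from the
# concrete basis functions (`combine`, `unit`) supplied when it is run.

def accumulate(items, combine, unit):
    """Fold `items` with `combine`, starting from `unit`. Nothing in this
    body refers to what the items, the operation, or the result mean."""
    result = unit
    for item in items:
        result = combine(result, item)
    return result

# Instantiated over integers with addition, the algorithm "means" summation.
print(accumulate([1, 2, 3], lambda a, b: a + b, 0))               # 6

# Instantiated over sets with union, the same algorithm "means" merging.
print(accumulate([{1}, {2}, {2, 3}], lambda a, b: a | b, set()))  # {1, 2, 3}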

The usual definition of a computation is in terms of Turing Machines.  A Turing Machine has three required elements: 1) a finite alphabet of atomic symbols; 2) a sequentially ordered mutable storage medium (a tape from which symbols of the alphabet may be read and to which they may be written); 3) a set of state transition laws (a program) governing the operation of the machine.  Symbols, in this formulation, have no semantics.  Any meanings associated with individual symbols or arrangements of groups of symbols are imposed from without.  The operation of a Turing Machine proceeds absolutely without reference to any such meanings.
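For concreteness, here is a minimal Turing-machine sketch in Python (my construction; the tape representation and the sample program are illustrative assumptions, not anything from Fodor or Turing).  Note that the transition table is consulted purely by symbol identity; meaning appears nowhere in the machinery.

from collections import defaultdict

def run_turing_machine(program, tape, state="start", halt="halt", blank="_"):
    """program maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left), +1 (right), or 0 (stay)."""
    cells = defaultdict(lambda: blank, enumerate(tape))
    head = 0
    while state != halt:
        state, cells[head], move = program[(state, cells[head])]
        head += move
    return "".join(cells[i] for i in sorted(cells))

# A program over the alphabet {"0", "1", "_"} that flips bits until the blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt",  "_",  0),
}
print(run_turing_machine(flip, "0110"))  # "1001_"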

When Fodor proposes that for the purposes of his definition of computation it must be the case that “the notion of a symbol does not presuppose the notion of a computation”, I hardly know what to make of it.  In order for an object to serve as a symbol in the sense required by a Turing Machine, the object must at a minimum be a member of a specifiable finite set (the alphabet) and susceptible to reading, writing, and identity testing (identity testing is required by the state transition laws).  Thus, the class of objects that can serve as symbols in a computation is not unrestricted, and it is incumbent on Fodor’s theory to show that the objects he proposes to use as symbols satisfy these conditions.
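Those conditions amount to a very thin interface, which can be stated directly (my formalization, not Fodor’s):

# A hedged formalization of the conditions above: a candidate symbol must
# belong to a specifiable finite alphabet and must support reading, writing,
# and identity testing. Nothing semantic is asked of it.

ALPHABET = frozenset({"0", "1", "_"})   # a specifiable finite set

def usable_as_symbol(obj):
    """True iff `obj` meets the minimal conditions for serving as a symbol."""
    try:
        return obj in ALPHABET          # membership exercises identity testing
    except TypeError:                   # objects without equality/hashing fail
        return False

print(usable_as_symbol("1"))     # True
print(usable_as_symbol("cat"))   # False: not a member of the alphabet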

The problem is that the standard model of computation is literally blind to content, so it makes no sense to assert that computation is “some kind of content-respecting causal relation among symbols” (p. 11).  Fodor says his notion follows Turing, but I can’t figure out what of Turing’s he thinks he is following.

A computation is the execution of an algorithm.  The effect of executing an algorithm is determined by a causal process that is sensitive only to identity and ordering.  In other words, the execution of an algorithm is a syntactic process.  In my book, which this is, computation tout court does not in general respect content.  To assert that computation respects content presupposes a definition of content and a definition of what it would mean to respect it.  Moreover, the phrase “computations that respect content” (computations that preserve truth conditions, as Fodor would have it) picks out an extraordinarily constrained subset of the class of computations.  Indeed, I can think of no good reason to believe that the class is non-empty.  Certainly, Fodor is on the hook to provide an argument as to why he thinks such computations exist.  I’ve been taking Fodor to task here, but he’s not the only one who’s been seduced by this idea.  John Searle seems to have the same peculiar notion that computation tout court preserves truth value.
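The claim that execution is sensitive only to identity and ordering can be made concrete with the machine sketched earlier (again my illustration, reusing run_turing_machine and flip from above): relabel the alphabet by any bijection and the computation is unchanged up to that relabeling, so whatever “content” the symbols carried played no causal role.

# Relabel every symbol of the bit-flipping program by a bijection. The
# machine behaves identically up to the relabeling; content is causally inert.

rename = {"0": "A", "1": "B", "_": "#"}

flip_renamed = {
    (state, rename[sym]): (new_state, rename[out], move)
    for (state, sym), (new_state, out, move) in flip.items()
}

print(run_turing_machine(flip, "0110"))                     # "1001_"
print(run_turing_machine(flip_renamed, "ABBA", blank="#"))  # "BAAB#"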

I am not arguing that people can’t think about things and come up with pretty good results; that’s what we do.  But we aren’t 100% accurate, and AI attempts to write programs that are 100% accurate have not achieved complete accuracy either, so the notion of a computation that respects content is blurry indeed.

What bothers me about the subsumption of truth preservation under the rubric of computation is that I think it elides an important issue, viz., what it means to preserve truth or respect content.  I am willing to allow that the brain is phylogenetically structured to facilitate the ontogenetic development of specific algorithms that track certain kinds of characteristics of the physical environment pretty well.  To a first approximation, one might, as Fodor does, say that those algorithms respect content or that they preserve truth conditions, but that still begs the question.  The problem is that whatever the brain (and therefore the mind) does, it does algorithmically.  Preserve Truth is not an algorithm, nor is Respect Content.  To the extent that a process or computation is deterministic, the process is constrained to “respect content” only in the sense that symbol identity, not content, is the sole input to the process and thus the only thing that can determine what is produced.  I still don’t see describing that as somehow “preserving truth,” even under the most trivial interpretation I can put on the phrase.