121029 – Function and Teleology

October 29th, 2012

Tyler Burge talks about representational function as if it is something that the human perceptual system has.  He says that a representational function succeeds if it represents veridically.  But that really doesn’t properly characterize the human perceptual system.  I end up feeling that the analysis is running the wrong way.

What we are concerned about is not what the perceptual system is supposed to do, that is, what we conclude that it should do (although that may be an interesting thing to speculate on), but rather we should ask what the perceptual system actually does and how it does it.  This is the difference between positing an algorithm or a set of requirements and then trying to find evidence for them on the one hand, and on the other, trying to understand what actually happens.

Failure to represent veridically is perhaps causally related to behavior that is suboptimal from the standpoint of an observer with access to the veridical facts, but an organism behaves based on what it has available, not what it would be nicer to have available.  It is already granted that proximal inputs underspecify distal reality.  The point is to make the most of what one gets.


July 24th, 2012


And what would be the standard of veridicality for a perception as of something as red?  Red, as we have come to understand it, is complicated.  Red has to do with the spectral characteristics of illuminants and their intensities, as well as surface reflectances over an area often larger than the area seen as red.  The best way I can think of to test the veridicality of my perception of something as red is to ask around to see if I can find a consensus.  Who knows?  It might be a green orange under peculiar conditions of illumination.  The other way is just to act as if the perception is veridical, actionable, reliable until proven otherwise or until it doesn’t matter any more.

The point of intensionality (with the ‘s’) is that apparently evolution hasn’t come up with a way to infer much in depth about distal reality on the basis of woefully underdetermined proximal stimulation.  But opaque references are more actionable than no references.  It’s a wonder evolution has eventuated in as much as it has.

So, we have an unbewusster Schluss (unconscious inference) mechanism to get opaque specifications of what is out there, and on top of that we somehow acquired a separate mechanism of bewusster Schluss (conscious inference) to discover that Hesperus and Phosphorus are the same heavenly body and to believe experts who tell us so.


July 22nd, 2012


The problem with ‘veridicality’ as a criterion for ‘successful’ perception is that veridicality is an ideal that has no counterpart in the real world.  I would prefer something along the lines of ‘actionable’ to replace ‘veridical’, the idea being that good enough is good enough, and it is unnecessary to set an unattainable standard against which to measure successful representation.

Veridicality is recognized as an idealized standard.  Fodor noted that water includes stuff that may be cloudy and the stuff that is in polluted lakes.  Scientists tell us that jade is a disjunction: jade can be either of two minerals, jadeite and nephrite, with distinct chemical compositions.  In nature, bulk water, even H2O water, is a mixture of molecules formed of the three isotopes of hydrogen—protium (ordinary hydrogen), deuterium, and tritium—and the pure forms of all three isotopic kinds of H2O have different physical and biological characteristics, e.g., pure deuterium water freezes at a different (higher) temperature and is poisonous.

What would be the standard of veridicality for a perception of something as water?  Surely, one would like it to be that water is present; and that pushes matters onto the (middle level) concept WATER, but the semantics of WATER then cannot be that WATER is H2O tout court.  So, we have to abandon the idea that WATER is anything but water.

We can empirically examine stuff that we agree to be water (or jade), and scientists can study the stuff and explicate the discernible variations among things that we successfully perceive to be that stuff.  I don’t think this is intolerable.  It relieves us from having to posit a world filled with ideal exemplars that we have to conceptualize through a glass darkly.

Put another way, concepts and their formation are as much a product of evolution as is whatever ability there is to perceive stuff as of particulars of such concepts.  This is as it should be.  The organisms (us) we are interested in are the product of the interactions of organisms (our ancestors) with their environment.  That the outcome of billions of years of interaction is systems whose pitifully underdetermined proximal inputs provide them with generally actionable information about the external environment just goes to show that evolution—a really stupid process by just about any criterion I can think of—has remarkable consequences.


July 7th, 2012


Can there be representation without representation as?  Perception without perception as?  Can there be perception without concepts?

What is going on when we see an artichoke and can’t remember what it is called?  How does the word ‘artichoke’ fit in with the perception of an artichoke as an ARTICHOKE?  Take carrots (please): if I know English and Spanish and I see a carrot, must I see it as either a CARROT or a ZANAHORIA if I am to see it at all?  (No seeing without concepts.)  What does it mean to say I see a carrot as such?  Is that just a transparent attempt to beg the question of which concept I see it as?  If a cat sees a carrot, it must see a carrot as something.  A CARROTCAT?  It can’t be a CARROT or a ZANAHORIA, although it is surely a carrot.  In Thailand I had for breakfast exotic fruits whose names I never knew, but which I recognized in terms at least of which ones I liked and which ones I didn’t care for.  So at first I saw them as BREAKFAST FRUITS OF UNKNOWN DESIRABILITY.  I’m willing to grant that as a concept.

What if I’m driving, listening to the radio, and thinking about buying an iPad?  I see and react to all sorts of driving-related things: cars, traffic signals, etc., but a lot of the things I see don’t appear to make an appearance in consciousness.  Do I have to say I saw them?  How do I distinguish terminologically between things that made it to (how shall I say?) first-class consciousness and things that were handled by second-class consciousness?  If I can’t say that I saw them, what must I say to indicate that at some level I took them into consideration, because I stayed on the road in my lane and didn’t crash into anything?

Free Will Examined Further

April 20th, 2012

With respect to free will. Lots of philosophers and scientists (including me in a previous incarnation, having since seen the error of my ways) look to quantum effects as a way to square a completely physical universe with the possibility of free will. As I understand it, quantum phenomena are deterministic in the sense that something determinate has to happen as the end result of the collapse of the quantum wave function. Before the collapse we have a determinate probability density function. I take this to be the unvarnished meaning of Kauffman’s remark that “the quantum-classical boundary [is] non-random yet lawless.”

I agree that this implies that it is literally the case that “no algorithmic simulation of the world or ourselves can calculate the real world.” As my friend Mitchell has pointed out to me, infinite precision is not possible because of uncertainty constraints. Either one believes in a hidden variable theory of quantum mechanics or one does not. If one does, then we’re back to plain vanilla determinism, and maybe uncertainty goes away, too. If one does not, then things are still deterministic with a dash of probability thrown in, the effect of which, no matter how “lawful”, succeeds only in constraining the randomness a bit—and all of that subject to uncertainty limitations.

I don’t think randomness, even randomness selected from a deterministic probability density function helps a free will argument at all. What we want is responsibility, not random behavior. The only way I have ever seen quantum indeterminacy used as an argument for the possibility of free will is as part of a dualistic program in which mind and the physical universe are distinct. The idea seems to be that the mind gets to tweak quantum outcomes and that is enough to guarantee freedom and responsibility. Too much hand waving at too small a scale, I say. I don’t believe it for a second.

John Searle in his (2007) book, Freedom & Neurobiology worries about the philosophical consequences of physical determinism, too.  Searle says (p.64) that the conscious, voluntary decision-making aspects of the brain are not deterministic, in effect for our purposes asserting that if there is an algorithm that describes conscious, voluntary decision-making processes, it must be (at least perceived as) non-deterministic. Although it would be possible to extend the definition of an algorithm to include non-deterministic processes, the prospect is distasteful at best. How can we respond to this challenge?

Searle reasons (p.57) that

We have the first-person conscious experience of acting on reasons. We state these reasons in the form of explanations. [T]hey are not of the form A caused B. They are of the form, a rational self S performed act A, and in performing A, S acted on reason R.

He further remarks (p.42) that an essential feature of voluntary decision-making is the readily-perceivable presence of a gap:

In typical cases of deliberating and acting, there is a gap, or a series of gaps between the causes of each stage in the processes of deliberating, deciding and acting, and the subsequent stages.

Searle feels the need to interpret this phenomenological gap as the point at which non-determinism is required in order for free will to assert itself.

Searle takes a non-determinist position in respect of free will as his response to the proposition that in theory absolutely everything is and always has been determined at the level of physical laws.

If the total state of Paris’s brain at t1 is causally sufficient to determine the total state of his brain at t2, in this and in other relevantly similar cases, then he has no free will. (p. 61)

As noted above, the literal total determinism position is formally untenable and a serious discussion requires assessing how much determinism there actually is. As my friend Mitchell also points out, in neuro-glial systems, whether an active element fires (depolarizes) or not may be determined by precisely when a particular calcium ion arrives, a fact that ultimately depends on quantum mechanical effects. On the other hand, Edelman and Gally 2001 have observed that real world neuro-glial systems exhibit degeneracy, which is to say that at some suitable macro level of detail equivalent responses eventuate from a range of non-equivalent stimulation patterns. This would tend to iron out at a macro level the effects of micro level quantum variability. Even so, macro catastrophes (in the mathematical sense) ultimately depend on micro rather than macro variations, again leaving us with not quite total determinism.

To my way of thinking, the presence of Searle’s gap is better explained if we make two assumptions that I do not think to be tendentious: 1) that the outcome of the decision-making process is not known in advance because the decision really hasn’t been made yet and 2) that details of the processes that perform the actual function of reaching a decision are not consciously accessible beyond the distinctive feeling (perception?) that one is thinking about the decision. When those processes converge on, arrive at, a decision, the gap is perceived to end and a high-level summary or abstract of the process becomes available, which we perceive as the reason(s) for, but not cause(s) of, the decision taken.

Presumably, based on what we know of the brain, the underlying process is complex, highly detailed and involves many simultaneous (parallel) deterministic (or as close to deterministic as modern physics allows) evaluations and comparisons. Consciousness, on the other hand, is, as Searle describes it, a unified field, which I take to mean that it is not well suited to comprehend, or deal with, simultaneous awareness of everything that determined the ultimate decision. There is a limit to the number of things (chunks, see Miller 1956) we can keep in mind at one time. Presumably, serious decision-making involves weighing too many chunkable elements for consciousness to deal with. This seems like a pretty good way for evolution to have integrated complex and sophisticated decision-making into our brains.

That the processes underlying our decision-making are as deterministic as physics will allow is, I think, reassuring. We make decisions 1) precisely when we think (perceive) we are making them, 2) on the basis of the reasons and principles we think we act on when making them. It seems to me that this is just what we want from free will. After all, when we say we have free will, we mean that our decisions are the result of who we are, which is in turn the result of several billion years of history in our genes combined with our epigenetic encounters with the world in the form of our own personal histories. If we have formed a moral character, that is where it has come from. When we have to decide something, we do not just suddenly go into mindless zombie slave mode during the gap and receive arbitrary instructions from some unknown free-will agency with which we have no causal physical connection. Rather, we consider the alternatives and somehow arrive at a decision. Nor would it be desirable that the process be non-deterministic in any macro sense. To hold non-determinism to be a virtue would be to argue for the desirability of randomness rather than consistency in decision-making. We do not have direct perceptual access to the details of its functioning, but I do not doubt that what we have is everything one could desire of free will.

[My notes show that this entry dates from July 27, 2009]

090226 (originally 081224) – Is all computation epiphenomenal?

February 26th, 2009


Is all computation epiphenomenal?


Is COMPUTATION a concept with no extension?  In other words does computation always require an intensional context?  Maybe this is what Searle is getting at when he insists that computation is in the mind of the beholder.  It would seem that there are quite a few such concepts, e.g., METAPHYSICS, CAUSE, TRUTH, ONTOLOGY, EPISTEMOLOGY, FREEDOM, HAPPINESS, etc.  Is it the case that only concepts whose content is physical actually have an extension?  Is even that consolation ephemeral?  Does the unending cycle that is the external environment acting upon the internal environment acting upon the external environment acting upon the internal environment … ad infinitum necessarily entail only probabilistic (Bayesian) certainties?  Or does it entail only intensional certainties (whatever that may mean)?

Fodor 2008 (n.18) says that ‘it’s probable that …’ is extensional and unable to reconstruct intensionality in any form.  “An intensional context is one in which the substitution of coextensive expressions is not valid.”  (n.1)  But isn’t it the case that ‘it’s probable that …’ becomes intensional if ‘…’ is replaced by an intensional attribute as, for example if Oedipus were to say, “It’s probable that my mother dwells many leagues hence.”

Intensionality is about invariants and irrelevancies, about fixed and free parameters that map via a characteristic transduction process to and from an environmental state that is extensional content (where coextensive expressions are indistinguishable).  Intensionality develops in both evolutionary and epigenetic time.  It is real easy to get confused about what goes on here.  That seems to be what the idea of ‘rigid designators’ is about.

In the context of computer science, the programmer has intentions, but the program has only intensions (with an s).  Or at least that is the way things seem now.[1]  The fact that we are willing to accept in this statement the attribution of intentionality to the programmer is significant because it suggests that the boundary between intentionality and intensionality (with an s) may shift about depending on the context of the polemic.  This is a puzzling thought.  Shouldn’t the intentionality / intensionality distinction be essential?  It might be, for example, that Oedipus the programmer writes an algorithm to determine, based on some kind of survey data, which women are mothers and of whom.  The (incorrect) program he writes looks like the following:

For each woman w do
   If w is not married
      Then w is not a mother
      If w has children c
         Then w is the mother of c
         w is not a mother
End do

It’s not that Oedipus thinks that an unmarried woman with children is not a mother; he just writes the program incorrectly.  So, the extension of the world-at-large’s intensional concept of MOTHER-OF[2] differs from the extension of Oedipus’s intensional concept of MOTHER-OF, which differs from the extension of the intensional concept MOTHER-OF that his program implements.  This just goes to show that the wise child knows his own mother and that one person’s extension may be another’s intension.
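One way to render the divergence concretely (a minimal Python sketch; the data and predicate names are hypothetical, not drawn from any actual survey) is to write both the buggy predicate and the intended one and compare their extensions:

```python
def is_mother_per_program(woman, children_of):
    """Oedipus's buggy classifier: the marital-status test short-circuits,
    so unmarried women are never classified as mothers."""
    if not woman["married"]:
        return False  # the bug: unmarried women with children fall through here
    return len(children_of.get(woman["name"], [])) > 0

def is_mother_intended(woman, children_of):
    """The world-at-large's intension: having children suffices."""
    return len(children_of.get(woman["name"], [])) > 0

jocasta = {"name": "Jocasta", "married": False}  # widowed, hence unmarried
children = {"Jocasta": ["Oedipus"]}

print(is_mother_per_program(jocasta, children))  # False
print(is_mother_intended(jocasta, children))     # True
```

The two predicates agree everywhere except on the cases the bug mishandles, which is exactly the sense in which the program implements a different intensional concept of MOTHER-OF than the one Oedipus had in mind.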

‘WATER is H2O’ and ‘it is probable that WATER is H2O’

This is an epistemological problem. Epistemological WATER is epistemological H2O only insofar as platonic WATER and platonic H2O (if such there be) have interacted in the context of the history of the universe that includes the evolution of human beings capable of framing the concepts and doing the science necessary to connect the two.  But the problem is the same as the one Fodor 2008 raises in relation to evolution and the essential difference between selection and selection for.  Muddy water and water containing other impurities aside, H2O arguably isn’t a natural kind, since there are three naturally occurring isotopes of hydrogen and more than three naturally occurring isotopes of oxygen and all of those can be in distinct quantum states that can, given appropriate laboratory equipment, be physically distinguished from one another.

As Fodor 2008 observes in a slightly different context, “what’s selected underdetermines what’s selected for because actual outcomes always underdetermine intentions.”  (p.6)  This is as true when doing science as it is when doing evolution: what’s observed underdetermines what happened because actual observations always underdetermine total postdiction of experimental conditions.  You can refine a bit, but you can’t pin down, especially when you try to pin down things so precisely that you are in the realm of Heisenberg uncertainty and quantum mechanical indeterminacy.  So precision as we commonly understand it is a platonic ideal without a real world correlate and, more to the point, an intensional process that doesn’t have an extension.

Fodor 2008 further observes (p.9) that “who wins a t1 versus t2 competition is massively context sensitive.”  Ditto, whether WATER is H2O or XYZ or both or neither.

===================== Notes =====================

[1]  This is the nature of many a program bug.  The programmatic identification of content from transduced data (the type that the code assigns to those data) may not accurately track the programmer’s intended identification of that content even if the transduction is accurate and the transduced data are sufficient to make the determination.  If the programmer errs in writing the type determination code, the type determination the program makes will err (from the programmer’s standpoint), but no inconsistency will be detectable within the program.

[2] Which includes that Jocasta is the mother of Oedipus.

081225 – Why disjunctions can figure in laws?

December 25th, 2008


Why Disjunctions Can Figure in Laws?

Loewer 2007a[1] argues for a Non-Reductive Physicalism (NRP) as contradistinguished from plain old Reductive Physicalism (RP).  This is something of a family quarrel to begin with because both sides seem to agree that dualism is out and mentation supervenes on a purely physical substrate.

In particular, Loewer considers and dismisses a “line of thought that threatens to show that NRP is unstable” and thus not a coherent alternative to RP.

Suppose that M is a mental property and occurs in some law, say M → R (the law may be a ceteris paribus law) and so is a G-property.  Suppose that physicalism is true.  Won’t there be some … physical property Q constructed out of physical genuine properties—i.e. a disjunction of physical properties or configurations of physical properties—that is coextensive with M in all physically possible worlds?  But then won’t it be the case that Q → R is also a law?  If so, it follows that Q is a G-property since it figures in a law.  If this is correct, then NRP comes very close to collapsing into RP since either M = Q or M* = Q where M* is the property M restricted to the class of physically possible worlds.  In the first case RP holds; in the second case it is close enough to make the difference between RP and NRP look awfully trivial.

Loewer offers two counterarguments.  The first is one that he dismisses out of hand because, he says, it looks “a lot like ‘declaring victory and withdrawing’”:

If any construct out of physical properties that is coextensive (or coextensive in every physically possible world) with a G-property counts as a P-property then indeed NRP and RP come to much the same.

The problem he says is that

considerations involving functionalism and externalism show that Q will have an enormously complex characterization in terms of physics and plausibly has no characterization in terms of any of the special sciences.

In effect, Loewer invokes Occam’s razor, which says: simpler is better; don’t complicate things unnecessarily.  In so doing, Loewer is following Fodor’s argument that complex (and sometimes potentially unbounded) disjunctions of physical properties are not natural kinds.  As Loewer summarizes Fodor, the problem is that the disjunctive properties at issue need not be kinds, and

disjunctions of physically heterogeneous properties are not kinds of physics.  [Fodor] seems to mean by this that the various [properties] can be realized by various configurations of physical entities that are made from different materials.

On the other hand, although the disjunction of the realizers of F may be physically heterogeneous (and so not a kind of physics) they may be psychologically homogenous so that F is a kind of psychology. If F is a functional natural kind of psychology its instances are psychologically homogeneous since they share the same psychological role.

Although Fodor doesn’t say this he might add that psychological properties and laws may obtain even in worlds whose fundamental properties and laws are very different from those of the actual world. In these worlds psychological properties are realized by alien fundamental properties and psychological laws by alien fundamental laws.[2]

Yates 2005[3] analyzes Fodor’s cavil as (I think properly) a question of “gerrymanderedness rather than disjunctiveness or heterogeneity.” (p. 218, original italics).  He proposes that we grant Fodor that gerrymandered disjunctions are not suitable for framing laws.

The crucial point to note now is that disjunctions of the realizers of functional kinds are not gerrymandered.  Why?  Because in order to count as realizers of a given functional property, all the disjuncts must play the causal role that defines it.  This is where Papineau’s [1985] argument comes in.  If special science properties are multiply realizable (and so irreducible), then their realizers must be heterogeneous.  But in that case, something has to explain how all the non-identical realizer properties at, say, the physical level, share the causal power constitutive of the functional properties at some special science level, say biology.  (p. 219)

The problem of evolutionary selection arises.

It would be miraculous if all the different realizer properties play the same causal roles by coincidence.  Whence a dilemma: either there is an explanation of the otherwise miraculous coincidence, or special science properties are not multiply realizable after all. (p.219)

Is this really an evolutionary problem?  I’m not sure I understand Yates’s argument here.  He talks about ‘projectibility’ just as Loewer does, and I don’t know what that is.  It may be that special science properties are indeed multiply realizable, but that there is something special about whatever realization happened to develop first.  The algorithm doesn’t care about how it is realized (implemented) just so long as an implementation of the basis functions is available.

Now, I don’t care whether RP or NRP is the right name to blazon on the banner of Truth, but I do care about making sense of things. Rather than talk about special sciences, let’s talk about algorithms and their implementations.

==================== Notes ===================

[1] Loewer, Barry.  2007a.  “Mental Causation, or Something Near Enough.” in Philosophy of Mind.

[2] Loewer, Barry.  2007b.  “Why is There Anything Except Physics?” To appear in Synthese Special Issue on Fodor (2007).

[3] Yates, David.  2005.  The Causal Argument for Physicalism.  King’s College London.  Doctoral Dissertation.


December 4th, 2008


Suppose that what it means to be a particular individual in possession of a particular individual mind is to be a particular individual physical device that implements a particular individual algorithm.

What an Algorithm Is
Searle 2004 (p.67) describes an algorithm as “a method for solving a problem by going through a precise series of steps.  The steps must be finite in number, and if carried out correctly, they guarantee a solution to the problem.”

By talking about solving a problem, the description takes for granted that in order to create or recognize an algorithm we must have 1) a problem, 2) a way to describe it, and 3) a way to describe its solution.  Within this formulation, an algorithm can only have so-called derived intentionality, viz. before an algorithm can come into existence somebody has to have a purpose that determines what the algorithm is to be about.  As Searle points out, a calculator (an instantiation of a calculation algorithm) doesn’t do calculations.  What it does is change states in a deterministic way in response to physical movements of some of its components (key presses) and alter its appearance (displaying results) as a side effect of its internal state changes.  A calculator is usable as a calculator by human beings only because human beings assign calculation-related meanings to key presses and display patterns.  The meaning does not reside in the calculator; it resides in the user.  Following this line of thought, Searle concludes that neither syntax nor semantics is present in an algorithm.  This, he says, is because syntax and semantics are constructs present only in conscious human beings.

These conclusions are warranted under the given definition of what constitutes an algorithm.  However, I will propose an alternative definition that I will argue allows for something we can still reasonably call an algorithm to have syntax and, in its instantiations, semantics without having to provide either from an external source.

I propose to consider algorithms as being about the implementation (realization) of behaviors in time.  In a sense, then, an algorithm is an abstraction that specifies a particular deterministic computer program.  More formally, an algorithm is a finite series of instructions[1] (steps) that comprise a behavior (a generalization of the idea of performing a task).  Algorithms are constructed on the basis of a set of primitive functions (the basis functions) that, taken together, specify the operation of an abstract (virtual) machine (computer).  It is not possible to specify an algorithm without specifying the set of primitive functions in terms of which the algorithm is expressed, although informally the set of primitives is simply taken to contain whatever functions the specification of a particular algorithm requires.  The abstract machine implicit in the set of primitive functions can be described in terms of its computational power (the class of calculations it is capable of).  The two most common (and for our purposes most relevant) levels of computational power are 1) computations that are possible for finite state machines and 2) computations that are possible for (unrestricted) Turing machines.  The former is formally equivalent to (has identical computational power to) the subset of Turing machines having only a finite tape.

It is, in general, tedious in the extreme to express algorithms in terms of Turing machine functions.  And it is also tedious in many cases to make explicit the set of primitive functions that provide the basis for a particular algorithm or set of algorithms.  For that reason (and, one assumes, not incidentally because people thought up the list-of-instructions concept long before Turing thought up the machine formalism that bears his name) the specification of most algorithms leaves the specification of the underlying set of primitive functions implicit.  That works pretty well and we all learn (maybe now I have to say used to learn) addition, subtraction, multiplication, division, and square root algorithms in elementary school arithmetic, without belaboring or worrying overmuch about the specifics of the underlying primitive functions, e.g., the fact that the set of functions on which the addition algorithm depends includes a function that enables one to write a sort of superscript number above and to the left of a specified decimal position in the topmost number of a column of numbers to be added (the “carry”) and a function that enables one to read it back to oneself at a later time, and so on.
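The schoolbook addition algorithm can be sketched over just such implicit primitives (read a digit, write a result digit, note a carry above the next column).  This is an illustrative Python rendering for non-negative integers, not a claim about how anyone mentally executes the procedure:

```python
def column_add(a, b):
    """Schoolbook column addition: work right to left, one column at a
    time, using a 'carry' primitive to pass overflow to the next column."""
    da = [int(d) for d in str(a)][::-1]  # digits, least significant first
    db = [int(d) for d in str(b)][::-1]
    digits, carry = [], 0
    for i in range(max(len(da), len(db))):
        s = (da[i] if i < len(da) else 0) + (db[i] if i < len(db) else 0) + carry
        digits.append(s % 10)  # write this column's result digit
        carry = s // 10        # write the superscript carry mark
    if carry:
        digits.append(carry)
    return int("".join(str(d) for d in reversed(digits)))

print(column_add(478, 356))  # 834
```

Note that the carry variable plays exactly the role of the superscript mark described above: a primitive for writing something down now and reading it back one column later.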

Special Considerations re: Primitive Functions
Without attempting a rigorous exposition, we may take a mathematical function to be a (deterministic) relation that uniquely associates each element of its domain (the set of possible input values) with an element of its range (the set of possible output values), in other words, an input-output specification.  By definition, mathematical functions do not have side effects.  This has all sorts of good consequences for proving theorems and doing mathematical logic.  However, for the specification of algorithms useful for talking about how the brain works, we need the computer science definition of a function, which generalizes the definition of function to include processes that have side effects.
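The contrast can be put in a few lines of Python (an illustrative sketch): both functions below compute the same input-output mapping, but the second also touches state outside that mapping.

```python
import time

def square(x):
    """Mathematical function: the output depends only on the input."""
    return x * x

audit_log = []

def logged_square(x):
    """Computer-science 'function': same input-output mapping, but with
    side effects -- it consults the clock and mutates an external log."""
    audit_log.append((time.time(), x))
    return x * x

print(square(3))         # 9
print(logged_square(3))  # 9, but audit_log has grown
print(len(audit_log))    # 1
```

From the standpoint of the input-output specification the two are identical; the difference shows up only in the environment left behind.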

Side Effects and Referential Opacity

A side effect is an event or process affected by and/or affecting the state of the environment external to the (implementation of the) function and occurring or commencing conceptually within the space or interval between the arrival of its input and the corresponding return of its output.  The most common use of computer functions with side effects is to obtain inputs from or deliver outputs to the external environment.[2]

Function side effects are problematical for mathematical theories of computation, because they introduce the unconstrained external world into an otherwise nicely circumscribed theoretical construct.  The formal response of computer science has been to expand the boundaries of the theoretical construct to include the possibility of a limited set of side effects explicitly in the domain and range of the function.  The drawback this creates is that the more the domain and range include of the external world, the more difficult it is to formally prove program correctness.  Nonetheless, functions with side effects are an integral part of the standard description of a Turing machine: specifically, the operations (functions) that read from and write to the machine’s (infinitely extensible) tape.[3]
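A minimal sketch of a Turing-style tape (illustrative Python, not any standard formalization) makes the point: the read and write operations are precisely functions whose side effects reach shared state outside any single step's input-output specification.

```python
from collections import defaultdict

class Tape:
    """An 'infinitely extensible' tape: cells default to a blank symbol,
    and read/write/move act on state shared across all machine steps."""
    BLANK = "_"

    def __init__(self):
        self.cells = defaultdict(lambda: Tape.BLANK)
        self.head = 0

    def read(self):
        return self.cells[self.head]  # depends on state no argument supplies

    def write(self, symbol):
        self.cells[self.head] = symbol  # side effect: mutates the tape

    def move(self, delta):
        self.head += delta  # side effect: moves the head

tape = Tape()
tape.write("1")
tape.move(1)
print(tape.read())  # "_" (blank: nothing written here yet)
tape.move(-1)
print(tape.read())  # "1"
```

Nothing in the signature of read() mentions the head position or the tape contents, which is exactly why such operations fall outside the mathematical notion of function.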

At the most fundamental level, the issues raised by side-effects and referential opacity relate to the (theoretically, at least) arbitrary boundary between a system selected for analysis and the external environment in which it is embedded.[4]  Because a theory of the mind must I think be about the brain in the context of an (external) environment that is affected by and affects brains, we need to be able to draw a boundary between an entity in possession of a mind and the environment in which it operates.[5]  We thus need to allow for side effects in a theory of the mind, simply in recognition of the fact that not everything that happens in a mind originates in processes within the mind.  There is an outside world, and it matters.[6]

Side effects show up in the realization (instantiation, physical implementation) of an algorithm in two ways.  1) The set of basis functions for an algorithmic system may include functions that explicitly query and manipulate the physical environment.  2) The physical processes that implement the algorithm have real physical side effects that are above and beyond (outside of) the abstract description of the algorithm—outside, even, the abstractly specified side effects that may be required in the basis functions.  For example, a computer needs a power source that meets certain specifications and will operate only under a specified range of environmental conditions.

When analyzing or describing the behavior of a particular physical realization of an algorithm, we generally concentrate on side effects of the first kind and take for granted that the physical side effects of the second kind—those that ground or enable the basis functions themselves—are in place.

Significance of Side Effects
[Didn’t I say what’s in this paragraph earlier?]
The introduction of functions with open-ended side-effects has the effect of vastly complicating (often to the point of practical impossibility) any complete formal analysis of algorithmic behavior.  This, because a formal analysis can only take place within a formal description of a closed system (one in which all events occur deterministically through fully specified rules and circumstances).  To the extent that side effects bring aspects of the external world into a system, a formal description of the system must comprehend a formal description of at least those aspects of the external world.  In effect, the less constrained the range of allowable side effects, the broader must be the scope of the formal description of the system.

Sequencing and Counterfactuals
Philosophers appeal to the idea of counterfactuals in order to deal with the fact that at the macro physical (as opposed to the quantum physical) level events only happen one way, although our intuitions tell us that if things had been sufficiently different (a counterfactual condition) events would have turned out differently.  In planning for the future, where there are no facts yet, we just use conditionals (e.g., If the creek don’t rise and the sky don’t fall, I’ll visit you next Thursday).  Computer programming is a quintessential case of formal planning for the future.  The programmer’s task is to devise an algorithm that will accomplish whatever it is supposed to accomplish under constrained, but as yet undetermined conditions.

Sequencing and counterfactuals are at the heart of causal descriptions.  Sequencing and conditionals are at the heart of algorithmic descriptions.  Every entry in the state table of a Turing machine consists in a conditional (e.g., if the symbol under the read/write head on the tape is “1”), an action (e.g., write “0” on the tape and move the read/write head to the left), and a sequencing directive (e.g., go to state 275).  In the abstract, sequencing begs the question of causality.  If B must follow A, does that mean that A must cause B or is it sufficient that A must be correlated with B?  Does it matter?  In an algorithmic specification, the answer is no.  In fact, it is not even a meaningful question because sequencing is primitive (and thus opaque).  So causality is not an issue in an algorithmic specification.

It feels like we have two choices with respect to causality.  We can stumble ahead in despite of all that Hume warned us about, and we will fall into the mire he described.  Alternatively, we can take the view that informs physics, viz. systems evolve over time according to empirically reliable formulas.  On this view, the attribution of causality requires drawing arbitrary physical and temporal boundaries in the midst of an evolving process to interpret as objects of interest at particular points in time.  We then examine the state equations of the system in a small neighborhood near each selected point in time and we assign causality to the infinitesimally prior state.

In effect, relativistic limits and the flow of time delimit the causes of everything.  If there is anything that counts as Hume’s necessary connexion, it is to be found in the empirically based physical theory that the state of the universe everywhere in a light-cone of radius delta-t centered on the point P at time T is what determines what will happen at P at time T plus delta-t.  The instant that one focuses attention on a proper subset of that light cone as a “cause”, necessary connexion becomes subject to ceteris paribus conditions.

If we want to say that some proper subset of the light cone centered on P at time T caused what happened at P at time T plus delta-t, we must recognize that this is a counterfactual that is falsifiable.  Such an assertion requires a ceteris paribus qualification if we are to accept it as a true statement.

====================== Notes ======================

[1] The insistence here is on the finiteness of the series of instructions, not the finiteness of the time necessary to complete the task.  Some algorithms provably finish in finite time, e.g., the well-known algorithms taught in elementary school for adding two finite integers together.  Other algorithms may continue indefinitely, e.g., the square root algorithm, which, in some cases—as when applied to the integer 2—will continue to produce additional digits indefinitely.  Of course, constraints of finite physical resources and finite time will prevent any physical instantiation of an algorithm from continuing forever.
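That indefinitely continuing behavior is easy to exhibit as a Python generator that yields the decimal digits of a square root forever (this implementation uses integer square roots of scaled values rather than the elementary-school long-division method, but the digits produced are the same):

```python
from math import isqrt

def sqrt_digits(n):
    # Yield the decimal digits of the square root of n, one at a
    # time, never terminating on its own.
    prev = isqrt(n)
    yield prev                     # integer part (assumes n < 100)
    k = 0
    while True:
        k += 1
        cur = isqrt(n * 100 ** k)  # sqrt(n) scaled out to k more digits
        yield cur - prev * 10      # the next digit
        prev = cur
```

For n = 2 this produces 1, 4, 1, 4, 2, 1, 3, 5, 6, … indefinitely; only finite time and memory force any particular run to stop.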

[2] Typical examples:


·         A function whose outputs are deterministic, but are not completely determined by its inputs, e.g., a function whose side effect is to provide the current date and time as its output.  Such a function is said to be referentially opaque.  

·         A function that requests data from a user.  The data returned by such a function are determined by whatever the user enters; but such a function strains the mathematical definition of a function because an equivalent input-output specification cannot be pre‑specified—the input to the function is a request for data, and the output is whatever the user enters.  At best, one can pre-specify the range of the function by limiting what the user is allowed to enter. 

·         A function that delivers its input (the value(s) provided to the function) to some kind of display.  Such a function affects the state of the external environment and may ultimately affect the internal environment of the program if and when something in the external environment reacts to that change of state in a way detectable by the program.  Nonetheless, the output of such a function (the value(s) it returns) is (are) problematical.  Usually such a function simply returns a value indicating success (or failure) insofar as that can be determined by the computer; but strictly (mathematically) speaking the result of the function is the sum total of the effects on the computer of changes in its environment that occur as a result of the information it displays.
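The first of these examples (a clock-querying function) can be sketched directly (a minimal illustration; the names are mine):

```python
import time

def double(x):
    # Referentially transparent: the output is completely
    # determined by the input.
    return 2 * x

def double_with_timestamp(x):
    # Referentially opaque: the output depends on when the call
    # happens, not just on x -- a side effect (querying the system
    # clock) intervenes between input and output.
    return 2 * x, time.time()
```

Two calls to `double(3)` are interchangeable; two calls to `double_with_timestamp(3)` generally are not.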

[3] Strictly speaking, a function that simply stores its input value for later retrieval and a complementary function that retrieves and provides as its output a value thus stored are both functions with side effects.  Writing acts upon the environment.  Reading queries the state of the environment.  By convention, the operations of read and write functions are specified to take place deterministically within the internal environment of the abstract machine and such side effects are simple enough not to be considered problematical.

[4] Arguably, the universe and everything in it is just one big system; and any subdivision of that system is ultimately arbitrary.  That said, Algorithmic Information Theory, briefly described a little later on, provides a way of assessing the relative complexity of one subdivision vis-à-vis another.

[5] As far as I can tell, not being myself a substance dualist, there can’t be any real difference between what goes on in the brain and what goes on in the mind.  If I think of one, I’ll go back and change my terminology accordingly.

[6] Notwithstanding the skeptical hypothesis that you are really just a brain in a vat and all of your experience is illusory.  For an overview of the literature on brains in vats, see Brueckner 2004.

Computations Need Not Respect Content

April 2nd, 2008

(Fodor 1998, Concepts: Where Cognitive Science Went Wrong, p. 10) “In a nutshell: token mental representations are symbols.  Tokens of symbols are physical objects with semantic properties.”  

(p. 11) “[I]f computation is just causation that preserves semantic values, then the thesis that thought is computation requires of mental representations only that they have semantic values and causal powers that preserve them….  [F]ollowing Turing, I’ve introduced the notion of computation by reference to such semantic notions as content and representation; a computation is some kind of content-respecting causal relation among symbols.  However, this order of explication is OK only if the notion of a symbol doesn’t itself presuppose the notion of a computation.  In particular, it’s OK only if you don’t need the notion of a computation to explain what it is for something to have semantic properties.” 

Fodor’s derivation from semantic notions (content and representation) to computation seems to be backwards.  It’s semantics that is to be derived and it’s deterministic physical processes (computation, but not using the weird definition Fodor proposes) from which semantics is to be derived.  

My notion of a symbol presupposes the notion of a computation, so both my terminology and the structure of my argument diverge from Fodor’s.  A symbol is a syntactic structure that controls a computation.  As far as I can see, syntax and syntactically controlled computation is all there is to the brain; i.e., mental representations and processes are purely syntactic.  In effect, a symbol is an algorithm whose execution has semantic value via the instantiation of the basis functions over which the algorithm is defined.  That is, the implementation of the basis functions and the implementation of the process that governs the execution of an algorithm represented within the implementation are the elements that give semantic value to an algorithm.  Semantics arises from the physical characteristics of the instantiation of the syntactical processor (the brain).  However abstract the algorithmic processes that describe the functioning of the mind, the semantics of those processes absolutely depend on the physical characteristics of the device (in the instant case, the brain) that instantiates those processes.  In short, syntax is semantics when it comes to the brain. 

The usual definition of a computation is in terms of Turing Machines.  A Turing Machine has three required elements: 1) a finite alphabet of atomic symbols; 2) a sequentially ordered mutable storage medium (a tape from which symbols of the alphabet may be read and to which they may be written); 3) a set of state transition laws (a program) governing the operation of the machine.  Symbols, in this formulation, have no semantics.  Any meanings associated with individual symbols or arrangements of groups of symbols are imposed from without.  Operation of a Turing Machine proceeds absolutely without reference to any such meanings.
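Those three elements fit in a few lines of Python (the program encoding and the bit-flipping example are my own illustration, not a canonical formulation):

```python
def run_turing_machine(program, tape, state, max_steps=10_000):
    # program maps (state, symbol) -> (symbol_to_write, move, next_state).
    # The machine is driven entirely by identity tests on states and
    # symbols; no meaning attaches to either.
    cells = dict(enumerate(tape))  # sparse, indefinitely extensible tape
    head = 0
    for _ in range(max_steps):
        if state == 'halt':
            break
        symbol = cells.get(head, '_')        # '_' is the blank symbol
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += -1 if move == 'L' else 1
    return ''.join(cells[i] for i in sorted(cells)).strip('_')

# A program that flips every bit, halting at the first blank:
FLIP = {
    ('scan', '0'): ('1', 'R', 'scan'),
    ('scan', '1'): ('0', 'R', 'scan'),
    ('scan', '_'): ('_', 'R', 'halt'),
}
```

`run_turing_machine(FLIP, '1011', 'scan')` returns `'0100'`; nothing in the machinery knows or cares that the symbols might be read as binary numerals.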

When Fodor proposes that for the purposes of his definition of computation it must be the case that “the notion of a symbol does not presuppose the notion of a computation”, I hardly know what to make of it.  In order for an object to serve as a symbol in the sense required by a Turing Machine, such an object must at a minimum be a member of a specifiable finite set (the alphabet) and susceptible to reading, writing, and identity testing (identity testing is required by the state transition laws).  Thus, the class of objects that can serve as symbols in a computation is not unrestricted, and it is incumbent on Fodor’s theory to assert that the objects he proposes to use as symbols satisfy these conditions. 

The problem is that the standard model of computation is literally blind to content, so it is not sensical to assert that computation is “some kind of content-respecting causal relation among symbols” (p. 11).  Fodor says his notion follows Turing, but I can’t figure out what of Turing’s he thinks he is following.  

A computation is the execution of an algorithm.  The effect of the execution of an algorithm is determined by a causal process that is sensitive only to identity and ordering.  In other words, the execution of an algorithm is a syntactic process.  In my book, which this is, computation tout court does not in general respect content.  To assert that computation respects content presupposes a definition of content and a definition of what it would mean to respect it.  Moreover, the phrase computations that respect content (preserve truth conditions, as Fodor would have it) picks out an extraordinarily highly constrained subset of the class of computations.  Indeed, there is no good reason I can think of to believe that the class is non-empty.  Certainly, Fodor is on the hook to provide an argument as to why he thinks such computations exist.  I’ve been taking Fodor to task here, but he’s not the only one who’s been seduced by this idea.  John Searle seems to have the same peculiar notion that computation tout court preserves truth value.
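A toy example makes the point concrete (purely illustrative, not anyone’s proposed machinery): the process below rewrites token sequences by identity alone, and nothing in its operation registers whether the result is true, false, or nonsense.

```python
def substitute(tokens, rules):
    # A purely syntactic process: each token is kept or replaced by
    # an identity test against the rule table; "content" plays no role.
    return [rules.get(t, t) for t in tokens]
```

`substitute(['all', 'men', 'are', 'mortal'], {'men': 'bachelors'})` runs exactly the same way whether or not its output “preserves truth”.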

I am not arguing that people can’t think about things and come up with pretty good results—that’s what we do—but we aren’t 100% accurate, and AI attempts to write programs that are 100% accurate have not succeeded either, so the notion of a computation that respects content is blurry indeed.

What bothers me about the subsumption of truth preservation under the rubric of computation is that I think it elides an important issue, viz. what it means to preserve truth or respect content.  I am willing to allow that the brain is phylogenetically structured to facilitate the ontogenetic development of specific algorithms that pretty well track certain kinds of characteristics of the physical environment.  To a first approximation, one might, as Fodor does, say that those algorithms respect content or that they preserve truth conditions, but that still begs the question.  The problem is that whatever the brain does (and therefore the mind does), it does it algorithmically.  Preserve Truth is not an algorithm, nor is Respect Content.  To the extent that a process or computation is deterministic, the process is constrained to “respect content” in the sense that symbol identity, not content, is the only input to the process and thus the only thing that can determine what is produced.  I still don’t see describing that as somehow “preserving truth” even with the most trivial interpretation I can possibly put on the phrase.

Married Bachelors: How Compositionality Doesn’t Work

March 31st, 2008

Jerry Fodor (1998, Concepts: Where Cognitive Science Went Wrong) does a thorough job of summarizing convincingly (to me, anyway) the arguments against the theory that concepts are constituted by definitions; so you really don’t need me to tell you that KEEP doesn’t really mean CAUSE A STATE THAT ENDURES OVER TIME or that BACHELOR doesn’t really mean UNMARRIED MAN, right?  Not convinced?  Here’s what I found over the course of a morning’s empirical research:

Put ‘elephant bachelor’ into Google and you get things like:

Bulls generally live in small bachelor groups consisting of an old bull

Males live alone or in bachelor herds.

The males will sometimes come together in bachelor herds, but these are

The adult males (bulls) stay in bachelor herds or live alone.

When they are mature, male elephants leave the herd to join bachelor herds. 

Put in ‘deer bachelor’ and you get: 

During bowhunting season (late August early September) we see mule deer in bachelor groups, as many as 15 in a bunch, feeding on bloomed canola fields

Mule deer bachelor bucks are beginning to show up in preparation for the rut.

But during October, when they’re interested mostly in proving their manhood, rattling can be deadly on bachelor bucks. 

Put in ‘“bachelor wolves”’: 

surrounded in the snow by a starving and confused pack of bachelor wolves.

You can come up with a lot of names for these youngsters: rogue males, bachelor wolves, outcasts. Some call them the Lost Boys. 

Similarly, ‘bachelor’ combined with ‘walrus’, ‘whale’, ‘dolphin’, ‘penguin’, ‘swan’ (for the birds, it helps to add the term ‘nest’ to winnow the returns).

‘ethology bachelor’ yields: 

Bachelor herds refer to gatherings of (usually) juvenile male animals who are still sexually immature, or of ‘harem’-forming animals who have been thrown out of their parent families but not yet formed a new family group. Examples include seals, lions, and horses. Bachelor herds are thought to provide useful protection for social animals against more established herd competition or aggressive dominant males. Males in bachelor herds are sometimes closely related to each other. 

So bachelors don’t need to be men.  One might try to fix this by saying a BACHELOR is an UNMARRIED MALE or even an UNMARRIED ADULT MALE (to rule out babies) instead of an UNMARRIED MAN, but I struggle with the idea of UNMARRIED whales, penguins, and elephants.  Would that also cover animals that have mates, but are living together without the benefit of clergy?  Don’t worry about this too much because even MALE won’t do the trick. 

‘“bachelor females”’ returns: 

dormitory facilities, and the 735 or so bachelor females residing in defense housing on Civilian Hill were transferred to the renovated dormitories.

 I feel sorry for you. And yes, this was a half-fucked attempt to gain the affection of all the bachelor females in the world. 

‘“bachelor women”’ returns: 

Today, double standards still prevail in many societies: bachelor men are envied, bachelor women are pitied.

Maggie is a composite of a number of independent, “bachelor” women who influenced my formative years.

Did you know, for example, that half–exactly 50 percent–of the 1000 bachelor women surveyed say they actively are engaged at this very moment in their

independent bachelor women that is now taking place is a permanent increase. It is probably being reinforced by a considerable number of  [H. G. Wells 1916, What is Coming? A Forecast of Things after the War. Chapter 8.] 

Of particular note is the last example, specifically the fact that it dates back to 1916, before most, if not all, discussions of BACHELOR meaning UNMARRIED MAN. 

The phrase ‘“married bachelor”’ returns lots of philosophical (and theological!) treatises on whether it is meaningless, incoherent, nonsensical, or just plain impossible (for humans or for God); but, it also returns occurrences of the phrase in the wild, where it exists and is, thus, clearly possible: 

Nevertheless, a true married bachelor, we think, would have viewed his fate philosophically. “Well, anyway,” he’d say with a touch of pride,

Ever wonder what a married bachelor does on Friday Night (that is Wednesday in Saudi)? HE GOES TO BED EARLY (and dreams about his wife).

Most Chinese men in Canada before the war were denied a conjugal family life and were forced to live in a predominantly married-bachelor society.

It was one of the golden principles in services that there should be a decent interaction with fair sex on all social occasions and going “stags” (married bachelor) was looked down upon as something socially derelict or “not done”.

Peterson’s days as a married bachelor. SAN QUENTIN – According to recent reports from San Quentin, Scott Peterson is adjusting nicely to prison life.

Walter Matthau is the “dirty married bachelor“, dentist Julian who lies to his girlfriend, Toni (Goldie Hawn)by pretending that he is married.

…that her love for camping was so dominant; he thought he’d better join her and they would start their own camp or else he would be a married bachelor.

Some bad choices: sisters dissin’ sisters; no-money no-honey approach; loving the married bachelor ; or using your finance to maintain his romance.

It was just four of us – three singles and a married bachelor. As I. tasted the deep fried and cooked egg plants, dhal curry and deep fried papadams,

India is the uncomplaining sweetheart whom this married bachelor flirts with and leaves behind. Every time. And she knows it all and yet smiles

There is no object more deserving of pity than the married bachelor. Of such was Captain Nichols. I met his wife. She was a woman of twenty-eight,    [Somerset Maugham 1919, The Moon and Sixpence. Chapter 46.]

Two of these are of particular note:  The final example dates back to 1919; and the penultimate example uses the phrase metaphorically (or more metaphorically, if you prefer).

As a child, I’m sure I would have found all of these examples quite puzzling and would have asked, “If ‘bachelor’ means ‘unmarried man,’ then how can there be a ‘married bachelor?’”

The issue here is compositionality.  How do we understand the meaning of phrases like ‘the brown cow’ or ‘the married bachelor’?  It can’t be the way Fodor (1998, p. 99) explains it.  Here’s what Fodor says, except I have substituted throughout ‘married’ for ‘brown’ and ‘bachelor’ for ‘cow’.  You will note that what makes reasonable sense for ‘the brown cow’ is incoherent for ‘the married bachelor’. 

Compositionality argues that ‘the married bachelor’ picks out a certain bachelor; viz. the married one.  It’s because ‘married’ means married that it’s the married bachelor that ‘the married bachelor’ picks out.  If English didn’t let you use ‘married’ context-independently to mean married and ‘bachelor’ context-independently to mean bachelor, it couldn’t let you use ‘the married bachelor’ to specify a married bachelor without naming it.

It’s clear that something distinguishes the uses documented above from the more usual UNMARRIED MAN (more or less) uses.  I was tempted to say that the more usual uses are literal as opposed to figurative (metaphorical?).  Yes, but as has been pointed out, while it may be literally correct to say that the Pope is a bachelor, it feels like an incorrect usage.

Well, it just goes on and on.  At this point, of course, apoplectic sputtering occurs to the effect that these are metaphorical uses and should be swept under the rug where all inconvenient counterexamples are kept and need never be dealt with.  But speaking of KEEP, as Fodor (pp. 49-56) points out, Jackendoff’s program is (though not in so many words) to accommodate things like this by proliferating definitions of KEEP.  Fodor characterizes this as just so much more messy than thinking that KEEP just means keep.  I agree.

For more about married bachelors, see also http://plato.stanford.edu/entries/analytic-synthetic/