Archive for the ‘UNEDITED NEW MATERIAL’ Category

121029 – Function and Teleology

Monday, October 29th, 2012

121029
Tyler Burge talks about representational function as if it is something that the human perceptual system has.  He says that a representational function succeeds if it represents veridically.  But that really doesn’t properly characterize the human perceptual system.  I end up feeling that the analysis is running the wrong way.

What we are concerned about is not what the perceptual system is supposed to do, that is, what we conclude that it should do (although that may be an interesting thing to speculate on), but rather what the perceptual system actually does and how it does it.  This is the difference between, on the one hand, positing an algorithm or a set of requirements and then trying to find evidence for them and, on the other, trying to understand what actually happens.

Failure to represent veridically is perhaps causally related to behavior that is suboptimal from the standpoint of an observer with access to the veridical facts, but an organism behaves based on what it has available, not what it would be nicer to have available.  It is already granted that proximal inputs underspecify distal reality.  The point is to make the most of what one gets.

120724

Tuesday, July 24th, 2012

120724

And what would be the standard of veridicality for a perception of something as red?  Red, as we have come to understand it, is complicated.  Red has to do with the spectral characteristics of illuminants and their intensities, as well as surface reflectances over an area often larger than the area seen as red.  The best way I can think of to test the veridicality of my perception of something as red is to ask around to see if I can find a consensus.  Who knows?  It might be a green orange under peculiar conditions of illumination.  The other way is just to act as if the perception is veridical (actionable, reliable) until proven otherwise or until it no longer matters.

The point of intensionality (with the ‘s’) is that apparently evolution hasn’t come up with a way to infer much in depth about distal reality on the basis of woefully underdetermined proximal stimulation.  But opaque references are more actionable than no references.  It’s a wonder evolution has eventuated in as much as it has.

So, we have an unbewusster Schluss (unconscious inference) mechanism to get opaque specifications of what is out there, and on top of that we somehow acquired a separate mechanism of bewusster Schluss (conscious inference) to discover that Hesperus and Phosphorus are the same heavenly body and to believe experts who tell us so.

120722

Sunday, July 22nd, 2012

120722

The problem with ‘veridicality’ as a criterion for ‘successful’ perception is that veridicality is an ideal that has no counterpart in the real world.  I would prefer something along the lines of ‘actionable’ to replace ‘veridical’, the idea being that good enough is good enough, and it is unnecessary to set an unattainable standard against which to measure successful representation.

Veridicality is recognized as an idealized standard.  Fodor noted that water includes stuff that may be cloudy and the stuff that is in polluted lakes.  Scientists tell us that jade is a disjunction: it can be either of two minerals, jadeite or nephrite, with distinct chemical compositions.  In nature, bulk water, even H2O water, is a mixture of molecules formed of the three isotopes of hydrogen—protium (ordinary hydrogen), deuterium, and tritium—and the pure forms of all three isotopic kinds of H2O have different physical and biological characteristics, e.g., pure deuterium water (heavy water) freezes at a higher temperature than ordinary water and is toxic in quantity.

What would be the standard of veridicality for a perception of something as water?  Surely, one would like it to be that water is present; and that pushes matters onto the (middle level) concept WATER, but the semantics of WATER then cannot be that WATER is H2O tout court.  So, we have to abandon the idea that WATER is anything but water.

We can empirically examine stuff that we agree to be water (or jade), and scientists can study the stuff and explicate the discernible variations among things that we successfully perceive to be that stuff.  I don’t think this is intolerable.  It relieves us from having to posit a world filled with ideal exemplars that we have to conceptualize through a glass darkly.

Put another way, concepts and their formation are as much a product of evolution as is whatever ability there is to perceive stuff as of particulars of such concepts.  This is as it should be.  The organisms (us) we are interested in are the product of the interactions of organisms (our ancestors) with their environment.  That the outcome of billions of years of interaction is systems whose pitifully underdetermined proximal inputs provide them with generally actionable information about the external environment just goes to show that evolution—a really stupid process by just about any criterion I can think of—has remarkable consequences.

120701

Saturday, July 7th, 2012

120701

Can there be representation without representation as?  Perception without perception as?  Can there be perception without concepts?

What is going on when we see an artichoke and can’t remember what it is called?  How does the word ‘artichoke’ fit in with the perception of an artichoke as an ARTICHOKE?  Take carrots (please): if I know English and Spanish and I see a carrot, must I see it as either a CARROT or a ZANAHORIA if I am to see it at all?  (No seeing without concepts.)  What does it mean to say I see a carrot as such?  Is that just a transparent attempt to beg the question of which concept I see it as?  If a cat sees a carrot, it must see the carrot as something.  A CARROTCAT?  It can’t be a CARROT or a ZANAHORIA, although it is surely a carrot.  In Thailand I had for breakfast exotic fruits whose names I never knew, but which I recognized in terms at least of which ones I liked and which ones I didn’t care for.  So at first I saw them as BREAKFAST FRUITS OF UNKNOWN DESIRABILITY.  I’m willing to grant that as a concept.

What if I’m driving, listening to the radio, and thinking about buying an iPad?  I see and react to all sorts of driving-related things: cars, traffic signals, etc., but a lot of the things I see don’t appear to make an appearance in consciousness.  Do I have to say I saw them?  How do I distinguish terminologically between things that made it to (How shall I say?) first-class consciousness and things that were handled by second-class consciousness?  If I can’t say that I saw them, what must I say to indicate that at some level I took them into consideration because I stayed on the road in my lane and didn’t crash into anything?

090226 (originally 081224) – Is all computation epiphenomenal?

Thursday, February 26th, 2009

081224

Is all computation epiphenomenal?

[090226]

Is COMPUTATION a concept with no extension?  In other words, does computation always require an intensional context?  Maybe this is what Searle is getting at when he insists that computation is in the mind of the beholder.  It would seem that there are quite a few such concepts, e.g., METAPHYSICS, CAUSE, TRUTH, ONTOLOGY, EPISTEMOLOGY, FREEDOM, HAPPINESS, etc.  Is it the case that only concepts whose content is physical actually have an extension?  Is even that consolation ephemeral?  Does the unending cycle that is the external environment acting upon the internal environment acting upon the external environment acting upon the internal environment … ad infinitum necessarily entail only probabilistic (Bayesian) certainties?  Or does it entail only intensional certainties (whatever that may mean)?

Fodor 2008 (n.18) says that ‘it’s probable that …’ is extensional and unable to reconstruct intensionality in any form.  “An intensional context is one in which the substitution of coextensive expressions is not valid.”  (n.1)  But isn’t it the case that ‘it’s probable that …’ becomes intensional if ‘…’ is replaced by an intensional attribute, as, for example, if Oedipus were to say, “It’s probable that my mother dwells many leagues hence.”

Intensionality is about invariants and irrelevancies, about fixed and free parameters that map via a characteristic transduction process to and from an environmental state that is extensional content (where coextensive expressions are indistinguishable).  Intensionality develops in both evolutionary and epigenetic time.  It is really easy to get confused about what goes on here.  That seems to be what the idea of ‘rigid designators’ is about.

In the context of computer science, the programmer has intentions, but the program has only intensions (with an s).  Or at least that is the way things seem now.[1]  The fact that we are willing to accept in this statement the attribution of intentionality to the programmer is significant because it suggests that the boundary between intentionality and intensionality (with an s) may shift about depending on the context of the polemic.  This is a puzzling thought.  Shouldn’t the intentionality/intensionality distinction be essential?  It might be, for example, that Oedipus the programmer writes an algorithm to determine, based on some kind of survey data, which women are mothers and of whom.  The (incorrect) program he writes looks like the following:

For each woman w do
   If w is not married
      Then w is not a mother
   Else
      If w has children c
         Then w is the mother of c
      Else
         w is not a mother
End do

It’s not that Oedipus thinks that an unmarried woman with children is not a mother; he just writes the program incorrectly.  So, the extension of the world-at-large’s intensional concept of MOTHER-OF[2] differs from the extension of Oedipus’s intensional concept of MOTHER-OF, which differs from the extension of the intensional concept MOTHER-OF that his program implements.  This just goes to show that the wise child knows his own mother and that one person’s extension may be another’s intension.
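To make the divergence concrete, here is a minimal sketch in Python (my illustration, not part of the original example; the names and sample data are invented) of how the extension picked out by Oedipus’s buggy program can differ from the extension of the world-at-large’s MOTHER-OF:

# Hypothetical illustration: different extensions over the same individuals.
women = [
    {"name": "Jocasta", "married": False, "children": ["Oedipus"]},  # invented data
    {"name": "Merope", "married": True, "children": ["Oedipus (adopted)"]},
    {"name": "Ismene", "married": False, "children": []},
]

def mother_of_world(w):
    # The world-at-large's concept: a woman with children is the mother of them.
    return w["children"]

def mother_of_program(w):
    # Oedipus's incorrect program: unmarried women are classified as non-mothers.
    if not w["married"]:
        return []
    return w["children"]

for w in women:
    print(w["name"], mother_of_world(w), mother_of_program(w))
# Jocasta comes out a mother under the world's concept but not under the program's.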

‘WATER is H2O’ and ‘it is probable that WATER is H2O’

This is an epistemological problem.  Epistemological WATER is epistemological H2O only insofar as platonic WATER and platonic H2O (if such there be) have interacted in the context of the history of the universe that includes the evolution of human beings capable of framing the concepts and doing the science necessary to connect the two.  But the problem is the same as the one Fodor 2008 raises in relation to evolution and the essential difference between selection and selection for.  Muddy water and water containing other impurities aside, H2O arguably isn’t a natural kind, since there are three naturally occurring isotopes of hydrogen and three naturally occurring isotopes of oxygen, and all of those can be in distinct quantum states that can, given appropriate laboratory equipment, be physically distinguished from one another.

As Fodor 2008 observes in a slightly different context, “what’s selected underdetermines what’s selected for because actual outcomes always underdetermine intentions.”  (p.6)  This is as true when doing science as it is when doing evolution: what’s observed underdetermines what happened because actual observations always underdetermine total postdiction of experimental conditions.  You can refine a bit, but you can’t pin down, especially when you try to pin down things so precisely that you are in the realm of Heisenberg uncertainty and quantum mechanical indeterminacy.  So precision as we commonly understand it is a platonic ideal without a real world correlate and, more to the point, an intensional process that doesn’t have an extension.

Fodor 2008 further observes (p.9) that “who wins a t1 versus t2 competition is massively context sensitive.”  Ditto, whether WATER is H2O or XYZ or both or neither.

===================== Notes =====================

[1]  This is the nature of many a program bug.  The programmatic identification of content from transduced data (the type that the code assigns to those data) may not accurately track the programmer’s intended identification of that content even if the transduction is accurate and the transduced data are sufficient to make the determination.  If the programmer errs in writing the type determination code, the type determination the program makes will err (from the programmer’s standpoint), but no inconsistency will be detectable within the program.

[2] Which includes that Jocasta is the mother of Oedipus.

081225 – Why Disjunctions Can Figure in Laws

Thursday, December 25th, 2008

081225

Why Disjunctions Can Figure in Laws

Loewer 2007a[1] argues for a Non-Reductive Physicalism (NRP) as contradistinguished from plain old Reductive Physicalism (RP).  This is something of a family quarrel to begin with because both sides seem to agree that dualism is out and mentation supervenes on a purely physical substrate.

In particular, Loewer considers and dismisses a “line of thought that threatens to show that NRP is unstable” and thus not a coherent alternative to RP.

Suppose that M is a mental property and occurs in some law, say M → R (the law may be a ceteris paribus law) and so is a G-property.  Suppose that physicalism is true.  Won’t there be some … physical property Q constructed out of physical genuine properties—i.e. a disjunction of physical properties or configurations of physical properties—that is coextensive with M in all physically possible worlds?  But then won’t it be the case that Q → R is also a law?  If so, it follows that Q is a G-property since it figures in a law.  If this is correct, then NRP comes very close to collapsing into RP since either M = Q or M* = Q where M* is the property M restricted to the class of physically possible worlds.  In the first case RP holds; in the second case it is close enough to make the difference between RP and NRP look awfully trivial.

Loewer offers two counterarguments.  The first is one that he dismisses out of hand because, he says, it looks “a lot like ‘declaring victory and withdrawing’”:

If any construct out of physical properties that is coextensive (or coextensive in every physically possible world) with a G-property counts as a P-property then indeed NRP and RP come to much the same.

The problem, he says, is that

considerations involving functionalism and externalism show that Q will have an enormously complex characterization in terms of physics and plausibly has no characterization in terms of any of the special sciences.

In effect, Loewer invokes Occam’s razor, which says: simpler is better; don’t complicate things unnecessarily.  In so doing, Loewer is following Fodor’s argument that complex (and sometimes potentially unbounded) disjunctions of physical properties are not natural kinds.  As Loewer summarizes Fodor, the problem is that the disjunctive properties at issue need not be kinds, and

disjunctions of physically heterogeneous properties are not kinds of physics.  [Fodor] seems to mean by this that the various [properties] can be realized by various configurations of physical entities that are made from different materials.

On the other hand, although the disjunction of the realizers of F may be physically heterogeneous (and so not a kind of physics) they may be psychologically homogenous so that F is a kind of psychology. If F is a functional natural kind of psychology its instances are psychologically homogeneous since they share the same psychological role.

Although Fodor doesn’t say this he might add that psychological properties and laws may obtain even in worlds whose fundamental properties and laws are very different from those of the actual world. In these worlds psychological properties are realized by alien fundamental properties and psychological laws by alien fundamental laws.[2]

Yates 2005[3] analyzes Fodor’s cavil as (I think properly) a question of “gerrymanderedness rather than disjunctiveness or heterogeneity” (p. 218, original italics).  He proposes that we grant Fodor that gerrymandered disjunctions are not suitable for framing laws, but argues:

The crucial point to note now is that disjunctions of the realizers of functional kinds are not gerrymandered.  Why?  Because in order to count as realizers of a given functional property, all the disjuncts must play the causal role that defines it.  This is where Papineau’s [1985] argument comes in.  If special science properties are multiply realizable (and so irreducible), then their realizers must be heterogeneous.  But in that case, something has to explain how all the non-identical realizer properties at, say, the physical level, share the causal power constitutive of the functional properties at some special science level, say biology.  (p. 219)

The problem of evolutionary selection arises.

It would be miraculous if all the different realizer properties play the same causal roles by coincidence.  Whence a dilemma: either there is an explanation of the otherwise miraculous coincidence, or special science properties are not multiply realizable after all. (p.219)

Is this really an evolutionary problem?  I’m not sure I understand Yates’s argument here.  He talks about ‘projectibility’ just as Loewer does, and I don’t know what that is.  It may be that special science properties are indeed multiply realizable, but that there is something special about whatever realization happened to develop first.  The algorithm doesn’t care about how it is realized (implemented) just so long as an implementation of the basis functions is available.

Now, I don’t care whether RP or NRP is the right name to blazon on the banner of Truth, but I do care about making sense of things. Rather than talk about special sciences, let’s talk about algorithms and their implementations.
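As a purely illustrative sketch (assuming nothing about any particular special science), here is what I mean: the same algorithm, written only in terms of its basis functions, runs unchanged on two structurally different realizations of those basis functions, and the algorithm neither knows nor cares which one it is running on.

# The algorithm proper, expressed solely in terms of two basis functions,
# is_empty(x) and rest(x).
def length(x, is_empty, rest):
    n = 0
    while not is_empty(x):
        x = rest(x)
        n += 1
    return n

# Realization 1: Python lists.
list_basis = (lambda x: len(x) == 0, lambda x: x[1:])

# Realization 2: nested pairs terminated by None, e.g. (1, (2, (3, None))).
pair_basis = (lambda x: x is None, lambda x: x[1])

print(length([1, 2, 3], *list_basis))            # 3
print(length((1, (2, (3, None))), *pair_basis))  # 3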

==================== Notes ===================

[1] Loewer, Barry.  2007a.  “Mental Causation, or Something Near Enough.” in Philosophy of Mind.

[2] Loewer, Barry.  2007b.  “Why is There Anything Except Physics?” To appear in Synthese Special Issue on Fodor (2007).

[3] Yates, David.  2005.  The Causal Argument for Physicalism.  King’s College London.  Doctoral Dissertation.

081204

Thursday, December 4th, 2008

081204

Suppose that what it means to be a particular individual in possession of a particular individual mind is to be a particular individual physical device that implements a particular individual algorithm.

What an Algorithm Is
Searle 2004 (p.67) describes an algorithm as “a method for solving a problem by going through a precise series of steps.  The steps must be finite in number, and if carried out correctly, they guarantee a solution to the problem.”

By talking about solving a problem, the description takes for granted that in order to create or recognize an algorithm we must have 1) a problem, 2) a way to describe it, and 3) a way to describe its solution.  Within this formulation, an algorithm can only have so-called derived intentionality, viz. before an algorithm can come into existence somebody has to have a purpose that determines what the algorithm is to be about.  As Searle points out, a calculator (an instantiation of a calculation algorithm) doesn’t do calculations.  What it does is change states in a deterministic way in response to physical movements of some of its components (key presses) and alter its appearance (displaying results) as a side effect of its internal state changes.  A calculator is usable as a calculator by human beings only because human beings assign calculation-related meanings to key presses and display patterns.  The meaning does not reside in the calculator; it resides in the user.  Following this line of thought, Searle concludes that neither syntax nor semantics is present in an algorithm.  This, he says, is because syntax and semantics are constructs present only in conscious human beings.
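A toy sketch of the point (mine, not Searle’s; the tokens and the table are invented): the device below does nothing but map (state, key-press) pairs to new states, and the fact that it “adds one and one” exists only in the user’s interpretation of the tokens.

# A "calculator" that is nothing but a deterministic state-transition table
# over opaque tokens.  The arithmetic meaning lives entirely in the user.
TRANSITIONS = {
    ("idle", "k1"): "s1",     # the user reads "k1" as the key '1'
    ("s1", "kplus"): "s1+",   # the user reads "kplus" as '+'
    ("s1+", "k1"): "s2",      # the machine just moves to another opaque state
}

DISPLAY = {"s2": "pattern-B"}                # some pattern of lit segments
USER_INTERPRETATION = {"pattern-B": "2"}     # only here does '1 + 1 = 2' appear

def press(state, key):
    # Deterministic state change; no 'calculation' anywhere in sight.
    return TRANSITIONS[(state, key)]

state = "idle"
for key in ("k1", "kplus", "k1"):
    state = press(state, key)
print(USER_INTERPRETATION[DISPLAY[state]])   # -> "2", but only for the user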

These conclusions are warranted under the given definition of what constitutes an algorithm.  However, I will propose an alternative definition that I will argue allows for something we can still reasonably call an algorithm to have syntax and, in its instantiations, semantics without having to provide either from an external source.

I propose to consider algorithms as being about the implementation (realization) of behaviors in time.  In a sense, then, an algorithm is an abstraction that specifies a particular deterministic computer program.  More formally, an algorithm is a finite series of instructions[1] (steps) that comprise a behavior (a generalization of the idea of performing a task).  Algorithms are constructed on the basis of a set of primitive functions (the basis functions) that, taken together, specify the operation of an abstract (virtual) machine (computer).  It is not possible to specify an algorithm without specifying the set of primitive functions in terms of which the algorithm is expressed, although informally the set of primitives is simply taken to contain whatever functions the specification of a particular algorithm requires.  The abstract machine implicit in the set of primitive functions can be described in terms of its computational power (the class of calculations it is capable of).  The two most common (and for our purposes most relevant) levels of computational power are 1) computations that are possible for finite state machines and 2) computations that are possible for (unrestricted) Turing machines.  The former is formally equivalent to (has the same computational power as) the subset of Turing machines having only a finite tape.

It is, in general, tedious in the extreme to express algorithms in terms of Turing machine functions.  And it is also tedious in many cases to make explicit the set of primitive functions that provide the basis for a particular algorithm or set of algorithms.  For that reason (and, one assumes, not incidentally because people thought up the list-of-instructions concept long before Turing thought up the machine formalism that bears his name) the specification of most algorithms leaves the specification of the underlying set of primitive functions implicit.  That works pretty well and we all learn (maybe now I have to say used to learn) addition, subtraction, multiplication, division, and square root algorithms in elementary school arithmetic, without belaboring or worrying overmuch about the specifics of the underlying primitive functions, e.g., the fact that the set of functions on which the addition algorithm depends includes a function that enables one to write a sort of superscript number above and to the left of a specified decimal position in the topmost number of a column of numbers to be added (the “carry”) and a function that enables one to read it back to oneself at a later time, and so on.
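For instance, here is a rough sketch (the function names and representation are my own invention, not a standard formulation) of the school addition algorithm with its usually implicit carry primitives made explicit:

def write_carry(carries, column, value):
    # "write a sort of superscript number above the specified column"
    carries[column] = value

def read_carry(carries, column):
    # "read the carry mark back to oneself at a later time"
    return carries.get(column, 0)

def column_add(a_digits, b_digits):
    # Digits are given least-significant first, e.g. 472 -> [2, 7, 4].
    carries, result = {}, []
    columns = max(len(a_digits), len(b_digits))
    for col in range(columns):
        a = a_digits[col] if col < len(a_digits) else 0
        b = b_digits[col] if col < len(b_digits) else 0
        s = a + b + read_carry(carries, col)
        result.append(s % 10)
        if s >= 10:
            write_carry(carries, col + 1, 1)
    if read_carry(carries, columns):
        result.append(1)
    return result

print(column_add([2, 7, 4], [9, 5]))  # 472 + 59 = 531 -> [1, 3, 5]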

Special Considerations re: Primitive Functions
Without attempting a rigorous exposition, we may take a mathematical function to be a (deterministic) relation that uniquely associates each element of its domain (the set of possible input values) with an element of its range (the set of possible output values), in other words, an input-output specification.  By definition, mathematical functions do not have side effects.  This has all sorts of good consequences for proving theorems and doing mathematical logic.  However, for the specification of algorithms useful for talking about how the brain works, we need the computer science definition of a function, which generalizes the definition of function to include processes that have side effects.

Side Effects and Referential Opacity

A side effect is an event or process affected by and/or affecting the state of the environment external to the (implementation of the) function and occurring or commencing conceptually within the space or interval between the arrival of its input and the corresponding return of its output.  The most common use of computer functions with side effects is to obtain inputs from or deliver outputs to the external environment.[2]

Function side effects are problematical for mathematical theories of computation, because they introduce the unconstrained external world into an otherwise nicely circumscribed theoretical construct.  The formal response of computer science has been to expand the boundaries of the theoretical construct to include the possibility of a limited set of side effects explicitly in the domain and range of the function.  The drawback this creates is that the more the domain and range include of the external world, the more difficult it is to formally prove program correctness.  Nonetheless, functions with side effects are an integral part of the standard description of a Turing machine: specifically, the operations (functions) that read from and write to the machine’s (infinitely extensible) tape.[3]
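To illustrate the contrast (my examples; the function names are invented): the first function below is a function in the strict mathematical sense; the second is referentially opaque because its output depends on the state of the external environment rather than on its arguments; and the third acts on the external environment and returns only a success indication.

import datetime
import sys

def square(x):
    # Pure: the output is completely determined by the input; no side effects.
    return x * x

def timestamp():
    # Referentially opaque: the output depends on the external environment
    # (the system clock), not on any argument.
    return datetime.datetime.now()

def show(message, sink):
    # Side effect in the other direction: acts on the external environment
    # (whatever 'sink' is attached to) and returns only a success flag.
    sink.write(message + "\n")
    return True

print(square(3), square(3))        # always 9 9
print(timestamp() == timestamp())  # typically False: same "input", different outputs
show("hello", sys.stdout)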

At the most fundamental level, the issues raised by side effects and referential opacity relate to the (theoretically, at least) arbitrary boundary between a system selected for analysis and the external environment in which it is embedded.[4]  Because a theory of the mind must, I think, be about the brain in the context of an (external) environment that is affected by and affects brains, we need to be able to draw a boundary between an entity in possession of a mind and the environment in which it operates.[5]  We thus need to allow for side effects in a theory of the mind, simply in recognition of the fact that not everything that happens in a mind originates in processes within the mind.  There is an outside world, and it matters.[6]

Side effects show up in the realization (instantiation, physical implementation) of an algorithm in two ways.  1) The set of basis functions for an algorithmic system may include functions that explicitly query and manipulate the physical environment.  2) The physical processes that implement the algorithm have real physical side effects that are above and beyond (outside of) the abstract description of the algorithm—outside, even, the abstractly specified side effects that may be required in the basis functions.  For example, a computer needs a power source that meets certain specifications and will operate only under a specified range of environmental conditions.

When analyzing or describing the behavior of a particular physical realization of an algorithm, we generally concentrate on side effects of the first kind and take for granted that the physical side effects of the second kind—those that ground or enable the basis functions themselves—are in place.

Significance of Side Effects
[Didn’t I say what’s in this paragraph earlier?]
The introduction of functions with open-ended side-effects has the effect of vastly complicating (often to the point of practical impossibility) any complete formal analysis of algorithmic behavior.  This, because a formal analysis can only take place within a formal description of a closed system (one in which all events occur deterministically through fully specified rules and circumstances).  To the extent that side effects bring aspects of the external world into a system, a formal description of the system must comprehend a formal description of at least those aspects of the external world.  In effect, the less constrained the range of allowable side effects, the broader must be the scope of the formal description of the system.

Sequencing and Counterfactuals
Philosophers appeal to the idea of counterfactuals in order to deal with the fact that at the macro physical (as opposed to the quantum physical) level events only happen one way, although our intuitions tell us that if things had been sufficiently different (a counterfactual condition) events would have turned out differently.  In planning for the future, where there are no facts yet, we just use conditionals (e.g., If the creek don’t rise and the sky don’t fall, I’ll visit you next Thursday).  Computer programming is a quintessential case of formal planning for the future.  The programmer’s task is to devise an algorithm that will accomplish whatever it is supposed to accomplish under constrained, but as yet undetermined conditions.

Sequencing and counterfactuals are at the heart of causal descriptions.  Sequencing and conditionals are at the heart of algorithmic descriptions.  Every entry in the state table of a Turing machine consists in a conditional (e.g., if the symbol under the read/write head on the tape is “1”), an action (e.g., write “0” on the tape and move the read/write head to the left), and a sequencing directive (e.g., go to state 275).  In the abstract, sequencing begs the question of causality.  If B must follow A, does that mean that A must cause B or is it sufficient that A must be correlated with B?  Does it matter?  In an algorithmic specification, the answer is no.  In fact, it is not even a meaningful question because sequencing is primitive (and thus opaque).  So causality is not an issue in an algorithmic specification.
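To make the three ingredients explicit, here is a minimal sketch (the particular states, symbols, and table are invented for illustration) of a transition table in which each entry is exactly a conditional, an action, and a sequencing directive:

# Each entry: (current state, symbol read) -> (symbol to write, head move, next state)
TABLE = {
    ("s0", "1"): ("0", "L", "s275"),    # if reading "1": write "0", move left, go to s275
    ("s0", "0"): ("0", "R", "s0"),      # if reading "0": leave it, move right, stay in s0
    ("s275", "0"): ("1", "R", "halt"),
}

def step(state, tape, head):
    symbol = tape.get(head, "0")                 # blank cells read as "0" (conditional)
    write, move, next_state = TABLE[(state, symbol)]
    tape[head] = write                           # action: write
    head += 1 if move == "R" else -1             # action: move the head
    return next_state, tape, head                # sequencing directive

state, tape, head = "s0", {0: "1"}, 0
while state != "halt":
    state, tape, head = step(state, tape, head)
print(tape)  # {0: '0', -1: '1'}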

It feels like we have two choices with respect to causality.  We can stumble ahead despite all that Hume warned us about, and we will fall into the mire he described.  Alternatively, we can take the view that informs physics, viz. systems evolve over time according to empirically reliable formulas.  On this view, the attribution of causality requires drawing arbitrary physical and temporal boundaries in the midst of an evolving process, picking out objects of interest at particular points in time.  We then examine the state equations of the system in a small neighborhood near each selected point in time and we assign causality to the infinitesimally prior state.

In effect relativistic limits and the flow of time delimit the causes of everything.  If there is anything that counts as Hume’s necessary connexion, it is to be found in the empirically based physical theory that the state of the universe everywhere in a light-cone of radius delta-t centered on the point P at time T is what determines what will happen at P at time T plus delta-t.  The instant that one focuses attention on a proper subset of that light cone as a “cause”, necessary connexion becomes subject to ceteris paribus conditions.

If we want to say that some proper subset of the light cone centered on P at time T caused what happened at P at time T plus delta-t, we must recognize that this is a counterfactual that is falsifiable.  Such an assertion requires a ceteris paribus qualification if we are to accept it as a true statement.

====================== Notes ======================

[1] The insistence here is on the finiteness of the series of instructions, not the finiteness of the time necessary to complete the task.  Some algorithms provably finish in finite time, e.g., the well-known algorithms taught in elementary school for adding two finite integers together.  Other algorithms may continue indefinitely, e.g., the square root algorithm, which, in some cases—as when applied to the integer 2—will continue to produce additional digits indefinitely.  Of course, constraints of finite physical resources and finite time will prevent any physical instantiation of an algorithm from continuing forever.

[2] Typical examples:

- A function whose outputs are deterministic, but are not completely determined by its inputs, e.g., a function whose side effect is to provide the current date and time as its output.  Such a function is said to be referentially opaque.

- A function that requests data from a user.  The data returned by such a function are determined by whatever the user enters; but such a function strains the mathematical definition of a function because an equivalent input-output specification cannot be pre-specified—the input to the function is a request for data, and the output is whatever the user enters.  At best, one can pre-specify the range of the function by limiting what the user is allowed to enter.

- A function that delivers its input (the value(s) provided to the function) to some kind of display.  Such a function affects the state of the external environment and may ultimately affect the internal environment of the program if and when something in the external environment reacts to that change of state in a way detectable by the program.  Nonetheless, the output of such a function (the value(s) it returns) is (are) problematical.  Usually such a function simply returns a value indicating success (or failure) insofar as that can be determined by the computer; but strictly (mathematically) speaking the result of the function is the sum total of the effects on the computer of changes in its environment that occur as a result of the information it displays.

[3] Strictly speaking, a function that simply stores its input value for later retrieval and a complementary function that retrieves and provides as its output a value thus stored are both functions with side effects.  Writing acts upon the environment.  Reading queries the state of the environment.  By convention, the operations of read and write functions are specified to take place deterministically within the internal environment of the abstract machine and such side effects are simple enough not to be considered problematical.

[4] Arguably, the universe and everything in it is just one big system; and any subdivision of that system is ultimately arbitrary.  That said, Algorithmic Information Theory, briefly described a little later on, provides a way of assessing the relative complexity of one subdivision vis-à-vis another.

[5] As far as I can tell, not being myself a substance dualist, there can’t be any real difference between what goes on in the brain and what goes on in the mind.  If I think of one, I’ll go back and change my terminology accordingly.

[6] Notwithstanding the skeptical hypothesis that you are really just a brain in a vat and all of your experience is illusory.  For an overview of the literature on brains in vats, see Brueckner 2004.

030718 – Self-Reporting

Friday, July 18th, 2003

030718 – Self-Reporting

Is there any advantage to an organism to be able to report its own internal state to another organism?  For that is one of the things that human beings are able to do.  Is there any advantage to an organism to be able to use language internally without actually producing an utterance?

Winograd’s SHRDLU program had the ability to answer questions about what it was doing.  Many expert system programs have the ability to answer questions about the way they reached their conclusions.  In both cases, the ability to answer questions is implemented separately from the part of the program that “does the work” so to speak.  However, in order to be able to answer questions about its own behavior, the question answering portion of the program must have access to the information required to answer the questions.  That is, the expertise required to perform the task is different from the expertise required to answer questions about the performance of the task.

In order to answer questions about a process that has been completed, there must be a record of, or a way to reconstruct, the steps in the process.  Actually, it is not sufficient simply to be able to reconstruct the steps in the process.  At the very least, there must be some record that enables the organism to identify the process to be reconstructed.

Not all questions posed to SHRDLU require memory.  For example one can ask SHRDLU, “What is on the red block?”  To answer a question like this, SHRDLU need only observe the current state of its universe and report the requested information.  However, to answer a question like, “Why did you remove the pyramid from the red block?”  SHRDLU must examine the record of its recent actions and the “motivations” for its recent actions to come up with an answer such as, “In order to make room for the blue cylinder.”
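This is not SHRDLU’s actual machinery, of course, but a sketch of the minimal bookkeeping the point requires: the part that does the work leaves a record of each step together with its motivating goal, and a separate question-answering part consults that record.  (All names here are invented for illustration.)

history = []

def perform(action, because):
    # Doing the work: act, and record the action together with its motivation.
    history.append({"action": action, "because": because})
    # ... the code that actually manipulates the blocks world would go here ...

def why(action):
    # Answering the question is a different task that only needs the record.
    for entry in reversed(history):  # most recent matching action
        if entry["action"] == action:
            return "In order to " + entry["because"]
    return "I never did that."

perform("remove the pyramid from the red block",
        because="make room for the blue cylinder")
print(why("remove the pyramid from the red block"))
# -> In order to make room for the blue cylinder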

Not all questions that require memory require information about motivation as, for example, “When was the blue cylinder placed on the red cube?”

Is SHRDLU self-aware?  I don’t think anyone would say so.  Is an expert system that can answer questions about its reasoning self-aware?  I don’t think anyone would say so.  Still, the fact remains that it is possible to perform a task without being able to answer questions about the way the task was performed.  Answering questions is an entirely different task.

030715

Tuesday, July 15th, 2003

030715

Hauser, Chomsky, and Fitch in their Science review article (2002) indicate that “comparative studies of chimpanzees and human infants suggest that only the latter read intentionality into action, and thus extract unobserved rational intent.”  This goes along with my own conviction that internal models are significant in the phenomenon of human self-awareness.

Hauser, Chomsky, and Fitch argue that “the computational mechanism of recursion”, which is critical to language ability, “is recently evolved and unique to our species.”  I am well aware that many have died attempting to oppose Chomsky and his insistence that practical limitations have no place in the description of language capabilities.  I am reminded of Dennett’s discussion of the question of whether zebra is a precise term, that is, whether there exists anything that can be correctly called a zebra.  It seems fairly clear that Chomsky assumes that language exists in the abstract (much the way we naively assume that zebras exist in the abstract) and then proceeds to draw conclusions based on that assumption.  The alternative is that language, like zebras, is in the mind of the beholder, but that when language is placed under the microscope it becomes fuzzy at the boundaries precisely because it is implemented in the human brain and not in a comprehensive design document.

Uncritical acceptance of the idea that our abstract understanding of the computational mechanism of recursion is anything other than a convenient crutch for understanding the way language is implemented in human beings is misguided.  In this I vote with David Marr (1982) who believed that neither computational iteration nor computational recursion is implemented in the nervous system.

On the other hand, it is interesting that a facility which is at least a first approximation to the computational mechanism of recursion exists in human beings.  Perhaps the value of the mechanism from an evolutionary standpoint is that it does make possible the extraction of intentionality from the observed behavior of others.  I think I want to turn that around.  It seems reasonable to believe that the ability to extract intentionality from observed behavior would confer an evolutionary advantage.  In order to do that, it is necessary to have or create an internal model of the other in order to get access to the surmised state of the other.

Once such a model is available it can be used online to surmise intentionality and it can be used off line for introspection, that is, it can be used as a model of the self.  Building from Grush’s idea that mental imagery is the result of running a model in off line mode, we may ask what kind of imagery would result from running a model of a human being off line.  Does it create an image of a self?

Alternatively, since all of the other models proposed by Grush are models of some aspect of the organism itself, it might be more reasonable to suppose that a model of the complete self could arise as a relatively simple generalization of the mechanism used in pre-existing models of aspects of the organism.

If one has a built-in model of one’s self in the same way one has a built-in model of the musculoskeletal system, then language learning may become less of a problem.  Here’s how it would work.  At birth, the built-in model is rudimentary and needs to be fine-tuned to bring it into closer correspondence with the system it models.  An infant is only capable of modeling the behavior of another infant.  Adults attempting to teach language skills to infants use their internal model to surmise what the infant is attending to and then name it for the child.  To the extent that the adult has correctly modeled the infant and the infant has correctly modeled the adult (who has tried to make it easy to be modeled), the problem of establishing what it is that a word refers to becomes less problematical.

030714

Monday, July 14th, 2003

030714

Here’s what’s wrong with Dennett’s homunculus exception.  It’s a bit misleading to discuss a flow chart for a massively parallel system.  We’re accustomed to high bandwidth interfaces between modules where high bandwidth is implemented as a high rate of transmission through a narrow pipe.  In the brain, high bandwidth is implemented as a leisurely rate of transmission through the Mississippi river delta.