Archive for the ‘concepts’ Category

120701

Sunday, July 22nd, 2012

120722

The problem with ‘veridicality’ as a criterion for ‘successful’ perception is that veridicality is an ideal that has no counterpart in the real world.  I would prefer something along the lines of ‘actionable’ to replace ‘veridical’, the idea being that good enough is good enough, and it is unnecessary to set an unattainable standard against which to measure successful representation.

Veridicality is recognized as an idealized standard.  Fodor noted that water includes stuff that may be cloudy and the stuff that is in polluted lakes.  Scientists tell us that jade is a disjunction: jade can be either of two minerals, jadeite and nephrite, with distinct chemical compositions.  In nature, bulk water, even H2O water, is a mixture of molecules formed from the three naturally occurring isotopes of hydrogen (protium, deuterium, and tritium), and the pure forms of all three isotopic kinds of H2O have different physical and biological characteristics; pure deuterium water, for example, freezes at a different temperature and is poisonous.

What would be the standard of veridicality for a perception of something as water?  Surely, one would like it to be that water is present; and that pushes matters onto the (middle level) concept WATER, but the semantics of WATER then cannot be that WATER is H2O tout court.  So, we have to abandon the idea that WATER is anything but water.

We can empirically examine stuff that we agree to be water (or jade), and scientists can study the stuff and explicate the discernible variations among things that we successfully perceive to be that stuff.  I don’t think this is intolerable.  It relieves us from having to posit a world filled with ideal exemplars that we have to conceptualize through a glass darkly.

Put another way, concepts and their formation are as much a product of evolution as is whatever ability there is to perceive stuff as particulars falling under such concepts.  This is as it should be.  The organisms we are interested in (us) are the product of the interactions of organisms (our ancestors) with their environment.  That the outcome of billions of years of interaction is systems whose pitifully underdetermined proximal inputs provide them with generally actionable information about the external environment just goes to show that evolution, a really stupid process by just about any criterion I can think of, has remarkable consequences.

090226 (originally 081224) – Is all computation epiphenomenal

Thursday, February 26th, 2009

081224

Is all computation epiphenomenal?

[090226]

Is COMPUTATION a concept with no extension?  In other words, does computation always require an intensional context?  Maybe this is what Searle is getting at when he insists that computation is in the mind of the beholder.  It would seem that there are quite a few such concepts, e.g., METAPHYSICS, CAUSE, TRUTH, ONTOLOGY, EPISTEMOLOGY, FREEDOM, HAPPINESS, etc.  Is it the case that only concepts whose content is physical actually have an extension?  Is even that consolation ephemeral?  Does the unending cycle that is the external environment acting upon the internal environment acting upon the external environment acting upon the internal environment … ad infinitum necessarily entail only probabilistic (Bayesian) certainties?  Or does it entail only intensional certainties (whatever that may mean)?

Fodor 2008 (n.18) says that ‘it’s probable that …’ is extensional and unable to reconstruct intensionality in any form.  “An intensional context is one in which the substitution of coextensive expressions is not valid.”  (n.1)  But isn’t it the case that ‘it’s probable that …’ becomes intensional if ‘…’ is replaced by an intensional attribute, as, for example, if Oedipus were to say, “It’s probable that my mother dwells many leagues hence.”
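A minimal sketch (in Python; the predicates and the ‘belief store’ are invented for illustration) of the substitution test in that definition: in an extensional context only the extension matters, so swapping coextensive expressions changes nothing, while a context keyed to the expression itself rather than to its extension can tell them apart.

# Two coextensive predicates: in this toy world they pick out the same set.
creatures = ["alma", "bela", "cora"]
has_heart = {"alma", "bela"}            # extension of 'creature with a heart'
has_kidney = {"alma", "bela"}           # extension of 'creature with a kidney'

# Extensional context: only the extension matters, so substitution is valid.
def count_satisfiers(extension):
    return sum(1 for c in creatures if c in extension)

assert count_satisfiers(has_heart) == count_satisfiers(has_kidney)

# Intensional context: keyed to the expression used, not to what it picks out,
# so substituting a coextensive expression can change the result.
believes = {("oedipus", "creature with a heart"): True}

def believes_that(agent, expression):
    return believes.get((agent, expression), False)

assert believes_that("oedipus", "creature with a heart") is True
assert believes_that("oedipus", "creature with a kidney") is False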

Intensionality is about invariants and irrelevancies, about fixed and free parameters that map via a characteristic transduction process to and from an environmental state that is extensional content (where coextensive expressions are indistinguishable).  Intensionality develops in both evolutionary and epigenetic time.  It is really easy to get confused about what goes on here.  That seems to be what the idea of ‘rigid designators’ is about.

In the context of computer science, the programmer has intentions, but the program has only intensions (with an s).  Or at least that is the way things seem now.[1]  The fact that we are willing to accept in this statement the attribution of intentionality to the programmer is significant because it suggests that the boundary between intentionality and intensionality (with an s) may shift about depending on the context of the polemic.  This is a puzzling thought.  Shouldn’t the intentionality / intensionality distinction be essential?  It might be, for example, that Oedipus the programmer writes an algorithm to determine, based on some kind of survey data, which women are mothers and of whom.  The (incorrect) program he writes looks like the following:

For each woman w do
   If w is not married
      Then w is not a mother
   Else
      If w has children c
         Then w is the mother of c
      Else
         w is not a mother
End do

It’s not that Oedipus thinks that an unmarried woman with children is not a mother; he just writes the program incorrectly.  So, the extension of the world-at-large’s intensional concept of MOTHER-OF[2] differs from the extension of Oedipus’s intensional concept of MOTHER-OF, which differs from the extension of the intensional concept MOTHER-OF that his program implements.  This just goes to show that the wise child knows his own mother and that one person’s extension may be another’s intension.
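To make the divergence in extensions concrete, here is a runnable sketch (Python, with invented survey data; nothing here is from the original post) that contrasts the world-at-large’s MOTHER-OF with the relation Oedipus’s buggy program computes:

# Invented survey data for illustration: marital status and children borne.
survey = {
    "jocasta":  {"married": False, "children": ["oedipus"]},
    "ismene":   {"married": True,  "children": ["child_a"]},
    "antigone": {"married": False, "children": []},
}

# The world-at-large's MOTHER-OF: a woman is the mother of the children she
# has borne, married or not.
def mother_of_world(woman, child):
    return child in survey[woman]["children"]

# Oedipus's program, transcribed from the pseudocode above with its bug kept:
# an unmarried woman is declared not to be a mother, whatever her children.
def mother_of_program(woman, child):
    if not survey[woman]["married"]:
        return False
    return child in survey[woman]["children"]

print(mother_of_world("jocasta", "oedipus"))    # True
print(mother_of_program("jocasta", "oedipus"))  # False: the extensions differ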

‘WATER is H2O’ and ‘it is probable that WATER is H2O’

This is an epistemological problem. Epistemological WATER is epistemological H2O only insofar as platonic WATER and platonic H2O (if such there be) have interacted in the context of the history of the universe that includes the evolution of human beings capable of framing the concepts and doing the science necessary to connect the two.  But the problem is the same as the one Fodor 2008 raises in relation to evolution and the essential difference between selection and selection for.  Muddy water and water containing other impurities aside, H2O arguably isn’t a natural kind, since there are three naturally occurring isotopes of hydrogen and three naturally occurring isotopes of oxygen, and all of those can be in distinct quantum states that can, given appropriate laboratory equipment, be physically distinguished from one another.

As Fodor 2008 observes in a slightly different context, “what’s selected underdetermines what’s selected for because actual outcomes always underdetermine intentions.”  (p.6)  This is as true when doing science as it is when doing evolution: what’s observed underdetermines what happened because actual observations always underdetermine total postdiction of experimental conditions.  You can refine a bit, but you can’t pin down, especially when you try to pin down things so precisely that you are in the realm of Heisenberg uncertainty and quantum mechanical indeterminacy.  So precision as we commonly understand it is a platonic ideal without a real-world correlate and, more to the point, an intensional process that doesn’t have an extension.

Fodor 2008 further observes (p.9) that “who wins a t1 versus t2 competition is massively context sensitive.”  Ditto, whether WATER is H2O or XYZ or both or neither.

===================== Notes =====================

[1]  This is the nature of many a program bug.  The programmatic identification of content from transduced data (the type that the code assigns to those data) may not accurately track the programmer’s intended identification of that content even if the transduction is accurate and the transduced data are sufficient to make the determination.  If the programmer errs in writing the type determination code, the type determination the program makes will err (from the programmer’s standpoint), but no inconsistency will be detectable within the program.

[2] Which includes that Jocasta is the mother of Oedipus.

Computations Need Not Respect Content

Wednesday, April 2nd, 2008

(Fodor 1998, Concepts: Where Cognitive Science Went Wrong, p. 10) “In a nutshell: token mental representations are symbols.  Tokens of symbols are physical objects with semantic properties.”  

(p. 11) “[I]f computation is just causation that preserves semantic values, then the thesis that thought is computation requires of mental representations only that they have semantic values and causal powers that preserve them….  [F]ollowing Turing, I’ve introduced the notion of computation by reference to such semantic notions as content and representation; a computation is some kind of content-respecting causal relation among symbols.  However, this order of explication is OK only if the notion of a symbol doesn’t itself presuppose the notion of a computation.  In particular, it’s OK only if you don’t need the notion of a computation to explain what it is for something to have semantic properties.” 

Fodor’s order of explication, from semantic notions (content and representation) to computation, seems backwards.  It’s semantics that is to be derived, and it’s deterministic physical processes (computation, though not using the weird definition Fodor proposes) from which semantics is to be derived.

My notion of a symbol presupposes the notion of a computation, so both my terminology and the structure of my argument diverge from Fodor’s.  A symbol is a syntactic structure that controls a computation.  As far as I can see, syntax and syntactically controlled computation are all there is to the brain; i.e., mental representations and processes are purely syntactic.  In effect, a symbol is an algorithm whose execution has semantic value via the instantiation of the basis functions over which the algorithm is defined.  That is, the implementation of the basis functions and the implementation of the process that governs the execution of an algorithm represented within the implementation are the elements that give semantic value to an algorithm.  Semantics arises from the physical characteristics of the instantiation of the syntactical processor (the brain).  However abstract the algorithmic processes that describe the functioning of the mind, the semantics of those processes absolutely depends on the physical characteristics of the device (in the instant case, the brain) that instantiates those processes.  In short, syntax is semantics when it comes to the brain.
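A minimal sketch (Python; the basis functions are made up for the example) of that claim about instantiation: the algorithm below is pure syntax, a fixed sequence of calls to basis operations it does not interpret, and what its execution amounts to is settled entirely by which implementations are plugged in.

# One purely syntactic algorithm: it names basis operations but knows nothing
# about what they do.
def algorithm(basis, x, y):
    a = basis["combine"](x, y)
    return basis["emit"](a)

# Two instantiations of the same basis signature.
arithmetic_basis = {
    "combine": lambda x, y: x + y,               # numeric addition
    "emit":    lambda a: f"sum = {a}",
}
motor_basis = {
    "combine": lambda x, y: [x, y],              # queue two motor commands
    "emit":    lambda a: f"execute movements {a}",
}

# The same syntax has different semantic value under different instantiations.
print(algorithm(arithmetic_basis, 2, 3))         # sum = 5
print(algorithm(motor_basis, "reach", "grasp"))  # execute movements ['reach', 'grasp']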

The usual definition of a computation is in terms of Turing Machines.  A Turing Machine has three required elements: 1) a finite alphabet of atomic symbols; 2) a sequentially ordered mutable storage medium (a tape from which symbols of the alphabet may be read and to which they may be written); 3) a set of state transition laws (a program) governing the operation of the machine.  Symbols, in this formulation, have no semantics.  Any meanings associated with individual symbols or arrangements of groups of symbols are imposed from without.  Operation of a Turing Machine proceeds absolutely without reference to any such meanings.
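A tiny Turing machine interpreter (Python; the particular machine, a unary incrementer, is just an illustration) makes the point concrete: the transition table consults nothing but machine state and symbol identity, and any meaning attached to the symbols is imposed from without.

from collections import defaultdict

def run_turing_machine(tape, rules, state="start", halt="halt", pos=0):
    # rules: (state, symbol) -> (new state, symbol to write, move in {-1, +1})
    tape = defaultdict(lambda: "_", enumerate(tape))   # "_" is the blank symbol
    while state != halt:
        state, write, move = rules[(state, tape[pos])]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example machine: scan right over a block of 1s and append one more 1.
# The machine manipulates "1" and "_" purely by identity; calling the result
# "adding one in unary" is an interpretation supplied by us, not by the machine.
rules = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt",  "1", +1),
}

print(run_turing_machine("111", rules))   # prints 1111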

When Fodor proposes that for the purposes of his definition of computation it must be the case that “the notion of a symbol does not presuppose the notion of a computation”, I hardly know what to make of it.  In order for an object to serve as a symbol in the sense required by a Turing Machine, such an object must at a minimum be a member of a specifiable finite set (the alphabet) and susceptible to reading, writing, and identity testing (identity testing is required by the state transition laws).  Thus, the class of objects that can serve as symbols in a computation is not unrestricted, and it is incumbent on Fodor’s theory to assert that the objects he proposes to use as symbols satisfy these conditions. 

The problem is that the standard model of computation is literally blind to content, so it is not sensical to assert that computation is “some kind of content-respecting causal relation among symbols” (p. 11).  Fodor says his notion follows Turing, but I can’t figure out what of Turing’s he thinks he is following.  

A computation is the execution of an algorithm.  The effect of the execution of an algorithm is determined by a causal process that is sensitive only to identity and ordering.  In other words, the execution of an algorithm is a syntactic process.  In my book, which this is, computation tout court does not in general respect content.  To assert that computation respects content presupposes a definition of content and a definition of what it would mean to respect it.  Moreover, the phrase computations that respect content (preserve truth conditions, as Fodor would have it) picks out an extraordinarily highly constrained subset of the class of computations.  Indeed, there is no good reason I can think of to believe that the class is non-empty.  Certainly, Fodor is on the hook to provide an argument as to why he thinks such computations exist.  I’ve been taking Fodor to task here, but he’s not the only one who’s been seduced by this idea.  John Searle seems to have the same peculiar notion that computation tout court preserves truth value.

I am not arguing that people can’t think about things and come up with pretty good results—that’s what we do—but we aren’t 100% accurate, and AI attempts to write programs that are 100% accurate have not succeeded either, so the notion of a computation that respects content is blurry indeed.

What bothers me about the subsumption of truth preservation under the rubric of computation is that I think it elides an important issue, viz. what it means to preserve truth or respect content.  I am willing to allow that the brain is phylogenetically structured to facilitate the ontogenetic development of specific algorithms that pretty well track certain kinds of characteristics of the physical environment.  To a first approximation, one might, as Fodor does, say that those algorithms respect content or that they preserve truth conditions, but that still begs the question.  The problem is that whatever the brain does (and therefore the mind does), it does it algorithmically.  Preserve Truth is not an algorithm, nor is Respect Content.  To the extent that a process or computation is deterministic, the process is constrained to “respect content” in the sense that symbol identity, not content, is the only input to the process and thus the only thing that can determine what is produced.  I still don’t see describing that as somehow “preserving truth” even with the most trivial interpretation I can possibly put on the phrase.

Married Bachelors: How Compositionality Doesn’t Work

Monday, March 31st, 2008

Jerry Fodor (1998, Concepts: Where Cognitive Science Went Wrong) does a thorough job of summarizing convincingly (to me, anyway) the arguments against the theory that concepts are constituted by definitions; so you really don’t need me to tell you that KEEP doesn’t really mean CAUSE A STATE THAT ENDURES OVER TIME or that BACHELOR doesn’t really mean UNMARRIED MAN, right?  Not convinced?  Here’s what I found over the course of a morning’s empirical research:

Put ‘elephant bachelor’ into Google and you get things like:

Bulls generally live in small bachelor groups consisting of an old bull

Males live alone or in bachelor herds.

The males will sometimes come together in bachelor herds, but these are

The adult males (bulls) stay in bachelor herds or live alone.

When they are mature, male elephants leave the herd to join bachelor herds. 

Put in ‘deer bachelor’ and you get: 

During bowhunting season (late August early September) we see mule deer in bachelor groups, as many as 15 in a bunch, feeding on bloomed canola fields

Mule deer bachelor bucks are beginning to show up in preparation for the rut.

But during October, when they’re interested mostly in proving their manhood, rattling can be deadly on bachelor bucks. 

Put in ‘“bachelor wolves”’: 

surrounded in the snow by a starving and confused pack of bachelor wolves.

You can come up with a lot of names for these youngsters: rogue males, bachelor wolves, outcasts. Some call them the Lost Boys. 

Similarly, ‘bachelor’ combined with ‘walrus’, ‘whale’, ‘dolphin’, ‘penguin’, ‘swan’ (for the birds, it helps to add the term ‘nest’ to winnow the returns).

‘ethology bachelor’ yields: 

Bachelor herds refer to gatherings of (usually) juvenile male animals who are still sexually immature, or of ‘harem’-forming animals who have been thrown out of their parent families but not yet formed a new family group. Examples include seals, lions, and horses. Bachelor herds are thought to provide useful protection for social animals against more established herd competition or aggressive dominant males. Males in bachelor herds are sometimes closely related to each other. 

So bachelors don’t need to be men.  One might try to fix this by saying a BACHELOR is an UNMARRIED MALE or even an UNMARRIED ADULT MALE (to rule out babies) instead of an UNMARRIED MAN, but I struggle with the idea of UNMARRIED whales, penguins, and elephants.  Would that also cover animals that have mates, but are living together without the benefit of clergy?  Don’t worry about this too much because even MALE won’t do the trick. 

‘“bachelor females”’ returns: 

dormitory facilities, and the 7 35 or so bachelor females residing in defense housing on Civilian Hill were transferred to the renovated dormitories.

 I feel sorry for you. And yes, this was a half-fucked attempt to gain the affection of all the bachelor females in the world. 

‘“bachelor women”’ returns: 

Today, double standards still prevail in many societies: bachelor men are envied, bachelor women are pitied.

Maggie is a composite of a number of independent, “bachelor” women who influenced my formative years.

Did you know, for example, that half–exactly 50 percent–of the 1000 bachelor women surveyed say they actively are engaged at this very moment in their

independent bachelor women that is now taking place is a permanent increase. It is probably being reinforced by a considerable number of  [H. G. Wells 1916, What is Coming? A Forecast of Things after the War. Chapter 8.] 

Of particular note is the last example, specifically the fact that it dates back to 1916, before most, if not all, discussions of BACHELOR meaning UNMARRIED MAN. 

The phrase ‘“married bachelor”’ returns lots of philosophical (and theological!) treatises on whether it is meaningless, incoherent, nonsensical, or just plain impossible (for humans or for God); but it also returns occurrences of the phrase in the wild, where it exists and is, thus, clearly possible:

Nevertheless, a true married bachelor, we think, would have viewed his fate philosophically. “Well, anyway,” he’d say with a touch of pride,

Ever wonder what a married bachelor does on Friday Night (that is Wednesday in Saudi)? HE GOES TO BED EARLY (and dreams about his wife).

Most Chinese men in Canada before the war were denied a conjugal family life and were forced to live in a predominantly married-bachelor society.

It was one of the golden principles in services that there should be a decent interaction with fair sex on all social occasions and going “stags” (married bachelor) was looked down upon as something socially derelict or “not done”.

Peterson’s days as a married bachelor. SAN QUENTIN – According to recent reports from San Quentin, Scott Peterson is adjusting nicely to prison life.

Walter Matthau is the “dirty married bachelor“, dentist Julian who lies to his girlfriend, Toni (Goldie Hawn) by pretending that he is married.

…that her love for camping was so dominant; he thought he’d better join her and they would start their own camp or else he would be a married bachelor.

Some bad choices: sisters dissin’ sisters; no-money no-honey approach; loving the married bachelor ; or using your finance to maintain his romance.

It was just four of us – three singles and a married bachelor. As I. tasted the deep fried and cooked egg plants, dhal curry and deep fried papadams,

India is the uncomplaining sweetheart whom this married bachelor flirts with and leaves behind. Every time. And she knows it all and yet smiles

There is no object more deserving of pity than the married bachelor. Of such was Captain Nichols. I met his wife. She was a woman of twenty-eight,    [Somerset Maugham 1919, The Moon and Sixpence. Chapter 46.]

Two of these are of particular note:  The final example dates back to 1919; and the penultimate example uses the phrase metaphorically (or more metaphorically, if you prefer).

As a child, I’m sure I would have found all of these examples quite puzzling and would have asked, “If ‘bachelor’ means ‘unmarried man,’ then how can there be a ‘married bachelor?’”

The issue here is compositionality.  How do we understand the meaning of phrases like ‘the brown cow’ or ‘the married bachelor’?  It can’t be the way Fodor (1998, p. 99) explains it.  Here’s what Fodor says, except I have substituted throughout ‘married’ for ‘brown’ and ‘bachelor’ for ‘cow’.  You will note that what makes reasonable sense for ‘the brown cow’ is incoherent for ‘the married bachelor’. 

Compositionality argues that ‘the married bachelor’ picks out a certain bachelor; viz. the married one.  It’s because ‘married’ means married that it’s the married bachelor that ‘the married bachelor’ picks out.  If English didn’t let you use ‘married’ context-independently to mean married and ‘bachelor’ context-independently to mean bachelor, it couldn’t let you use ‘the married bachelor’ to specify a married bachelor without naming it.

It’s clear that something distinguishes the uses documented above from the more usual UNMARRIED MAN (more or less) uses.  I was tempted to say that the more usual uses are literal as opposed to figurative (metaphorical?).  Yes, but as has been pointed out, while it may be literally correct to say that the Pope is a bachelor, it feels like an incorrect usage.

Well, it just goes on and on.  At this point, of course, apoplectic sputtering occurs to the effect that these are metaphorical uses and should be swept under the rug where all inconvenient counterexamples are kept and need never be dealt with.  But speaking of KEEP, as Fodor (pp. 49-56) points out, Jackendoff’s program is (though not in so many words) to accommodate things like this by proliferating definitions of KEEP.  Fodor characterizes this as just so much more messy than thinking that KEEP just means keep.  I agree.

For more about married bachelors, see also http://plato.stanford.edu/entries/analytic-synthetic/

Primitive Concepts and Innateness

Saturday, March 29th, 2008

Fodor (1998, p.15), presenting the (his) RTM view of concepts, says, “I can’t … afford to agree that the content of the concept H2O is different from the content of the concept WATER.”  At least in part, this is a consequence of his assertion that “Concepts are public; they’re the sorts of things that lots of people can, and do, share.” (p.28, italics in original) 


If the content of concepts is public (I, for one, have no problem with this view), then nobody and everybody is responsible for them and their denoters have to be learned.  It’s easy enough to argue, following Eric Baum (2004, What Is Thought?), that our genome builds us in such a way that we all acquire categories in pretty much the same way.  I’m not sure why I insisted on “categories” in the previous sentence rather than sticking with “concepts.”  I guess it’s because I have already done a lot of thinking about concepts and I’m not sure whether I’m willing to grant concepthood to categories.


A priori, there must be a set of parameterizable functions that are built-in by the genome.  When I talk about parameterization here, I’m talking about learning processes; when I talk about parameterizing models, I’m talking about the inputs to a particular content model at a moment in time.  The former takes place during concept development; the latter during concept utilization.  Taking such a set of parameterizable functions as a basis, content models can (only) be constructed from these components.  The genome thus ensures that ceteris paribus (over a reasonable range of normal human ontogenetic experience) the structure of the content model(s) epigenetically constructed will tend to converge (dare I say they will be the same up to some threshold of difference?). 


The convergence we expect to find looks like this: If things that are modeled by a particular content model a in creature A are pretty much the same things that are modeled by a particular content model b in creature B, and if that is true also for particular content models c, d, e, …, etc. in C, D, E, …, etc., then those content models are the content model of a concept whose satisfaction conditions include (pretty much) those things.  Moreover, the human genome is sufficiently restrictive to ensure that in the vast majority of cases (enough to ensure the functioning of language, anyway) we can take these models to implement (represent?) by definition the same concept.  That is, sameness of concepts across individuals arises from the identity of the (shared) facilities available to construct them and the identity of the (shared, lower level) processes that construct them out of (things that turn out to be) invariants these processes extract from the real world. 


DOG means dog because the (already strongly constrained) models the human brain automatically constructs when presented with dogs are such that across individuals the models will use identical processes in identical ways (process identity is obviously level-sensitive—I can’t possibly argue that the neural circuits are isomorphic across individuals, but I can argue that the brain is sufficiently limited in the ways it can operate that there is at some level of explanation only one way a dog model can be implemented).


This is similar to the poverty of the stimulus argument that argues for much of language to be innate.


I think we’re almost there now, but it occurs to me that I have built this on the identity of things, which may itself be tendentious.  There’s no problem with saying a particular thing is identical to itself.  But that’s not where the problem arises.  How do we know what a thing is?  A thing is presumably something that satisfies the concept THING.  But careful examination of the reasoning above shows that I have assumed some kind of standardized figure-ground system that reliably identifies the things in an environment.  Now where are we?  Suppose the things are dogs.  Do we have to suppose that we know what dogs are?


Let’s try to save this by substituting environments for things and then talking about world models.  That is, if the environment that is modeled by a particular world model a in creature A is pretty much the same environment that is modeled by a particular world model b in creature B, and if that is true also for particular world models c, d, e, …, etc. in C, D, E, …, etc., then those world models are the world model of a world whose satisfaction conditions include (pretty much) those environments.  Moreover, the human genome is sufficiently restrictive to ensure that in the vast majority of cases (enough to ensure the identification of things, anyway) we can take these models to be (implement, represent?) by definition the same world model.


As a practical matter, this does not seem to be a problem for human beings.  We learn early how to parse the environment into stable categories that we share with others in the same environment.  Somewhere in this process, we acquire thingness.  Thingness is necessary for reference, for intentionality, for aboutness.  I don’t know, and I don’t think it makes much of a difference, whether thingness is innate or (as I suspect) the acquisition of thingness requires postnatal interaction with the environment as part of the brain’s boot process.


Fodor (1998, p.27) and the Representational Theory of Mind (RTM) crowd have a rather similar way around this.  “[A]ll versions of RTM hold that if a concept belongs to the primitive basis from which complex mental representations are constructed, it must ipso facto be unlearned.”  This is actually several assertions.  The most important one from my point of view is:


There are innate (unlearned) concepts. 


I take it that my use of the word innate here will seem comfortably untendentious when I tell you I am explicitly ruling out the possibility that unlearned concepts are injected into us by invisible aliens when we are small children.  The only worry I have about innate concepts is that like Baum I suspect that in reality the members of the set of such innate concepts are far removed from the concepts traditionally paraded as examples of concepts, that is, I don’t think COW is innate any more than KOMODO-DRAGON.  (Baum doesn’t talk much about concepts per se, but his central position is that everything that’s innate is in our DNA and our DNA has neither room nor reason to encode any but the most primitive and productive concepts.)  Fodor is coy about COW and HORSE, but he counterdistinguishes the status of COW from the status of BROWN COW, which “could be learned by being assembled from the previously mastered concepts BROWN and COW.”


I don’t think Fodor really needs COW to be innate.  I think the problem is that he doesn’t want it to have constituents.  I sympathize.  I don’t want it to have constituents.  But making COW innate is not the only alternative.  All that is needed is a mechanism that allows for cows in the world to have the ability to create a new primitive COW that is (by my argument above) the same primitive COW that Little Boy Blue has and indeed the same primitive as most everybody else familiar with cows has.  In other words, what I have proposed is a mechanism that enables concepts to be public, shareable, primitive, and learnable.  I haven’t got a good story about how one could be familiar with cows and not have the same concept COW as most everybody else.  Maybe if one’s familiarity with cows was always in the context of partially obscuring bushes one might come to acquire a concept COW that meant bushes partially obscuring a cowlike animal.  But if that were the case, I’d expect that same COW concept to be created in others familiar with cows in the same context.


The rest of the story is that this way of making COW primitive but not innate requires reexamination of the assertion that there are innate concepts.  It looks like the things I am postulating to be innate are not properly concepts, but rather concept-building processes.  So the correct statement is:


There are innate (unlearned) concept-building processes that create primitive concepts.  I’d be willing to buy the so-called “universals” of language as a special case of this.


It will work, I think, because the putative processes exist prior to concepts.  So, we still have primitive concepts and non-primitive concepts in such a way as to keep RTM in business for a while longer.  And we can build a robust notion of concept identity on identity of primitive concepts without requiring all primitive concepts to be innate.  This does not, of course, rule out the possibility (offered by the ethology of other species, as Fodor points out) that we also possess some innate primitive concepts.


Concept Identity vs. Concept Similarity

Thursday, February 28th, 2008

In the 2007 Pufendorf lectures, Patricia Churchland said a few things that made me stop and think. One relates to concepts and concept membership. Churchland proposed, following Rosch, that concepts are built around prototypes, that they have a “radial structure”; that concepts have “fuzzy borders (boundaries)” and that concept membership is a “similarity” relationship. I can arrive at a set of similar, but not identical (to use the two hot terms in Fodor’s polemics on concepts) conclusions; but I think the differences are worth elaborating.

By way of intellectual history (mine) background, I have long been troubled by an aporia in what I believe about concepts and concept membership:

A. Concepts have (as Churchland’s slide said) fuzzy borders, and that fuzziness certainly seems to be essential.

On the other hand,

B. I find Fodor’s argument for identity and against similarity to be compelling.

The problem, of course, is that A argues for similarity as the touchstone of concept membership and implies that identity is much too strict to be a useful criterion; whereas B argues that similarity is a meaningless criterion unless there is a preexisting underlying criterion of identity: if similarity requires identity, identity is the fundamental criterion.

It seems odd, however, to argue for a robust notion of identity in the context of the complex non-linear processes of the brain; and just saying “Well, that’s the way it has to be, learn to live with it” is hardly compelling. So, the first issue we have to deal with is where does identity come from? Here’s what I currently think.

It all goes back to a central fact of neuro-epistemology, to wit: the brain has no direct access to the outside world; all access is via transducers–receptors and effectors. I think you mentioned this in one of the lectures. Thence, via Marr, Choe, Maturana & Varela, and von Foerster, I arrive at the following. In the general case, the only thing the brain can reliably identify in the brain-environment system of which it is a component is invariances, that is, invariant states. For a state to be invariant, it must be invariant under some set of operations. The particular set of operations under which a state remains unchanged is, in a real sense, the meaning of the state insofar as the brain is concerned. Nothing else can be known with certainty. Von Foerster, writing at a higher level of generality, uses the term “eigen states” to describe these meta-stable (stable over at least some period of time) states.

Von Foerster’s terminology derives from a result of matrix algebra. An arbitrary square matrix has the characteristic that there are families of “eigenvectors” such that if E is an eigenvector of matrix M, then multiplying E by M yields a vector of the form k times E. In other words, multiplication by M takes certain vectors (its eigenvectors) into themselves up to a multiplicative constant. Von Foerster notes that the mathematics of a dynamic system is such that it has eigen states that the system maps into themselves (they are invariants of a sort); he characterizes eigen states as the way the system is able to “classify” its environment. A key result of von Foerster’s is that the eigen states of such systems are discrete and meta-stable. In the terminology of neural networks, these states are like attractor points (I am eliding some caveats, but the assertion is correct enough for the argument to stand). Like attractor points, they give the system the ability to do pattern completion.
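A small numerical sketch (Python with numpy; the matrix is chosen only for illustration) of the two facts being leaned on here: multiplying an eigenvector by M gives back the same vector up to a scalar, and iterating the (normalized) map drives many different starting states onto the same invariant state.

import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # an illustrative square matrix

# Eigenvectors: M times E equals k times E for the eigenvalue k paired with E.
eigenvalues, eigenvectors = np.linalg.eig(M)
E, k = eigenvectors[:, 0], eigenvalues[0]
assert np.allclose(M @ E, k * E)

# Iterating the normalized map sends many initial states onto the same
# invariant direction -- a crude stand-in for settling into an eigen state.
def settle(state, steps=50):
    for _ in range(steps):
        state = M @ state
        state = state / np.linalg.norm(state)
    return state

print(settle(np.array([1.0, 0.0])))    # converges toward the dominant eigenvector
print(settle(np.array([0.3, 0.9])))    # the same invariant state from a different start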

Self-modifying systems have the diachronic ability to adaptively create (learn) new eigen states. But synchronically eigen states always have the discreteness property. Two eigen states are either identical or different. Similarity is not a characteristic of eigen states. Remind you of Fodor?

Let’s identify a concept with an eigen state. (In certain details, I think this is an oversimplification to the point of incorrectness, but I’ll hold that polemic for another time because it’s not central to this argument.) So, here we are:

Thesis: Concept similarity is at the core of concept membership; there’s no need for concept identity.

Antithesis: Concept identity is at the core of concept membership; similarity is a nonsensical thing to hang concept membership on.

Synthesis: Concepts are eigen states (states defined by sets of operations that preserve an invariant) and as such are unique and have identity conditions. The processes that work to arrive at a particular eigen state may (and probably in the brain generally do) involve completion effects that are undeniably “similarity” effects. So, at one and the same time,

1) Concepts cannot be fuzzy because eigen states are discrete

and

2) Concepts are essentially fuzzy because completion effects are always involved in arriving at them.

If you have some large enough portion of the eigen state associated with a concept, completion effects will fill in the rest and arrive at the unique eigen state (and thus the concept) itself. To the extent that completion effects vary in response to other things going on in the brain, there can be no precise specification of which patterns will or will not complete to a particular concept. This is why the merest ripple on the surface of cat-infested waters is sufficient to cause CAT thoughts and why during an invasion of robot cats from outer space, a host of cat-like creatures emerging from a flying saucer does not cause CAT thoughts.
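A minimal sketch of completion onto discrete stored states (Python with numpy, a toy Hopfield-style network; the stored patterns are invented) illustrating both halves of the synthesis: the attractors themselves are discrete, but which attractor a degraded input falls into is a similarity-driven completion effect.

import numpy as np

# Two stored binary (+1/-1) patterns play the role of discrete eigen states.
patterns = np.array([
    [ 1,  1,  1,  1, -1, -1, -1, -1],   # call this one "CAT"
    [ 1, -1,  1, -1,  1, -1,  1, -1],   # call this one "DOG"
])

# Hebbian weights with the diagonal zeroed.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def complete(state, steps=10):
    state = np.array(state, dtype=float)
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1            # break ties deterministically
    return state.astype(int)

# A degraded cue (two bits flipped from "CAT") completes to the full pattern.
cue = [1, 1, -1, 1, -1, -1, 1, -1]
print(complete(cue))                      # recovers the first stored pattern exactly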

So much for concept similarity versus concept identity.

030718 – Self-Reporting

Friday, July 18th, 2003

030718 – Self-Reporting

Is there any advantage to an organism to be able to report its own internal state to another organism?  For that is one of the things that human beings are able to do.  Is there any advantage to an organism to be able to use language internally without actually producing an utterance?

Winograd’s SHRDLU program had the ability to answer questions about what it was doing.  Many expert system programs have the ability to answer questions about the way they reached their conclusions.  In both cases, the ability to answer questions is implemented separately from the part of the program that “does the work” so to speak.  However, in order to be able to answer questions about its own behavior, the question answering portion of the program must have access to the information required to answer the questions.  That is, the expertise required to perform the task is different from the expertise required to answer questions about the performance of the task.

In order to answer questions about a process that has been completed, there must be a record of, or a way to reconstruct, the steps in the process.  Actually, it is not sufficient simply to be able to reconstruct the steps in the process.  At the very least, there must be some record that enables the organism to identify the process to be reconstructed.

Not all questions posed to SHRDLU require memory.  For example, one can ask SHRDLU, “What is on the red block?”  To answer a question like this, SHRDLU need only observe the current state of its universe and report the requested information.  However, to answer a question like, “Why did you remove the pyramid from the red block?”  SHRDLU must examine the record of its recent actions and the “motivations” for its recent actions to come up with an answer such as, “In order to make room for the blue cylinder.”

Not all questions that require memory require information about motivation as, for example, “When was the blue cylinder placed on the red cube?”
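A minimal sketch (Python; the block-world operations and question routines are invented stand-ins, not SHRDLU’s actual code) of the architectural point: the part that does the work and the part that answers questions are separate, and the question answerer can handle ‘why’ and ‘when’ only because the doer leaves behind a record of its actions and the goals behind them.

class BlockWorld:
    def __init__(self):
        self.on = {"pyramid": "red block"}   # what currently sits on what
        self.history = []                    # record: (step, action, reason)

    # The part that "does the work" also logs what it did and why.
    def move(self, obj, dest, reason):
        self.history.append((len(self.history), f"move {obj} to {dest}", reason))
        self.on[obj] = dest

    # Question answering is a separate capability that reads the record.
    def what_is_on(self, support):
        return [o for o, s in self.on.items() if s == support]   # no memory needed

    def why(self, action_fragment):
        for _, action, reason in reversed(self.history):
            if action_fragment in action:
                return f"In order to {reason}."
        return "I don't remember doing that."

    def when(self, action_fragment):
        for step, action, _ in self.history:
            if action_fragment in action:
                return f"At step {step}."
        return "Never."

world = BlockWorld()
world.move("pyramid", "table", reason="make room for the blue cylinder")
world.move("blue cylinder", "red block", reason="satisfy the user's request")

print(world.what_is_on("red block"))      # current state: no record required
print(world.why("move pyramid"))          # needs the recorded motivation
print(world.when("move blue cylinder"))   # needs the record, but not the motivation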

Is SHRDLU self-aware?  I don’t think anyone would say so.  Is an expert system that can answer questions about its reasoning self-aware?  I don’t think anyone would say so.  Still, the fact remains that it is possible to perform a task without being able to answer questions about the way the task was performed.  Answering questions is an entirely different task.