Fodor (1998, p.15), presenting the (his) RTM view of concepts, says, "I can't afford to agree that the content of the concept H2O is different from the content of the concept WATER." At least in part, this is a consequence of his assertion that "Concepts are public; they're the sorts of things that lots of people can, and do, share" (p.28, italics in original).
If the content of concepts is public (I, for one, have no problem with this view), then nobody and everybody is responsible for them and their denoters have to be learned. It's easy enough to argue, following Eric Baum (2004, What Is Thought?), that our genome builds us in such a way that we all acquire categories in pretty much the same way. I'm not sure why I insisted on "categories" in the previous sentence rather than sticking with "concepts." I guess it's because I have already done a lot of thinking about concepts and I'm not sure whether I'm willing to grant concepthood to categories.
A priori, there must be a set of parameterizable functions that are built in by the genome. When I talk about parameterization here, I'm talking about learning processes; when I talk about parameterizing models, I'm talking about the inputs to a particular content model at a moment in time. The former takes place during concept development; the latter during concept utilization. Taking such a set of parameterizable functions as a basis, content models can (only) be constructed from these components. The genome thus ensures that, ceteris paribus (over a reasonable range of normal human ontogenetic experience), the structure of the content model(s) epigenetically constructed will tend to converge (dare I say they will be the same up to some threshold of difference?).
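Since the two senses of parameterization are doing real work here, a toy sketch may make the distinction concrete. The class, its method names, and the averaging rule below are placeholders of my own devising, not anything Fodor or Baum proposes; all it illustrates is that one kind of parameterization happens once, during development, and the other happens on every occasion of use.

```python
# A toy sketch, entirely my own, of the two senses of "parameterization"
# distinguished above: parameters fixed during concept development (learning)
# versus inputs supplied during concept utilization (a moment of use).
from typing import Sequence


class ContentModel:
    def __init__(self) -> None:
        # Development-time parameters, to be settled by experience.
        self.weights: list[float] = []

    def develop(self, experiences: Sequence[Sequence[float]]) -> None:
        """Concept development: learning fixes the model's parameters."""
        n, dims = len(experiences), len(experiences[0])
        # Crude prototype rule (a placeholder): average the experienced features.
        self.weights = [sum(e[i] for e in experiences) / n for i in range(dims)]

    def utilize(self, inputs: Sequence[float]) -> float:
        """Concept utilization: these inputs parameterize one application
        of the already-developed model."""
        return sum(w * x for w, x in zip(self.weights, inputs))
```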
The convergence we expect to find looks like this: If the things that are modeled by a particular content model a in creature A are pretty much the same things that are modeled by a particular content model b in creature B, and if that is true also for particular content models c, d, e, etc. in C, D, E, etc., then those content models are the content model of a concept whose satisfaction conditions include (pretty much) those things. Moreover, the human genome is sufficiently restrictive to ensure that in the vast majority of cases (enough to ensure the functioning of language, anyway) we can take these models to implement (represent?) by definition the same concept. That is, sameness of concepts across individuals arises from the identity of the (shared) facilities available to construct them and the identity of the (shared, lower-level) processes that construct them out of (things that turn out to be) invariants these processes extract from the real world.
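For what it's worth, the criterion can be stated as a one-liner. The overlap measure and the threshold below are arbitrary stand-ins of my own for "pretty much the same things" and "up to some threshold of difference"; nothing in the argument depends on this particular choice.

```python
# A toy operationalization, again my own, of the convergence criterion just
# stated: two creatures' content models implement the same concept when the
# things they model are "pretty much the same things". The Jaccard overlap
# and the 0.9 threshold are placeholders, not part of the argument.
def same_concept(things_modeled_by_a: set, things_modeled_by_b: set,
                 threshold: float = 0.9) -> bool:
    union = things_modeled_by_a | things_modeled_by_b
    if not union:
        return True
    shared = things_modeled_by_a & things_modeled_by_b
    return len(shared) / len(union) >= threshold
```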
DOG means dog because the (already strongly constrained) models the human brain automatically constructs when presented with dogs are such that across individuals the models will use identical processes in identical ways (process identity is obviously level-sensitive; I can't possibly argue that the neural circuits are isomorphic across individuals, but I can argue that the brain is sufficiently limited in the ways it can operate that there is, at some level of explanation, only one way a dog model can be implemented).
This is similar to the poverty-of-the-stimulus argument for much of language being innate.
I think we're almost there now, but it occurs to me that I have built this on the identity of things, which may itself be tendentious. There's no problem with saying a particular thing is identical to itself. But that's not where the problem arises. How do we know what a thing is? A thing is presumably something that satisfies the concept THING. But careful examination of the reasoning above shows that I have assumed some kind of standardized figure-ground system that reliably identifies the things in an environment. Now where are we? Suppose the things are dogs. Do we have to suppose that we know what dogs are?
Let's try to save this by substituting environments for things and then talking about world models. That is, if the environment that is modeled by a particular world model a in creature A is pretty much the same environment that is modeled by a particular world model b in creature B, and if that is true also for particular world models c, d, e, etc. in C, D, E, etc., then those world models are the world model of a world whose satisfaction conditions include (pretty much) those environments. Moreover, the human genome is sufficiently restrictive to ensure that in the vast majority of cases (enough to ensure the identification of things, anyway) we can take these models to be (implement? represent?) by definition the same world model.
As a practical matter, this does not seem to be a problem for human beings. We learn early how to parse the environment into stable categories that we share with others in the same environment. Somewhere in this process, we acquire thingness. Thingness is necessary for reference, for intentionality, for aboutness. I don't know, and I don't think it makes much of a difference, whether thingness is innate or whether (as I suspect) acquiring it requires postnatal interaction with the environment as part of the brain's boot process.
Fodor (1998, p.27) and the Representational Theory of Mind (RTM) crowd have a rather similar way around this: "[A]ll versions of RTM hold that if a concept belongs to the primitive basis from which complex mental representations are constructed, it must ipso facto be unlearned." This is actually several assertions. The most important one from my point of view is:
There are innate (unlearned) concepts.
I take it that my use of the word "innate" here will seem comfortably untendentious when I tell you I am explicitly ruling out the possibility that unlearned concepts are injected into us by invisible aliens when we are small children. The only worry I have about innate concepts is that, like Baum, I suspect that in reality the members of the set of such innate concepts are far removed from the concepts traditionally paraded as examples of concepts; that is, I don't think COW is innate any more than KOMODO DRAGON is. (Baum doesn't talk much about concepts per se, but his central position is that everything that's innate is in our DNA, and our DNA has neither room nor reason to encode any but the most primitive and productive concepts.) Fodor is coy about COW and HORSE, but he counterdistinguishes the status of COW from the status of BROWN COW, which could be learned by being assembled from the previously mastered concepts BROWN and COW.
I don't think Fodor really needs COW to be innate. I think the problem is that he doesn't want it to have constituents. I sympathize. I don't want it to have constituents. But making COW innate is not the only alternative. All that is needed is a mechanism that allows cows in the world to create a new primitive COW that is (by my argument above) the same primitive COW that Little Boy Blue has, and indeed the same primitive that most everybody else familiar with cows has. In other words, what I have proposed is a mechanism that enables concepts to be public, shareable, primitive, and learnable. I haven't got a good story about how one could be familiar with cows and not have the same concept COW as most everybody else. Maybe if one's familiarity with cows were always in the context of partially obscuring bushes, one might come to acquire a concept COW that meant bushes partially obscuring a cowlike animal. But if that were the case, I'd expect that same COW concept to be created in others familiar with cows in the same context.
The rest of the story is that this way of making COW primitive but not innate requires reexamination of the assertion that there are innate concepts. It looks like the things I am postulating to be innate are not properly concepts, but rather concept-building processes. So the correct statement is:
There are innate (unlearned) concept-building processes that create primitive concepts. I'd be willing to buy the so-called universals of language as a special case of this.
It will work, I think, because the putative processes exist prior to concepts. So we still have primitive concepts and non-primitive concepts, in such a way as to keep RTM in business for a while longer. And we can build a robust notion of concept identity on the identity of primitive concepts without requiring all primitive concepts to be innate. This does not, of course, rule out the possibility (offered by the ethology of other species, as Fodor points out) that we also possess some innate primitive concepts.
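To make the builder-versus-built distinction vivid, here is a deliberately crude sketch. The intersection-of-invariants rule is mine alone and merely stands in for whatever the brain actually does; the point it illustrates is only that the process is innate and unlearned, while the primitive it returns (COW, say) is learned, opaque, and the same across learners exposed to relevantly similar exemplars.

```python
# Hedged sketch of the closing claim: what is innate is an unlearned
# concept-building process; the primitive concept it returns is learned,
# has no constituents visible to thought, and comes out the same across
# individuals fed relevantly similar exemplars. The intersection rule is
# a stand-in of my own, not a real proposal about cognition.
from typing import FrozenSet, Iterable


def build_primitive_concept(exemplars: Iterable[Iterable[str]]) -> FrozenSet[str]:
    """Innate process: extract whatever invariants the exemplars share and
    package them as a new, opaque primitive."""
    feature_sets = [set(e) for e in exemplars]
    return frozenset(set.intersection(*feature_sets))


# Two learners exposed to (roughly) the same cows end up with the same primitive.
a_cow = build_primitive_concept([{"four-legged", "horned", "moos"},
                                 {"four-legged", "moos", "spotted"}])
b_cow = build_primitive_concept([{"four-legged", "moos", "brown"},
                                 {"four-legged", "moos", "horned"}])
assert a_cow == b_cow == frozenset({"four-legged", "moos"})
```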