030105 Wants
One of the most central and most refractory problems for any theoretical model of human behavior is the problem of wants. What is a want? What makes this a difficult problem is that everybody knows what it means to want something. But from a modeling standpoint, what does it mean? Wanting is fundamental. Can there even be behavior without wants? I think not. Can non-human animals be said to have wants? I think so.
That being the case, what is different (if anything) about human wants? Wants are in many cases related to biological needs, e.g., food, water, excretion of wastes. Wants are also associated with biological imperatives that fall short of being needs (where a need must be met or the organism will perish). The only biological imperative I can think of at the moment is sex, without which an organism will perish without leaving offspring.
Given that there is no Cartesian observer or meaner in the brain, the question of wants becomes even more important. Dennett (1991) talks about some kind of system to determine what to think about next. Jumping off from his analysis, it seems like evolution has created an on-idle loop that thinks about things whenever there's nothing urgent to deal with at the moment. The evolutionary advantage this confers [I thought there was a word con-something that would work there, but I couldn't think of it at first. Eventually, I found it, and there it is.] is that idle-time thinking may result in elaborating strategies that make the organism fitter when urgent situations do occur. That is, idle-time thinking is sort of like ongoing fire drills, or contingency planning. You never know when having thought about something or learned something will come in handy.
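Just to make the idea concrete for myself, here is a minimal sketch of what such an on-idle loop might look like. The task names, the ponder() step, and the plan cache are all invented for illustration; this is only the fire-drill idea in code, not anything Dennett proposes.

```python
import random
from collections import deque

# A purely illustrative "on-idle" control loop: deal with urgent items when
# they exist, otherwise spend the cycle rehearsing hypothetical situations.
# The task names and the plan cache are invented for this sketch.

urgent_tasks = deque()        # things that must be dealt with right now
contingency_plans = {}        # situation -> rehearsed response

HYPOTHETICALS = ["predator nearby", "food scarce", "shelter flooded"]

def handle(task):
    print(f"urgent: dealing with {task!r}")

def ponder():
    """Idle-time thinking: pick a hypothetical and work out a response."""
    situation = random.choice(HYPOTHETICALS)
    contingency_plans[situation] = f"rehearsed response to {situation!r}"
    print(f"idle: {contingency_plans[situation]}")

def control_loop(cycles=6):
    for _ in range(cycles):
        if urgent_tasks:
            handle(urgent_tasks.popleft())
        else:
            ponder()              # nothing urgent: run the fire drill

urgent_tasks.append("loud noise behind me")
control_loop()
```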
Still, wanting is problematical.
A lot of AI sidesteps the problem. Programs that are designed to understand and paraphrase text want to understand and paraphrase text because that is what they are designed and programmed to do. Such programs do not produce as output, "I'm tired of this work; let's go out, have a few beers, and talk about life" (unless, of course, that is a paraphrase of some corpus of input text).
So, maybe it makes sense to try to figure out what we want AI devices to want. Self-preservation is good. (Oops, now we hit one of the problems Asimov's Laws of Robotics address: we don't want AI entities to preserve themselves at the expense of allowing humans to come to harm, although presumably we don't mind if they inveigle themselves into our affections so we are unwilling / unlikely / not disposed to turn them off.)
At least self-preservation is good in a Mars rover. It may not be good in a military robot, although military robots are, and presumably will continue to be, expensive, so we don't want them to risk their existence casually.
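If wants could be represented as weighted drives, the rover-versus-military-robot contrast might look something like the toy sketch below. The drive names, weights, and scoring rule are my own assumptions, invented only to show how a self-preservation weight could change what an agent "wants" to do.

```python
# A toy model of "wants" as weighted drives that score candidate actions.
# Drive names, weights, and the scoring rule are assumptions for illustration,
# not a claim about how rovers or military robots are actually programmed.

def score(risk_to_self, mission_value, weights):
    """Higher score means the action is more 'wanted'; risk to self counts
    against it in proportion to the self-preservation weight."""
    return weights["mission"] * mission_value - weights["self_preservation"] * risk_to_self

mars_rover     = {"self_preservation": 0.9, "mission": 0.5}
military_robot = {"self_preservation": 0.4, "mission": 0.9}

# One risky but valuable candidate action, e.g. crossing unstable ground.
risk, value = 0.8, 0.7

for name, weights in (("Mars rover", mars_rover), ("military robot", military_robot)):
    s = score(risk, value, weights)
    print(f"{name}: {s:+.2f} -> {'worth the risk' if s > 0 else 'not worth the risk'}")
```

With these made-up numbers the rover declines the risky action and the military robot accepts it, which is the casual-versus-careful contrast in miniature.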
Is fear what happens when the Danger-let's-get-the-hell-out-of-here subsystem is screaming at the top of its lungs and we are not getting the hell out of there?
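Here is one toy way to cash out that question, on the assumption (mine, purely for illustration) that fear is the gap between how loudly the danger subsystem is screaming and how much fleeing is actually going on.

```python
# Toy reading of the question above: call "fear" the part of the danger
# subsystem's alarm that the rest of the agent is not acting on.
# Both quantities, and the idea of measuring the gap, are assumptions.

def fear_level(alarm: float, fleeing: float) -> float:
    """alarm and fleeing are in [0, 1]; fear is the unheeded portion of the alarm."""
    return max(0.0, alarm - fleeing)

print(fear_level(alarm=0.9, fleeing=0.1))  # screaming but frozen in place: 0.8
print(fear_level(alarm=0.9, fleeing=0.9))  # screaming and already running: 0.0
```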
In our universe, for an organism to exist, it must be the offspring of a previous organism. This trivial fact is called Evolution and much is made of it. Although it is incorrect to attribute volition to Evolution, it does not do violence to reality to assert that Evolution is the name we give to the continued existence of things that have been able to reproduce. Moreover, observation teaches that the more complex such things are, the more complex are the processes through which those things reproduce.
It does not make much sense to say that a bacterium or a virus wants to reproduce, although it does reproduce when conditions are favorable. For that matter, it doesn't make much sense to say that a bacterium or a virus wants to do anything. I guess that means we think of wanting as something that we are aware of: something that rises to the level of consciousness, an attribute we do not apply to bacteria or viruses. So here we are with propositional attitudes, which linguistically seem to come in at least two flavors: indicative and subjunctive.