
081225 – Why Disjunctions Can Figure in Laws

Thursday, December 25th, 2008


Why Disjunctions Can Figure in Laws

Loewer 2007a[1] argues for a Non-Reductive Physicalism (NRP) as contradistinguished from plain old Reductive Physicalism (RP).  This is something of a family quarrel to begin with because both sides seem to agree that dualism is out and mentation supervenes on a purely physical substrate.

In particular, Loewer considers and dismisses a “line of thought that threatens to show that NRP is unstable” and thus not a coherent alternative to RP.

Suppose that M is a mental property and occurs in some law, say M → R (the law may be a ceteris paribus law) and so is a G-property.  Suppose that physicalism is true.  Won’t there be some … physical property Q constructed out of physical genuine properties—i.e. a disjunction of physical properties or configurations of physical properties—that is coextensive with M in all physically possible worlds?  But then won’t it be the case that Q → R is also a law?  If so, it follows that Q is a G-property since it figures in a law.  If this is correct, then NRP comes very close to collapsing into RP since either M = Q or M* = Q where M* is the property M restricted to the class of physically possible worlds.  In the first case RP holds; in the second case it is close enough to make the difference between RP and NRP look awfully trivial.

Loewer offers two counterarguments.  The first is one that he dismisses out of hand because, he says, it looks “a lot like ‘declaring victory and withdrawing’”:

If any construct out of physical properties that is coextensive (or coextensive in every physically possible world) with a G-property counts as a P-property then indeed NRP and RP come to much the same.

The problem, he says, is that

considerations involving functionalism and externalism show that Q will have an enormously complex characterization in terms of physics and plausibly has no characterization in terms of any of the special sciences.

In effect, Loewer invokes Occam’s razor: simpler is better; don’t complicate things unnecessarily.  In so doing, Loewer is following Fodor’s argument that complex (and sometimes potentially unbounded) disjunctions of physical properties are not natural kinds.  As Loewer summarizes Fodor, the problem is that the disjunctive properties at issue need not be kinds, and

disjunctions of physically heterogeneous properties are not kinds of physics.  [Fodor] seems to mean by this that the various [properties] can be realized by various configurations of physical entities that are made from different materials.

On the other hand, although the disjunction of the realizers of F may be physically heterogeneous (and so not a kind of physics) they may be psychologically homogeneous so that F is a kind of psychology. If F is a functional natural kind of psychology its instances are psychologically homogeneous since they share the same psychological role.

Although Fodor doesn’t say this he might add that psychological properties and laws may obtain even in worlds whose fundamental properties and laws are very different from those of the actual world. In these worlds psychological properties are realized by alien fundamental properties and psychological laws by alien fundamental laws.[2]

Yates 2005[3] analyzes Fodor’s cavil as (I think properly) a question of “gerrymanderedness rather than disjunctiveness or heterogeneity.” (p. 218, original italics).  He proposes that we grant Fodor that gerrymandered disjunctions are not suitable for framing laws.  The crucial point to note now is that disjunctions of the realizers of functional kinds are not gerrymandered.  Why?  Because in order to count as realizers of a given functional property, all the disjuncts must play the causal role that defines it.  This is where Papineau’s [1985] argument comes in.  If special science properties are multiply realizable (and so irreducible), then their realizers must be heterogeneous.  But in that case, something has to explain how all the non-identical realizer properties at, say, the physical level, share the causal power constitutive of the functional properties at some special science level, say biology.  (p. 219)

The problem of evolutionary selection arises.

It would be miraculous if all the different realizer properties play the same causal roles by coincidence.  Whence a dilemma: either there is an explanation of the otherwise miraculous coincidence, or special science properties are not multiply realizable after all. (p.219)

Is this really an evolutionary problem?  I’m not sure I understand Yates’s argument here.  He talks about ‘projectibility’ just as Loewer does, and I don’t know what that is.  It may be that special science properties are indeed multiply realizable, but that there is something special about whatever realization happened to develop first.  The algorithm doesn’t care how it is realized (implemented), just so long as an implementation of the basis functions is available.

Now, I don’t care whether RP or NRP is the right name to blazon on the banner of Truth, but I do care about making sense of things. Rather than talk about special sciences, let’s talk about algorithms and their implementations.
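
To make that concrete before going further, here is a minimal sketch in Python (all names are my own invention, not anyone’s standard terminology): a single counting algorithm specified entirely in terms of a two-function basis, increment and value, together with two deliberately dissimilar realizations of that basis.  The algorithm neither knows nor cares which realization it is handed.

class TallyOnPaper:
    """Realizes the basis as marks on a (simulated) sheet of paper."""
    def __init__(self):
        self.marks = ""
    def increment(self):
        self.marks += "|"
    def value(self):
        return len(self.marks)

class TallyInRegister:
    """Realizes the same basis as an integer register."""
    def __init__(self):
        self.register = 0
    def increment(self):
        self.register += 1
    def value(self):
        return self.register

def count_items(items, tally):
    # The algorithm proper: expressed only in terms of the basis
    # functions increment() and value(); nothing in it mentions
    # marks, registers, or any other physical realization.
    for _ in items:
        tally.increment()
    return tally.value()

assert count_items("abc", TallyOnPaper()) == count_items("abc", TallyInRegister()) == 3

Swap in a third realization of the basis and count_items runs unchanged; that indifference is all I will mean by multiple realizability in what follows.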

==================== Notes ===================

[1] Loewer, Barry.  2007a.  “Mental Causation, or Something Near Enough.”  In Brian McLaughlin and Jonathan Cohen (eds.), Contemporary Debates in Philosophy of Mind.  Blackwell.

[2] Loewer, Barry.  2007b.  “Why Is There Anything Except Physics?”  To appear in a Synthese special issue on Fodor (2007).

[3] Yates, David.  2005.  The Causal Argument for Physicalism.  Doctoral dissertation, King’s College London.

081204

Thursday, December 4th, 2008


Suppose that what it means to be a particular individual in possession of a particular individual mind is to be a particular individual physical device that implements a particular individual algorithm.

What an Algorithm Is
Searle 2004 (p. 67) describes an algorithm as “a method for solving a problem by going through a precise series of steps.  The steps must be finite in number, and if carried out correctly, they guarantee a solution to the problem.”

By talking about solving a problem, the description takes for granted that in order to create or recognize an algorithm we must have 1) a problem, 2) a way to describe it, and 3) a way to describe its solution.  Within this formulation, an algorithm can only have so-called derived intentionality, viz. before an algorithm can come into existence somebody has to have a purpose that determines what the algorithm is to be about.  As Searle points out, a calculator (an instantiation of a calculation algorithm) doesn’t do calculations.  What it does is change states in a deterministic way in response to physical movements of some of its components (key presses) and alter its appearance (displaying results) as a side effect of its internal state changes.  A calculator is usable as a calculator by human beings only because human beings assign calculation-related meanings to key presses and display patterns.  The meaning does not reside in the calculator; it resides in the user.  Following this line of thought, Searle concludes that neither syntax nor semantics is present in an algorithm.  This, he says, is because syntax and semantics are constructs present only in conscious human beings.
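
A toy sketch may make Searle’s point vivid (the sketch is mine, not his): the “calculator” below is nothing but a deterministic map from (state, key press) to (new state, display pattern).  Nothing in it is about sums; that the final display is a sum is an interpretation supplied by the user.

def step(state, key):
    # Deterministic transition: (state, key press) -> (state, display).
    # The display is just a string of glyphs; its arithmetical
    # meaning resides in whoever reads it.
    total, entry = state
    if key.isdigit():
        entry = entry * 10 + int(key)
        return (total, entry), str(entry)
    if key == "+":
        return (total + entry, 0), str(total + entry)
    if key == "=":
        return (0, 0), str(total + entry)
    return state, ""

state, display = (0, 0), ""
for key in "12+34=":
    state, display = step(state, key)
print(display)  # "46" -- a pattern of glyphs; that it is a sum is up to us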

These conclusions are warranted under the given definition of what constitutes an algorithm.  However, I will propose an alternative definition that I will argue allows for something we can still reasonably call an algorithm to have syntax and, in its instantiations, semantics without having to provide either from an external source.

I propose to consider algorithms as being about the implementation (realization) of behaviors in time.  In a sense, then, an algorithm is an abstraction that specifies a particular deterministic computer program.  More formally, an algorithm is a finite series of instructions[1] (steps) that comprise a behavior (a generalization of the idea of performing a task).  Algorithms are constructed on the basis of a set of primitive functions (the basis functions) that, taken together, specify the operation of an abstract (virtual) machine (computer).  It is not possible to specify an algorithm without specifying the set of primitive functions in terms of which the algorithm is expressed, although informally the set of primitives is simply taken to contain whatever functions the specification of a particular algorithm requires.  The abstract machine implicit in the set of primitive functions can be described in terms of its computational power (the class of calculations it is capable of).  The two most common (and for our purposes most relevant) levels of computational power are 1) computations that are possible for finite state machines and 2) computations that are possible for (unrestricted) Turing machines.  The former is formally equivalent to (has the same computational power as) the subset of Turing machines having only a finite tape.

It is, in general, tedious in the extreme to express algorithms in terms of Turing machine functions.  And it is also tedious in many cases to make explicit the set of primitive functions that provide the basis for a particular algorithm or set of algorithms.  For that reason (and, one assumes, not incidentally because people thought up the list-of-instructions concept long before Turing thought up the machine formalism that bears his name) the specification of most algorithms leaves the specification of the underlying set of primitive functions implicit.  That works pretty well and we all learn (maybe now I have to say used to learn) addition, subtraction, multiplication, division, and square root algorithms in elementary school arithmetic, without belaboring or worrying overmuch about the specifics of the underlying primitive functions, e.g., the fact that the set of functions on which the addition algorithm depends includes a function that enables one to write a sort of superscript number above and to the left of a specified decimal position in the topmost number of a column of numbers to be added (the “carry”) and a function that enables one to read it back to oneself at a later time, and so on.
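
For concreteness, here is the schoolbook addition algorithm with those usually-implicit primitives dragged into the open.  This is a sketch; the names write_carry and read_carry are mine, not standard.

scratchpad = {}                      # the sheet of paper

def write_carry(column, digit):      # write the little superscript "1"
    scratchpad[column] = digit

def read_carry(column):              # read it back at a later time
    return scratchpad.get(column, 0)

def add(a_digits, b_digits):
    # Column-by-column addition of two equal-length digit lists
    # (most significant digit first), rightmost column first.
    scratchpad.clear()               # start with a clean sheet
    result = []
    for col in range(len(a_digits) - 1, -1, -1):
        s = a_digits[col] + b_digits[col] + read_carry(col)
        result.append(s % 10)
        if s >= 10:
            write_carry(col - 1, 1)
    if read_carry(-1):
        result.append(1)
    return result[::-1]

print(add([4, 7], [3, 8]))  # [8, 5], i.e., 47 + 38 = 85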

Special Considerations re: Primitive Functions
Without attempting a rigorous exposition, we may take a mathematical function to be a (deterministic) relation that uniquely associates each element of its domain (the set of possible input values) with an element of its range (the set of possible output values)—in other words, an input-output specification.  By definition, mathematical functions do not have side effects.  This has all sorts of good consequences for proving theorems and doing mathematical logic.  However, for the specification of algorithms useful for talking about how the brain works, we need the computer science definition of a function, which generalizes the mathematical definition to include processes that have side effects.
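
The contrast is easy to exhibit in code (a sketch of my own):

import time

def square(x):
    # A mathematical function: the output is fixed by the input,
    # and nothing outside the function is touched.
    return x * x

log = []

def square_and_log(x):
    # A computer-science "function": the same input-output relation,
    # plus a side effect on state external to the function itself.
    log.append((time.time(), x))
    return x * x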

Side Effects and Referential Opacity

A side effect is an event or process that affects and/or is affected by the state of the environment external to the (implementation of the) function, and that occurs or commences, conceptually, within the interval between the arrival of the function’s input and the corresponding return of its output.  The most common use of computer functions with side effects is to obtain inputs from or deliver outputs to the external environment.[2]

Function side effects are problematical for mathematical theories of computation because they introduce the unconstrained external world into an otherwise nicely circumscribed theoretical construct.  The formal response of computer science has been to expand the boundaries of the theoretical construct to include the possibility of a limited set of side effects explicitly in the domain and range of the function.  The drawback this creates is that the more of the external world the domain and range include, the more difficult it is to prove program correctness formally.  Nonetheless, functions with side effects are an integral part of the standard description of a Turing machine: specifically, the operations (functions) that read from and write to the machine’s (infinitely extensible) tape.[3]
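
One way to picture that formal move (my own toy framing, not a standard formalism): fold the relevant slice of the environment—here, a Turing machine’s tape—into the function’s domain and range, so that an “impure” write becomes a pure map from (input, environment) to (output, new environment).

def write_cell(tape, position, symbol):
    # Purified tape write: the environment (the tape) is passed in
    # explicitly and a new tape is returned; nothing outside the
    # function's domain and range is touched.
    new_tape = dict(tape)
    new_tape[position] = symbol
    return new_tape

def read_cell(tape, position):
    # Purified tape read: likewise takes the environment as an argument.
    return tape.get(position, "0")

tape0 = {}
tape1 = write_cell(tape0, 5, "1")
assert read_cell(tape1, 5) == "1" and read_cell(tape0, 5) == "0"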

At the most fundamental level, the issues raised by side effects and referential opacity relate to the (theoretically, at least) arbitrary boundary between a system selected for analysis and the external environment in which it is embedded.[4]  Because a theory of the mind must, I think, be about the brain in the context of an (external) environment that is affected by and affects brains, we need to be able to draw a boundary between an entity in possession of a mind and the environment in which it operates.[5]  We thus need to allow for side effects in a theory of the mind, simply in recognition of the fact that not everything that happens in a mind originates in processes within the mind.  There is an outside world, and it matters.[6]

Side effects show up in the realization (instantiation, physical implementation) of an algorithm in two ways.  1) The set of basis functions for an algorithmic system may include functions that explicitly query and manipulate the physical environment.  2) The physical processes that implement the algorithm have real physical side effects that are above and beyond (outside of) the abstract description of the algorithm—outside, even, the abstractly specified side effects that may be required in the basis functions.  For example, a computer needs a power source that meets certain specifications and will operate only under a specified range of environmental conditions.

When analyzing or describing the behavior of a particular physical realization of an algorithm, we generally concentrate on side effects of the first kind and take for granted that the physical side effects of the second kind—those that ground or enable the basis functions themselves—are in place.

Significance of Side Effects
[Didn’t I say what’s in this paragraph earlier?]
The introduction of functions with open-ended side effects has the effect of vastly complicating (often to the point of practical impossibility) any complete formal analysis of algorithmic behavior.  This is because a formal analysis can only take place within a formal description of a closed system (one in which all events occur deterministically through fully specified rules and circumstances).  To the extent that side effects bring aspects of the external world into a system, a formal description of the system must comprehend a formal description of at least those aspects of the external world.  In effect, the less constrained the range of allowable side effects, the broader must be the scope of the formal description of the system.

Sequencing and Counterfactuals
Philosophers appeal to the idea of counterfactuals in order to deal with the fact that at the macro physical (as opposed to the quantum physical) level events only happen one way, although our intuitions tell us that if things had been sufficiently different (a counterfactual condition) events would have turned out differently.  In planning for the future, where there are no facts yet, we just use conditionals (e.g., If the creek don’t rise and the sky don’t fall, I’ll visit you next Thursday).  Computer programming is a quintessential case of formal planning for the future.  The programmer’s task is to devise an algorithm that will accomplish whatever it is supposed to accomplish under constrained but as yet undetermined conditions.

Sequencing and counterfactuals are at the heart of causal descriptions.  Sequencing and conditionals are at the heart of algorithmic descriptions.  Every entry in the state table of a Turing machine consists in a conditional (e.g., if the symbol under the read/write head on the tape is “1”), an action (e.g., write “0” on the tape and move the read/write head to the left), and a sequencing directive (e.g., go to state 275).  In the abstract, sequencing leaves the question of causality open.  If B must follow A, does that mean that A must cause B, or is it sufficient that A be correlated with B?  Does it matter?  In an algorithmic specification, the answer is no.  In fact, it is not even a meaningful question, because sequencing is primitive (and thus opaque).  So causality is not an issue in an algorithmic specification.
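
A sketch of such a state table in Python (the first entry encodes the example just given; the other entries are filler of my own so that the fragment runs):

table = {
    (274, "1"): ("0", -1, 275),  # conditional; action (write "0", move left); go to 275
    (274, "0"): ("1", +1, 274),  # filler entry
    (275, "0"): ("0", +1, 275),  # filler entry
}

def step(state, head, tape):
    symbol = tape.get(head, "0")            # the conditional: what is under the head?
    write, move, next_state = table[(state, symbol)]
    tape[head] = write                      # the action
    return next_state, head + move, tape    # the sequencing directive, taken as primitive

state, head, tape = 274, 0, {0: "1"}
state, head, tape = step(state, head, tape)
print(state, tape)  # 275 {0: '0'}

Note that the sequencing directive is just a table lookup of the next state; nothing in the formalism says, or needs to say, whether being in state 274 causes the transition to state 275.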

It feels like we have two choices with respect to causality.  We can stumble ahead in spite of all that Hume warned us about, and we will fall into the mire he described.  Alternatively, we can take the view that informs physics, viz. that systems evolve over time according to empirically reliable formulas.  On this view, the attribution of causality requires drawing arbitrary physical and temporal boundaries in the midst of an evolving process to pick out objects of interest at particular points in time.  We then examine the state equations of the system in a small neighborhood near each selected point in time and assign causality to the infinitesimally prior state.

In effect, relativistic limits and the flow of time delimit the causes of everything.  If there is anything that counts as Hume’s necessary connexion, it is to be found in the empirically based physical theory that the state of the universe everywhere in a light cone of radius c times delta-t centered on the point P at time T is what determines what will happen at P at time T plus delta-t.  The instant that one focuses attention on a proper subset of that light cone as a “cause”, necessary connexion becomes subject to ceteris paribus conditions.

If we want to say that some proper subset of the light cone centered on P at time T caused what happened at P at time T plus delta-t, we must recognize that this is a counterfactual that is falsifiable.  Such an assertion requires a ceteris paribus qualification if we are to accept it as a true statement.

====================== Notes ======================

[1] The insistence here is on the finiteness of the series of instructions, not the finiteness of the time necessary to complete the task.  Some algorithms provably finish in finite time, e.g., the well-known algorithms taught in elementary school for adding two finite integers together.  Other algorithms may continue indefinitely, e.g., the square root algorithm, which, in some cases—as when applied to the integer 2—will continue to produce additional digits indefinitely.  Of course, constraints of finite physical resources and finite time will prevent any physical instantiation of an algorithm from continuing forever.
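
A sketch of the second case (the digit-by-digit square-root method, in Python; the details are mine): the instructions are finite, but for an input like 2 the generator will go on yielding digits for as long as anything keeps asking.

def sqrt_digits(n):
    # Digit-by-digit square root of a positive integer: finitely many
    # instructions, but (for non-squares like 2) an unbounded run.
    pairs = []
    while n:
        pairs.append(n % 100)
        n //= 100
    pairs.reverse()
    root, remainder = 0, 0
    while True:
        remainder = remainder * 100 + (pairs.pop(0) if pairs else 0)
        digit = 0
        while (20 * root + digit + 1) * (digit + 1) <= remainder:
            digit += 1
        remainder -= (20 * root + digit) * digit
        root = root * 10 + digit
        yield digit

g = sqrt_digits(2)
print([next(g) for _ in range(6)])  # [1, 4, 1, 4, 2, 1] -- and so on, indefinitely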

[2] Typical examples (sketched in code after the list):

·  A function whose outputs are deterministic, but are not completely determined by its inputs, e.g., a function whose side effect is to provide the current date and time as its output.  Such a function is said to be referentially opaque.

·  A function that requests data from a user.  The data returned by such a function are determined by whatever the user enters; but such a function strains the mathematical definition of a function because an equivalent input-output specification cannot be pre-specified—the input to the function is a request for data, and the output is whatever the user enters.  At best, one can pre-specify the range of the function by limiting what the user is allowed to enter.

·  A function that delivers its input (the value(s) provided to the function) to some kind of display.  Such a function affects the state of the external environment and may ultimately affect the internal environment of the program if and when something in the external environment reacts to that change of state in a way detectable by the program.  Nonetheless, the output of such a function (the value(s) it returns) is (are) problematical.  Usually such a function simply returns a value indicating success (or failure) insofar as that can be determined by the computer; but strictly (mathematically) speaking, the result of the function is the sum total of the effects on the computer of changes in its environment that occur as a result of the information it displays.
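
In code, minimal stand-ins for the three examples above (Python’s input() and print() do duty for the user and the display):

import datetime

def now():
    # Referentially opaque: the same call yields different outputs
    # on different occasions; the side effect is a query of the clock.
    return datetime.datetime.now()

def ask_for_number(prompt):
    # The output is determined by whatever the user enters,
    # not by the argument alone.
    return int(input(prompt))

def show(value):
    # Acts on the external environment; returns only a success
    # indicator, not its real "result".
    print(value)
    return True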

[3] Strictly speaking, a function that simply stores its input value for later retrieval and a complementary function that retrieves and provides as its output a value thus stored are both functions with side effects.  Writing acts upon the environment.  Reading queries the state of the environment.  By convention, the operations of read and write functions are specified to take place deterministically within the internal environment of the abstract machine, and such side effects are simple enough not to be considered problematical.

[4] Arguably, the universe and everything in it is just one big system, and any subdivision of that system is ultimately arbitrary.  That said, Algorithmic Information Theory, briefly described a little later on, provides a way of assessing the relative complexity of one subdivision vis-à-vis another.

[5] As far as I can tell, not being myself a substance dualist, there can’t be any real difference between what goes on in the brain and what goes on in the mind.  If I think of one, I’ll go back and change my terminology accordingly.

[6] Notwithstanding the skeptical hypothesis that you are really just a brain in a vat and all of your experience is illusory.  For an overview of the literature on brains in vats, see Brueckner 2004.