
1. Introduction

In most of what follows, I will speak simply of determinism, rather than of causal determinism. This follows recent philosophical practice of sharply distinguishing views and theories of what causation is from any conclusions about the success or failure of determinism (cf. Earman, 1986; an exception is Mellor 1994). For the most part this disengagement of the two concepts is appropriate. But as we will see later, the notion of cause/effect is not so easily disengaged from much of what matters to us about determinism.

Traditionally determinism has been given various, usually imprecise definitions. This is only problematic if one is investigating determinism in a specific, well-defined theoretical context; but it is important to avoid certain major errors of definition. In order to get started we can begin with a loose and (nearly) all-encompassing definition as follows:

Determinism: The world is governed by (or is under the sway of) determinism if and only if, given a specified way things are at a time t, the way things go thereafter is fixed as a matter of natural law.

The italicized phrases are elements that require further explanation and investigation, in order for us to gain a clear understanding of the concept of determinism.

The roots of the notion of determinism surely lie in a very common philosophical idea: the idea that everything can, in principle, be explained, or that everything that is, has a sufficient reason for being and being as it is, and not otherwise. In other words, the roots of determinism lie in what Leibniz named the Principle of Sufficient Reason. But since precise physical theories began to be formulated with apparently deterministic character, the notion has become separable from these roots. Philosophers of science are frequently interested in the determinism or indeterminism of various theories, without necessarily starting from a view about Leibniz' Principle.

Since the first clear articulations of the concept, there has been a tendency among philosophers to believe in the truth of some sort of determinist doctrine. There has also been a tendency, however, to confuse determinism proper with two related notions: predictability and fate.

Fatalism is the thesis that all events (or in some versions, at least some events) are destined to occur no matter what we do. The source of the guarantee that those events will happen is located in the will of the gods, or their divine foreknowledge, or some intrinsic teleological aspect of the universe, rather than in the unfolding of events under the sway of natural laws or cause-effect relations. Fatalism is therefore clearly separable from determinism, at least to the extent that one can disentangle mystical forces and gods' wills and foreknowledge (about specific matters) from the notion of natural/causal law. Not every metaphysical picture makes this disentanglement possible, of course. But as a general matter, we can imagine that certain things are fated to happen, without this being the result of deterministic natural laws alone; and we can imagine the world being governed by deterministic laws, without anything at all being fated to occur (perhaps because there are no gods, nor mystical/teleological forces deserving the titles fate or destiny, and in particular no intentional determination of the “initial conditions” of the world). In a looser sense, however, it is true that under the assumption of determinism, one might say that given the way things have gone in the past, all future events that will in fact happen are already destined to occur.

Prediction and determinism are also easy to disentangle, barring certain strong theological commitments. As the following famous expression of determinism by Laplace shows, however, the two are also easy to commingle:

We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes. The perfection that the human mind has been able to give to astronomy affords but a feeble outline of such an intelligence. (Laplace 1820)

In the 20th century, Karl Popper (1982) also defined determinism in terms of predictability, in his book The Open Universe.

Laplace probably had God in mind as the powerful intelligence to whose gaze the whole future is open. If not, he should have: 19th and 20th century mathematical studies showed convincingly that neither a finite, nor an infinite but embedded-in-the-world intelligence can have the computing power necessary to predict the actual future, in any world remotely like ours. But even if our aim is only to predict a well-defined subsystem of the world, for a limited period of time, this may be impossible for any reasonable finite agent embedded in the world, as many studies of chaos (sensitive dependence on initial conditions) show. Conversely, certain parts of the world could be highly predictable, in some senses, without the world being deterministic. When it comes to predictability of future events by humans or other finite agents in the world, then, predictability and determinism are simply not logically connected at all.

The equation of “determinism” with “predictability” is therefore a façon de parler that at best makes vivid what is at stake in determinism: our fears about our own status as free agents in the world. In Laplace's story, a sufficiently bright demon who knew how things stood in the world 100 years before my birth could predict every action, every emotion, every belief in the course of my life. Were she then to watch me live through it, she might smile condescendingly, as one who watches a marionette dance to the tugs of strings that it knows nothing about. We can't stand the thought that we are (in some sense) marionettes. Nor does it matter whether any demon (or even God) can, or cares to, actually predict what we will do: the existence of the strings of physical necessity, linked to far-past states of the world and determining our current every move, is what alarms us. Whether such alarm is actually warranted is a question well outside the scope of this article (see Hoefer (2002a), Ismael (2016) and the entries on free will and incompatibilist theories of freedom). But a clear understanding of what determinism is, and how we might be able to decide its truth or falsity, is surely a useful starting point for any attempt to grapple with this issue. We return to the issue of freedom in section 6, Determinism and Human Action, below.

2. Conceptual Issues in Determinism

Recall that we loosely defined causal determinism as follows, with terms in need of clarification italicized:

Determinism: The world is governed by (or is under the sway of) determinism if and only if, given a specified way things are at a time t, the way things go thereafter is fixed as a matter of natural law.

2.1 The World

Why should we start so globally, speaking of the world, with all its myriad events, as deterministic? One might have thought that a focus on individual events is more appropriate: an event E is causally determined if and only if there exists a set of prior events {A, B, C …} that constitute a (jointly) sufficient cause of E. Then if all—or even just most—events E that are our human actions are causally determined, the problem that matters to us, namely the challenge to free will, is in force. Nothing so global as states of the whole world need be invoked, nor even a complete determinism that claims all events to be causally determined.

For a variety of reasons this approach is fraught with problems, and the reasons explain why philosophers of science mostly prefer to drop the word “causal” from their discussions of determinism. Generally, as John Earman quipped (1986), to go this route is to “… seek to explain a vague concept—determinism—in terms of a truly obscure one—causation.” More specifically, neither philosophers' nor laymen's conceptions of events have any correlate in any modern physical theory.[1] The same goes for the notions of cause and sufficient cause. A further problem is posed by the fact that, as is now widely recognized, a set of events {A, B, C …} can only be genuinely sufficient to produce an effect-event if the set includes an open-ended ceteris paribus clause excluding the presence of potential disruptors that could intervene to prevent E. For example, the start of a football game on TV on a normal Saturday afternoon may be sufficient ceteris paribus to launch Ted toward the fridge to grab a beer; but not if a million-ton asteroid is approaching his house at .75c from a few thousand miles away, nor if his phone is about to ring with news of a tragic nature, …, and so on. Bertrand Russell famously argued against the notion of cause along these lines (and others) in 1912, and the situation has not changed. By trying to define causal determination in terms of a set of prior sufficient conditions, we inevitably fall into the mess of an open-ended list of negative conditions required to achieve the desired sufficiency.

Moreover, thinking about how such determination relates to free action, a further problem arises. If the ceteris paribus clause is open-ended, who is to say that it should not include the negation of a potential disruptor corresponding to my freely deciding not to go get the beer? If it does, then we are left saying “When A, B, C, … Ted will then go to the fridge for a beer, unless D or E or F or … or Ted decides not to do so.” The marionette strings of a “sufficient cause” begin to look rather tenuous.

They are also too short. For the typical set of prior events that can (intuitively, plausibly) be thought to be a sufficient cause of a human action may be so close in time and space to the agent, as to not look like a threat to freedom so much as like enabling conditions. If Ted is propelled to the fridge by {seeing the game's on; desiring to repeat the satisfactory experience of other Saturdays; feeling a bit thirsty; etc}, such things look more like good reasons to have decided to get a beer, not like external physical events far beyond Ted's control. Compare this with the claim that {state of the world in 1900; laws of nature} entail Ted's going to get the beer: the difference is dramatic. So we have a number of good reasons for sticking to the formulations of determinism that arise most naturally out of physics. And this means that we are not looking at how a specific event of ordinary talk is determined by previous events; we are looking at how everything that happens is determined by what has gone before. The state of the world in 1900 only entails that Ted grabs a beer from the fridge by way of entailing the entire physical state of affairs at the later time.

2.2 The way things are at a time t

The typical explication of determinism fastens on the state of the (whole) world at a particular time (or instant), for a variety of reasons. We will briefly explain some of them. Why take the state of the whole world, rather than some (perhaps very large) region, as our starting point? One might, intuitively, think that it would be enough to give the complete state of things on Earth, say, or perhaps in the whole solar system, at t, to fix what happens thereafter (for a time at least). But notice that all sorts of influences from outside the solar system come in at the speed of light, and they may have important effects. Suppose Mary looks up at the sky on a clear night, and a particularly bright blue star catches her eye; she thinks “What a lovely star; I think I'll stay outside a bit longer and enjoy the view.” The state of the solar system one month ago did not fix that that blue light from Sirius would arrive and strike Mary's retina; it arrived into the solar system only a day ago, let's say. So evidently, for Mary's actions (and hence, all physical events generally) to be fixed by the state of things a month ago, that state will have to be fixed over a much larger spatial region than just the solar system. (If no physical influences can go faster than light, then the state of things must be given over a spherical volume of space 1 light-month in radius.)

But in making vivid the “threat” of determinism, we often want to fasten on the idea of the entire future of the world as being determined. No matter what the “speed limit” on physical influences is, if we want the entire future of the world to be determined, then we will have to fix the state of things over all of space, so as not to miss out something that could later come in “from outside” to spoil things. In the time of Laplace, of course, there was no known speed limit to the propagation of physical things such as light-rays. In principle light could travel at any arbitrarily high speed, and some thinkers did suppose that it was transmitted “instantaneously.” The same went for the force of gravity. In such a world, evidently, one has to fix the state of things over the whole of the world at a time t, in order for events to be strictly determined, by the laws of nature, for any amount of time thereafter.

In all this, we have been presupposing the common-sense Newtonian framework of space and time, in which the world-at-a-time is an objective and meaningful notion. Below when we discuss determinism in relativistic theories we will revisit this assumption.

2.3 Thereafter

For a wide class of physical theories (i.e., proposed sets of laws of nature), if they can be viewed as deterministic at all, they can be viewed as bi-directionally deterministic. That is, a specification of the state of the world at a time t, along with the laws, determines not only how things go after t, but also how things go before t. Philosophers, while not exactly unaware of this symmetry, tend to ignore it when thinking of the bearing of determinism on the free will issue. The reason for this is that we tend to think of the past (and hence, states of the world in the past) as done, over, fixed and beyond our control. Forward-looking determinism then entails that these past states—beyond our control, perhaps occurring long before humans even existed—determine everything we do in our lives. It then seems a mere curious fact that it is equally true that the state of the world now determines everything that happened in the past. We have an ingrained habit of taking the direction of both causation and explanation as being past→present, even when discussing physical theories free of any such asymmetry. We will return to this point shortly.
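Bi-directional determinism can be made concrete with a toy sketch (my construction, not drawn from the text): a harmonic oscillator integrated with the leapfrog scheme, which is time-reversible. Evolving the system forward, then reversing the velocity and evolving again, retraces the trajectory and recovers the initial state: the state at t fixes the states both after and before t.

```python
# Sketch (illustrative, not from the text): bi-directional determinism
# in a toy Newtonian system. The leapfrog (kick-drift-kick) integrator
# is time-reversible, so running it forward and then with velocity
# reversed recovers the initial state up to rounding error.

def leapfrog(x, v, dt, steps, k=1.0):
    """Integrate x'' = -k*x with the time-reversible leapfrog scheme."""
    for _ in range(steps):
        v += -k * x * (dt / 2)   # half kick
        x += v * dt              # drift
        v += -k * x * (dt / 2)   # half kick
    return x, v

x0, v0 = 1.0, 0.0
xf, vf = leapfrog(x0, v0, dt=0.01, steps=1000)   # evolve forward
xb, vb = leapfrog(xf, -vf, dt=0.01, steps=1000)  # reverse velocity, evolve again

# The later state plus the law determines the earlier state:
print(abs(xb - x0) < 1e-9, abs(-vb - v0) < 1e-9)  # True True
```

The symmetry here is a property of the dynamical law itself, not of anything we add: nothing in the equations picks out the past→future direction as the privileged one.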

Another point to notice here is that the notion of things being determined thereafter is usually taken in an unlimited sense—i.e., determination of all future events, no matter how remote in time. But conceptually speaking, the world could be only imperfectly deterministic: things could be determined only, say, for a thousand years or so from any given starting state of the world. For example, suppose that near-perfect determinism were regularly (but infrequently) interrupted by spontaneous particle creation events, which occur only once every thousand years in a thousand-light-year-radius volume of space. This unrealistic example shows how determinism could be strictly false, and yet the world be deterministic enough for our concerns about free action to be unchanged.

2.4 Laws of nature

In the loose statement of determinism we are working from, metaphors such as “govern” and “under the sway of” are used to indicate the strong force being attributed to the laws of nature. Part of understanding determinism—and especially, whether and why it is metaphysically important—is getting clear about the status of the presumed laws of nature.

In the physical sciences, the assumption that there are fundamental, exceptionless laws of nature, and that they have some strong sort of modal force, usually goes unquestioned. Indeed, talk of laws “governing” and so on is so commonplace that it takes an effort of will to see it as metaphorical. We can characterize the usual assumptions about laws in this way: the laws of nature are assumed to be pushy explainers. They make things happen in certain ways, and by having this power, their existence lets us explain why things happen in certain ways. (For a defense of this perspective on laws, see Maudlin (2007)). Laws, we might say, are implicitly thought of as the cause of everything that happens. If the laws governing our world are deterministic, then in principle everything that happens can be explained as following from states of the world at earlier times. (Again, we note that even though the entailment typically works in the future→past direction also, we have trouble thinking of this as a legitimate explanatory entailment. In this respect also, we see that laws of nature are being implicitly treated as the causes of what happens: causation, intuitively, can only go past→future.)

Interestingly, philosophers tend to acknowledge the apparent threat determinism poses to free will, even when they explicitly reject the view that laws are pushy explainers. Earman (1986), for example, advocates a theory of laws of nature that takes them to be simply the best system of regularities that systematizes all the events in universal history. This is the Best Systems Analysis (BSA), with roots in the work of Hume, Mill and Ramsey, and most recently refined and defended by David Lewis (1973, 1994) and by Earman (1984, 1986). (cf. entry on laws of nature). Yet he ends his comprehensive Primer on Determinism with a discussion of the free will problem, taking it as a still-important and unresolved issue. Prima facie this is quite puzzling, for the BSA is founded on the idea that the laws of nature are ontologically derivative, not primary; it is the events of universal history, as brute facts, that make the laws be what they are, and not vice-versa. Taking this idea seriously, the actions of every human agent in history are simply a part of the universe-wide pattern of events that determines what the laws are for this world. It is then hard to see how the most elegant summary of this pattern, the BSA laws, can be thought of as determiners of human actions. The determination or constraint relations, it would seem, can go one way or the other, not both.

On second thought however it is not so surprising that broadly Humean philosophers such as Ayer, Earman, Lewis and others still see a potential problem for freedom posed by determinism. For even if human actions are part of what makes the laws be what they are, this does not mean that we automatically have freedom of the kind we think we have, particularly freedom to have done otherwise given certain past states of affairs. It is one thing to say that everything occurring in and around my body, and everything everywhere else, conforms to Maxwell's equations and thus the Maxwell equations are genuine exceptionless regularities, and that because they in addition are simple and strong, they turn out to be laws. It is quite another thing to add: thus, I might have chosen to do otherwise at certain points in my life, and if I had, then Maxwell's equations would not have been laws. One might try to defend this claim—unpalatable as it seems intuitively, to ascribe ourselves law-breaking power—but it does not follow directly from a Humean approach to laws of nature. Instead, on such views that deny laws most of their pushiness and explanatory force, questions about determinism and human freedom simply need to be approached afresh.

A second important genre of theories of laws of nature holds that the laws are in some sense necessary. For any such approach, laws are just the sort of pushy explainers that are assumed in the traditional language of physical scientists and free will theorists. But a third and growing class of philosophers holds that (universal, exceptionless, true) laws of nature simply do not exist. Among those who hold this are influential philosophers such as Nancy Cartwright, Bas van Fraassen, and John Dupré. For these philosophers, there is a simple consequence: determinism is a false doctrine. As with the Humean view, this does not mean that concerns about human free action are automatically resolved; instead, they must be addressed afresh in the light of whatever account of physical nature without laws is put forward. See Dupré (2001) for one such discussion.

2.5 Fixed

We can now put our—still vague—pieces together. Determinism requires a world that (a) has a well-defined state or description, at any given time, and (b) laws of nature that are true at all places and times. If we have all these, then if (a) and (b) together logically entail the state of the world at all other times (or, at least, all times later than that given in (a)), the world is deterministic. Logical entailment, in a sense broad enough to encompass mathematical consequence, is the modality behind the determination in “determinism.”

3. The Epistemology of Determinism

How could we ever decide whether our world is deterministic or not? Given that some philosophers and some physicists have held firm views—with many prominent examples on each side—one would think that it should be at least a clearly decidable question. Unfortunately, even this much is not clear, and the epistemology of determinism turns out to be a thorny and multi-faceted issue.

3.1 Laws again

As we saw above, for determinism to be true there have to be some laws of nature. Most philosophers and scientists since the 17th century have indeed thought that there are. But in the face of more recent skepticism, how can it be proven that there are? And if this hurdle can be overcome, don't we have to know, with certainty, precisely what the laws of our world are, in order to tackle the question of determinism's truth or falsity?

The first hurdle can perhaps be overcome by a combination of metaphysical argument and appeal to knowledge we already have of the physical world. Philosophers are currently pursuing this issue actively, in large part due to the efforts of the anti-laws minority. The debate has been most recently framed by Cartwright in The Dappled World (Cartwright 1999) in terms psychologically advantageous to her anti-laws cause. Those who believe in the existence of traditional, universal laws of nature are fundamentalists; those who disbelieve are pluralists. This terminology seems to be becoming standard (see Belot 2001), so the first task in the epistemology of determinism is for fundamentalists to establish the reality of laws of nature (see Hoefer 2002b).

Even if the first hurdle can be overcome, the second, namely establishing precisely what the actual laws are, may seem daunting indeed. In a sense, what we are asking for is precisely what 19th and 20th century physicists sometimes set as their goal: the Final Theory of Everything. But perhaps, as Newton said of establishing the solar system's absolute motion, “the thing is not altogether desperate.” Many physicists in the past 60 years or so have been convinced of determinism's falsity, because they were convinced that (a) whatever the Final Theory is, it will be some recognizable variant of the family of quantum mechanical theories; and (b) all quantum mechanical theories are non-deterministic. Both (a) and (b) are highly debatable, but the point is that one can see how arguments in favor of these positions might be mounted. The same was true in the 19th century, when theorists might have argued that (a) whatever the Final Theory is, it will involve only continuous fluids and solids governed by partial differential equations; and (b) all such theories are deterministic. (Here, (b) is almost certainly false; see Earman (1986), ch. XI). Even if we now are not, we may in future be in a position to mount a credible argument for or against determinism on the grounds of features we think we know the Final Theory must have.

3.2 Experience

Determinism could perhaps also receive direct support—confirmation in the sense of probability-raising, not proof—from experience and experiment. For theories (i.e., potential laws of nature) of the sort we are used to in physics, it is typically the case that if they are deterministic, then to the extent that one can perfectly isolate a system and repeatedly impose identical starting conditions, the subsequent behavior of the systems should also be identical. And in broad terms, this is the case in many domains we are familiar with. Your computer starts up every time you turn it on, and (if you have not changed any files, have no anti-virus software, re-set the date to the same time before shutting down, and so on …) always in exactly the same way, with the same speed and resulting state (until the hard drive fails). The light comes on exactly 32 µsec after the switch closes (until the day the bulb fails). These cases of repeated, reliable behavior obviously require some serious ceteris paribus clauses, are never perfectly identical, and always subject to catastrophic failure at some point. But we tend to think that for the small deviations, probably there are explanations for them in terms of different starting conditions or failed isolation, and for the catastrophic failures, definitely there are explanations in terms of different conditions.

There have even been studies of paradigmatically “chancy” phenomena such as coin-flipping, which show that if starting conditions can be precisely controlled and outside interferences excluded, identical behavior results (see Diaconis, Holmes & Montgomery 2004). Most of these bits of evidence for determinism no longer seem to cut much ice, however, because of faith in quantum mechanics and its indeterminism. Indeterminist physicists and philosophers are ready to acknowledge that macroscopic repeatability is usually obtainable, where phenomena are so large-scale that quantum stochasticity gets washed out. But they would maintain that this repeatability is not to be found in experiments at the microscopic level, and also that at least some failures of repeatability (in your hard drive, or coin-flipping experiments) are genuinely due to quantum indeterminism, not just failures to isolate properly or establish identical initial conditions.

If quantum theories were unquestionably indeterministic, and deterministic theories guaranteed repeatability of a strong form, there could conceivably be further experimental input on the question of determinism's truth or falsity. Unfortunately, the existence of Bohmian quantum theories casts strong doubt on the former point, while chaos theory casts strong doubt on the latter. More will be said about each of these complications below.

3.3 Determinism and Chaos

If the world were governed by strictly deterministic laws, might it still look as though indeterminism reigns? This is one of the difficult questions that chaos theory raises for the epistemology of determinism.

A deterministic chaotic system has, roughly speaking, two salient features: (i) the evolution of the system over a long time period effectively mimics a random or stochastic process—it lacks predictability or computability in some appropriate sense; (ii) two systems with nearly identical initial states will have radically divergent future developments, within a finite (and typically, short) timespan. We will use “randomness” to denote the first feature, and “sensitive dependence on initial conditions” (SDIC) for the latter. Definitions of chaos may focus on either or both of these properties; Batterman (1993) argues that only (ii) provides an appropriate basis for defining chaotic systems.

A simple and very important example of a chaotic system in both randomness and SDIC terms is the Newtonian dynamics of a pool table with a convex obstacle (or obstacles) (Sinai 1970 and others). See Figure 1.

Figure 1: Billiard table with convex obstacle

The usual idealizing assumptions are made: no friction, perfectly elastic collisions, no outside influences. The ball's trajectory is determined by its initial position and direction of motion. If we imagine a slightly different initial direction, the trajectory will at first be only slightly different. And collisions with the straight walls will not tend to increase very rapidly the difference between trajectories. But collisions with the convex object will have the effect of amplifying the differences. After several collisions with the convex body or bodies, trajectories that started out very close to one another will have become wildly different—SDIC.
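The billiard of Figure 1 takes some geometry to simulate, but the amplification of small differences it exhibits can be shown with a minimal sketch (my choice of system, not from the text) using the logistic map x → 4x(1−x), a standard textbook chaotic system with the same SDIC behavior: two orbits starting a trillionth apart become macroscopically different within a few dozen iterations.

```python
# Sketch of SDIC (sensitive dependence on initial conditions) using the
# logistic map x -> 4x(1-x), a standard chaotic system chosen here as a
# stand-in for the billiard example, which takes more geometry to simulate.
# Two trajectories starting 1e-12 apart diverge to order-one separation.

def logistic_orbit(x, n):
    """Iterate the logistic map at r = 4, returning the orbit."""
    orbit = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.2, 60)
b = logistic_orbit(0.2 + 1e-12, 60)  # nearly identical initial state

gaps = [abs(p - q) for p, q in zip(a, b)]
print(f"gap after 5 steps:  {gaps[4]:.1e}")   # still tiny
print(f"largest gap by step 60: {max(gaps):.2f}")  # order one: SDIC
```

The gap roughly doubles per step on average (the map's Lyapunov exponent is ln 2), so an initial separation of 1e-12 reaches order one after about 40 iterations; the same exponential amplification is what the repeated convex-obstacle collisions produce on the billiard table.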

In the example of the billiard table, we know that we are starting out with a Newtonian deterministic system—that is how the idealized example is defined. But chaotic dynamical systems come in a great variety of types: discrete and continuous, 2-dimensional, 3-dimensional and higher, particle-based and fluid-flow-based, and so on. Mathematically, we may suppose all of these systems share SDIC. But generally they will also display properties such as unpredictability, non-computability, Kolmogorov-random behavior, and so on—at least when looked at in the right way, or at the right level of detail. This leads to the following epistemic difficulty: if, in nature, we find a type of system that displays some or all of these latter properties, how can we decide which of the following two hypotheses is true?

  1. The system is governed by genuinely stochastic, indeterministic laws (or by no laws at all), i.e., its apparent randomness is in fact real randomness.
  2. The system is governed by underlying deterministic laws, but is chaotic.

In other words, once one appreciates the varieties of chaotic dynamical systems that exist, mathematically speaking, it starts to look difficult—maybe impossible—for us to ever decide whether apparently random behavior in nature arises from genuine stochasticity, or rather from deterministic chaos. Patrick Suppes (1993, 1996) argues, on the basis of theorems proven by Ornstein (1974 and later) that “There are processes which can equally well be analyzed as deterministic systems of classical mechanics or as indeterministic semi-Markov processes, no matter how many observations are made.” And he concludes that “Deterministic metaphysicians can comfortably hold to their view knowing they cannot be empirically refuted, but so can indeterministic ones as well.” (Suppes 1993, p. 254) For more recent works exploring the extent to which deterministic and indeterministic model systems may be regarded as empirically indistinguishable, see Werndl (2016) and references therein.

There is certainly an interesting problem area here for the epistemology of determinism, but it must be handled with care. It may well be true that there are some deterministic dynamical systems that, when viewed properly, display behavior indistinguishable from that of a genuinely stochastic process. For example, using the billiard table above, if one divides its surface into quadrants and looks at which quadrant the ball is in at 30-second intervals, the resulting sequence is no doubt highly random. But the same system, when viewed in a different way (perhaps at a higher degree of precision), may cease to look random and instead betray its deterministic nature. If we partition our billiard table into squares 2 centimeters a side and look at which square the ball is in at .1 second intervals, the resulting sequence will be far from random. And finally, of course, if we simply look at the billiard table with our eyes, and see it as a billiard table, there is no obvious way at all to maintain that it may be a truly random process rather than a deterministic dynamical system. (See Winnie (1996) for a nice technical and philosophical discussion of these issues. Winnie explicates Ornstein's and others' results in some detail, and disputes Suppes' philosophical conclusions.)
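The dependence on viewing level can be illustrated with a toy sketch (my construction, not the Ornstein results themselves): coarse-grain a deterministic chaotic orbit into a binary symbol sequence, which looks statistically like fair coin flips, while the fine-grained orbit remains perfectly deterministic, with each state exactly fixing its successor.

```python
# Toy illustration (not the Ornstein construction): one and the same
# deterministic orbit looks random under a coarse partition, yet
# betrays its determinism under a fine-grained view.

def orbit(x, n):
    """Iterate the logistic map x -> 4x(1-x), keeping the full orbit."""
    xs = [x]
    for _ in range(n):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

xs = orbit(0.123, 10_000)

# Coarse view: record only which half of [0, 1] the state is in.
# The bit sequence is statistically hard to tell from fair coin flips.
bits = [1 if x >= 0.5 else 0 for x in xs]
print(f"frequency of 1s: {sum(bits) / len(bits):.3f}")  # close to 0.5

# Fine view: each state exactly determines its successor via the law.
deterministic = all(abs(y - 4.0 * x * (1.0 - x)) == 0.0
                    for x, y in zip(xs, xs[1:]))
print(f"fine-grained orbit is deterministic: {deterministic}")  # True
```

This mirrors the billiard example in the text: the quadrant-at-30-second-intervals view yields apparent randomness, while the finer view recovers the deterministic dynamics, so which hypothesis the data support depends on the level of description available to us.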

The dynamical systems usually studied under the label of “chaos” are typically either purely abstract mathematical systems or classical Newtonian systems. It is natural to wonder whether chaotic behavior carries over into the realm of systems governed by quantum mechanics as well. Interestingly, it is much harder to find natural correlates of classical chaotic behavior in true quantum systems (see Gutzwiller 1990). Some, at least, of the interpretive difficulties of quantum mechanics would have to be resolved before a meaningful assessment of chaos in quantum mechanics could be achieved. For example, SDIC is hard to find in the Schrödinger evolution of a wavefunction for a system with finite degrees of freedom; but in Bohmian quantum mechanics it is handled quite easily on the basis of particle trajectories (see Dürr, Goldstein and Zanghì 1992).

The popularization of chaos theory in the relatively recent past perhaps made it seem self-evident that nature is full of genuinely chaotic systems. In fact, it is far from self-evident that such systems exist, other than in an approximate sense. Nevertheless, the mathematical exploration of chaos in dynamical systems helps us to understand some of the pitfalls that may attend our efforts to know whether our world is genuinely deterministic or not.

3.4 Metaphysical arguments

Let us suppose that we shall never have the Final Theory of Everything before us—at least in our lifetime—and that we also remain unclear (on physical/experimental grounds) as to whether that Final Theory will be of a type that can or cannot be deterministic. Is there nothing left that could sway our belief toward or against determinism? There is, of course: metaphysical argument. Metaphysical arguments on this issue are not currently very popular. But philosophical fashions change at least twice a century, and grand systemic metaphysics of the Leibnizian sort might one day come back into favor. Conversely, the anti-systemic, anti-fundamentalist metaphysics propounded by Cartwright (1999) might also come to predominate. As likely as not, for the foreseeable future metaphysical argument may be just as good a basis on which to discuss determinism's prospects as any arguments from mathematics or physics.

4. The Status of Determinism in Physical Theories

John Earman's Primer on Determinism (1986) remains the richest storehouse of information on the truth or falsity of determinism in various physical theories, from classical mechanics to quantum mechanics and general relativity. (See also his recent update on the subject, “Aspects of Determinism in Modern Physics” (2007)). Here I will give only a brief discussion of some key issues, referring the reader to Earman (1986) and other resources for more detail. Figuring out whether well-established theories are deterministic or not (or to what extent, if they fall only a bit short) does not do much to help us know whether our world is really governed by deterministic laws; all our current best theories, including General Relativity and the Standard Model of particle physics, are too flawed and ill-understood to be mistaken for anything close to a Final Theory. Nevertheless, as Earman stressed, the exploration is very valuable because of the way it deepens our understanding of the richness and complexity of determinism.

4.1 Classical mechanics

Despite the common belief that classical mechanics (the theory that inspired Laplace in his articulation of determinism) is perfectly deterministic, in fact the theory is rife with possibilities for determinism to break down. One class of problems arises due to the absence of an upper bound on the velocities of moving objects. Figure 2 shows the trajectory of an object that is accelerated unboundedly, its velocity becoming in effect infinite in a finite time:

Figure 2: An object accelerates so as to reach spatial infinity in a finite time

By the time t = t*, the object has literally disappeared from the world—its world-line never reaches the t = t* surface. (Never mind how the object gets accelerated in this way; there are mechanisms that are perfectly consistent with classical mechanics that can do the job. In fact, Xia (1992) showed that such acceleration can be accomplished by gravitational forces from only 5 finite objects, without collisions. No mechanism is shown in these diagrams.) This “escape to infinity,” while disturbing, does not yet look like a violation of determinism. But now recall that classical mechanics is time-symmetric: any model has a time-inverse, which is also a consistent model of the theory. The time-inverse of our escaping body is playfully called a “space invader.”
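A toy trajectory (my own illustration; it is not Xia's construction, and no driving mechanism is specified) shows the kinematics:

```latex
x(t) = \frac{1}{t^{*}-t} \quad (t < t^{*}), \qquad
\dot{x}(t) = \frac{1}{(t^{*}-t)^{2}} \;\longrightarrow\; \infty
\ \text{ as } t \to t^{*-}.
```

The body reaches spatial infinity by t*, so its world-line never meets the t = t* surface; by time symmetry, x(t) = 1/(t − t*) for t > t* is an equally good solution, describing a “space invader” absent before t* that sweeps in from infinity immediately afterwards.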

Figure 3: A ‘space invader’ comes in from spatial infinity

Clearly, a world with a space invader does fail to be deterministic. Before t = t*, there was nothing in the state of things to enable the prediction of the appearance of the invader at or just after t = t*.[2] One might think that the infinity of space is to blame for this strange behavior, but this is not obviously correct. In finite, “rolled-up” or cylindrical versions of Newtonian space-time, space-invader trajectories can be constructed, though whether a “reasonable” mechanism to power them exists is not clear.[3]

A second class of determinism-breaking models can be constructed on the basis of collision phenomena. The first problem is that of multiple-particle collisions for which Newtonian particle mechanics simply does not have a prescription for what happens. (Consider three identical point-particles approaching each other at 120 degree angles and colliding simultaneously. That they bounce back along their approach trajectories is possible; but it is equally possible for them to bounce in other directions (again with 120 degree angles between their paths), so long as momentum conservation is respected.)
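A quick numerical check (an illustrative sketch of my own) confirms that distinct outcomes of the symmetric triple collision are equally consistent with the conservation laws: bouncing straight back, and bouncing along any rotated set of directions 120 degrees apart, both conserve total momentum and kinetic energy.

```python
import math

def unit(angle_deg):
    """Unit 2-D vector at the given angle."""
    a = math.radians(angle_deg)
    return (math.cos(a), math.sin(a))

def rotate(v, phi_deg):
    """Rotate a 2-D vector by phi degrees."""
    p = math.radians(phi_deg)
    return (v[0] * math.cos(p) - v[1] * math.sin(p),
            v[0] * math.sin(p) + v[1] * math.cos(p))

def total(vs):
    """Total momentum of unit-mass particles with velocities vs."""
    return (sum(v[0] for v in vs), sum(v[1] for v in vs))

# Three identical unit-mass particles, incoming at 120-degree angles
# with equal speed; total momentum is zero.
incoming = [unit(a) for a in (0.0, 120.0, 240.0)]

# Outcome A: each particle bounces straight back along its approach path.
bounce_back = [(-vx, -vy) for vx, vy in incoming]
# Outcome B: the whole outgoing set rotated by an arbitrary angle.
rotated = [rotate(v, 40.0) for v in bounce_back]

for out in (incoming, bounce_back, rotated):
    px, py = total(out)
    assert abs(px) < 1e-12 and abs(py) < 1e-12       # momentum conserved
    assert all(abs(math.hypot(*v) - 1.0) < 1e-12 for v in out)  # energy too
print("both outcomes conserve momentum and energy")
```

Since any rotation angle works in outcome B, the conservation laws leave a continuum of outcomes open, which is exactly the underdetermination described above.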

Moreover, there is a burgeoning literature of physical or quasi-physical systems, usually set in the context of classical mechanics (CM), that carry out supertasks (see Earman and Norton (1998) and the entry on supertasks for a review). Frequently, the puzzle presented is to decide, on the basis of the well-defined behavior before time t = a, what state the system will be in at t = a itself. A failure of CM to dictate a well-defined result can then be seen as a failure of determinism.

In supertasks, one frequently encounters infinite numbers of particles, infinite (or unbounded) mass densities, and other dubious infinitary phenomena. Coupled with some of the other breakdowns of determinism in CM, one begins to get a sense that most, if not all, breakdowns of determinism rely on some combination of the following set of (physically) dubious mathematical notions: {infinite space; unbounded velocity; continuity; point-particles; singular fields}. The trouble is, it is difficult to imagine any recognizable physics (much less CM) that eschews everything in the set.

Figure 4: A ball may spontaneously start sliding down this dome, with no violation of Newton's laws. (Reproduced courtesy of John D. Norton and Philosopher's Imprint)

Finally, an elegant example of apparent violation of determinism in classical physics has been created by John Norton (2003). As illustrated in Figure 4, imagine a ball sitting at the apex of a frictionless dome whose equation is specified as a function of radial distance from the apex point. This rest-state is our initial condition for the system; what should its future behavior be? Clearly one solution is for the ball to remain at rest at the apex indefinitely.

But curiously, this is not the only solution under standard Newtonian laws. The ball may also start into motion sliding down the dome—at any moment in time, and in any radial direction. This example displays “uncaused motion” without, Norton argues, any violation of Newton's laws, including the First Law. And it does not, unlike some supertask examples, require an infinity of particles. Still, many philosophers are uncomfortable with the moral Norton draws from his dome example, and point out reasons for questioning the dome's status as a Newtonian system (see e.g. Malament (2008)).
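The dome and its indeterministic solutions can be written down explicitly (following Norton 2003; r is the radial arc-length from the apex, h the vertical drop):

```latex
h(r) = \frac{2}{3g}\, r^{3/2}
\quad\Longrightarrow\quad
\frac{d^{2}r}{dt^{2}} = \sqrt{r}\,,
```

and for the initial condition $r(0)=0,\ \dot r(0)=0$ this equation admits not only the trivial solution $r(t) \equiv 0$ but also, for every $T \ge 0$,

```latex
r(t) =
\begin{cases}
0 & t \le T,\\[2pt]
\dfrac{1}{144}\,(t-T)^{4} & t \ge T,
\end{cases}
```

each of which satisfies the equation of motion (check: $\ddot r = \tfrac{1}{12}(t-T)^{2} = \sqrt{r}$). The free parameter T is the arbitrary moment at which the ball spontaneously begins to slide.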

4.2 Special Relativistic physics

Two features of special relativistic physics make it perhaps the most hospitable environment for determinism of any major theoretical context: the fact that no process or signal can travel faster than the speed of light, and the static, unchanging spacetime structure. The former feature, including a prohibition against tachyons (hypothetical particles travelling faster than light[4]), rules out space invaders and other unbounded-velocity systems. The latter feature makes the space-time itself nice and stable and non-singular—unlike the dynamic space-time of General Relativity, as we shall see below. For source-free electromagnetic fields in special-relativistic space-time, a nice form of Laplacean determinism is provable. Unfortunately, interesting physics needs more than source-free electromagnetic fields. Earman (1986, ch. IV) surveys in depth the pitfalls for determinism that arise once things are allowed to get more interesting (e.g. by the addition of particles interacting gravitationally).

4.3 General Relativity (GTR)

Defining an appropriate form of determinism for the context of general relativistic physics is extremely difficult, due to both foundational interpretive issues and the plethora of weirdly-shaped space-time models allowed by the theory's field equations. The simplest way of treating the issue of determinism in GTR would be to state flatly: determinism fails, frequently, and in some of the most interesting models. Here we will briefly describe some of the most important challenges that arise for determinism, directing the reader yet again to Earman (1986), and also Earman (1995) for more depth.

4.3.1 Determinism and manifold points

In GTR, we specify a model of the universe by giving a triple of mathematical objects, <M, g, T>. M represents a continuous “manifold”: that means a sort of unstructured space(-time), made up of individual points and having smoothness or continuity, dimensionality (usually, 4-dimensional), and global topology, but no further structure. What further structure does a space-time need? Typically, at least, we expect the time-direction to be distinguished from space-directions; we expect there to be well-defined distances between distinct points; and we expect a determinate geometry (making certain continuous paths in M be straight lines, etc.). All of this extra structure is coded into g, the metric field. So M and g together represent space-time. T represents the matter and energy content distributed around in space-time (if any, of course).

For mathematical reasons not relevant here, it turns out to be possible to take a given model spacetime and perform a mathematical operation called a “hole diffeomorphism” h* on it; the diffeomorphism's effect is to shift around the matter content T and the metric g relative to the continuous manifold M.[5] If the diffeomorphism is chosen appropriately, it can move around T and g after a certain time t = 0, but leave everything alone before that time. Thus, the new model represents the matter content (now h*T) and the metric (h*g) as differently located relative to the points of M making up space-time. Yet, the new model is also a perfectly valid model of the theory. This looks on the face of it like a form of indeterminism: GTR's equations do not specify how things will be distributed in space-time in the future, even when the past before a given time t is held fixed. See Figure 5:

Figure 5: “Hole” diffeomorphism shifts contents of spacetime

Usually the shift is confined to a finite region called the hole (for historical reasons). Then it is easy to see that the state of the world at time t = 0 (and all the history that came before) does not suffice to fix whether the future will be that of our first model, or its shifted counterpart in which events inside the hole are different.

This is a form of indeterminism first highlighted by Earman and Norton (1987) as an interpretive philosophical difficulty for realism about GTR's description of the world, especially the point manifold M. They showed that realism about the manifold as a part of the furniture of the universe (which they called “manifold substantivalism”) commits us to an automatic indeterminism in GTR (as described above), and they argued that this is unacceptable. (See the hole argument and Hoefer (1996) for one response on behalf of the space-time realist, and discussion of other responses.) For now, we will simply note that this indeterminism, unlike most others we are discussing in this section, is empirically undetectable: our two models <M, g, T> and the shifted model <M, h*g, h*T> are empirically indistinguishable.

4.3.2 Singularities

The separation of space-time structures into manifold and metric (or connection) facilitates mathematical clarity in many ways, but also opens up Pandora's box when it comes to determinism. The indeterminism of the Earman and Norton hole argument is only the tip of the iceberg; singularities make up much of the rest of the berg. In general terms, a singularity can be thought of as a “place where things go bad” in one way or another in the space-time model. For example, near the center of a Schwarzschild black hole, curvature increases without bound, and at the center itself it is undefined, which means that Einstein's equations cannot be said to hold, which means (arguably) that this point does not exist as a part of the space-time at all! Some specific examples are clear, but giving a general definition of a singularity, like defining determinism itself in GTR, is a vexed issue (see Earman (1995) for an extended treatment; Callender and Hoefer (2001) gives a brief overview). We will not attempt here to catalog the various definitions and types of singularity.

Different types of singularity bring different types of threat to determinism. In the case of ordinary black holes, mentioned above, all is well outside the so-called “event horizon”, which is the spherical surface defining the black hole: once a body or light signal passes through the event horizon to the interior region of the black hole, it can never escape again. Generally, no violation of determinism looms outside the event horizon; but what about inside? Some black hole models have so-called “Cauchy horizons” inside the event horizon, i.e., surfaces beyond which determinism breaks down.

Another way for a model spacetime to be singular is to have points or regions go missing, in some cases by simple excision. Perhaps the most dramatic form of this involves taking a nice model with a space-like surface t = E (i.e., a well-defined part of the space-time that can be considered “the state of the world at time E”), and cutting out and throwing away this surface and all points temporally later. The resulting spacetime satisfies Einstein's equations; but, unfortunately for any inhabitants, the universe comes to a sudden and unpredictable end at time E. This is too trivial a move to be considered a real threat to determinism in GTR; we can impose a reasonable requirement that space-time not “run out” in this way without some physical reason (the spacetime should be “maximally extended”). For discussion of precise versions of such a requirement, and whether they succeed in eliminating unwanted singularities, see Earman (1995, chapter 2).

The most problematic kinds of singularities, in terms of determinism, are naked singularities (singularities not hidden behind an event horizon). When a singularity forms from gravitational collapse, the usual model of such a process involves the formation of an event horizon (i.e. a black hole). A universe with an ordinary black hole has a singularity, but as noted above, (outside the event horizon at least) nothing unpredictable happens as a result. A naked singularity, by contrast, has no such protective barrier. In much the way that anything can disappear by falling into an excised-region singularity, or appear out of a white hole (white holes themselves are, in fact, technically naked singularities), there is the worry that anything at all could pop out of a naked singularity, without warning (hence, violating determinism en passant). While most white hole models have Cauchy surfaces and are thus arguably deterministic, other naked singularity models lack this property. Physicists disturbed by the unpredictable potentialities of such singularities have worked to try to prove various cosmic censorship hypotheses that show—under (hopefully) plausible physical assumptions—that such things do not arise by stellar collapse in GTR (and hence are not liable to come into existence in our world). To date no very general and convincing forms of the hypothesis have been proven, so the prospects for determinism in GTR as a mathematical theory do not look terribly good.

4.4 Quantum mechanics

As indicated above, QM is widely thought to be a strongly non-deterministic theory. Popular belief (even among most physicists) holds that phenomena such as radioactive decay, photon emission and absorption, and many others are such that only a probabilistic description of them can be given. The theory does not say what happens in a given case, but only says what the probabilities of various results are. So, for example, according to QM the fullest description possible of a radium atom (or a chunk of radium, for that matter), does not suffice to determine when a given atom will decay, nor how many atoms in the chunk will have decayed at any given time. The theory gives only the probabilities for a decay (or a number of decays) to happen within a given span of time. Einstein and others perhaps thought that this was a defect of the theory that should eventually be removed, by a supplemental hidden variable theory[6] that restores determinism; but subsequent work showed that no such hidden variables account could exist. At the microscopic level the world is ultimately mysterious and chancy.
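The probabilistic description in the radium example can be made precise. For a species with decay constant $\lambda$, quantum mechanics yields only

```latex
P(\text{decay by time } t) = 1 - e^{-\lambda t}, \qquad
t_{1/2} = \frac{\ln 2}{\lambda}, \qquad
\langle N(t)\rangle = N_{0}\, e^{-\lambda t},
```

which fixes the half-life and the expected number of survivors in a chunk of $N_{0}$ atoms, but says nothing about which atoms will decay, or when any particular atom will do so.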

So goes the story; but like much popular wisdom, it is partly mistaken and/or misleading. Ironically, quantum mechanics is one of the best prospects for a genuinely deterministic theory in modern times! Everything hinges on what interpretational and philosophical decisions one adopts. The fundamental law at the heart of non-relativistic QM is the Schrödinger equation. The evolution of a wavefunction describing a physical system under this equation is normally taken to be perfectly deterministic.[7] If one adopts an interpretation of QM according to which that's it—i.e., nothing ever interrupts Schrödinger evolution, and the wavefunctions governed by the equation tell the complete physical story—then quantum mechanics is a perfectly deterministic theory. There are several interpretations that physicists and philosophers have given of QM which go this way. (See the entry on quantum mechanics.)

More commonly—and this is part of the basis for the popular wisdom—physicists have resolved the quantum measurement problem by postulating that some process of “collapse of the wavefunction” occurs during measurements or observations that interrupts Schrödinger evolution. The collapse process is usually postulated to be indeterministic, with probabilities for various outcomes, via Born's rule, calculable on the basis of a system's wavefunction. The once-standard Copenhagen interpretation of QM posits such a collapse. It has the virtue of solving certain problems such as the infamous Schrödinger's cat paradox, but few philosophers or physicists can take it very seriously unless they are instrumentalists about the theory. The reason is simple: the collapse process is not physically well-defined, is characterised in terms of an anthropomorphic notion (measurement), and feels too ad hoc to be a fundamental part of nature's laws.[8]

In 1952 David Bohm created an alternative interpretation of non-relativistic QM—perhaps better thought of as an alternative theory—that realizes Einstein's dream of a hidden variable theory, restoring determinism and definiteness to micro-reality. In Bohmian quantum mechanics, unlike other interpretations, it is postulated that all particles have, at all times, a definite position and velocity. In addition to the Schrödinger equation, Bohm posited a guidance equation that determines, on the basis of the system's wavefunction and the particles' initial positions, what their future trajectories should be. As much as any classical theory of point particles moving under force fields, then, Bohm's theory is deterministic. Amazingly, he was also able to show that, as long as the statistical distribution of initial positions is chosen so as to meet a “quantum equilibrium” condition, his theory is empirically equivalent to standard Copenhagen QM. In one sense this is a philosopher's nightmare: with genuine empirical equivalence as strong as Bohm obtained, it seems experimental evidence can never tell us which description of reality is correct. (Fortunately, we can safely assume that neither is perfectly correct, and hope that our Final Theory has no such empirically equivalent rivals.) In other senses, the Bohm theory is a philosopher's dream come true, eliminating much (but not all) of the weirdness of standard QM and restoring determinism to the physics of atoms and photons. The interested reader can find out more from the link above, and references therein.
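For a single spinless particle of mass $m$, the guidance equation takes its standard form: the particle's position $Q(t)$ evolves as

```latex
\frac{dQ}{dt} \;=\; \frac{\hbar}{m}\,
\operatorname{Im}\!\left(\frac{\nabla \psi}{\psi}\right)\Bigg|_{x = Q(t)},
```

with $\psi$ itself evolving deterministically under the Schrödinger equation. The quantum equilibrium condition is that initial positions be distributed according to $\rho = |\psi|^{2}$; it is this statistical assumption, not any indeterminism in the dynamics, that reproduces the Born-rule probabilities.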

This small survey of determinism's status in some prominent physical theories, as indicated above, does not really tell us anything about whether determinism is true of our world. Instead, it raises a couple of further disturbing possibilities for the time when we do have the Final Theory before us (if such time ever comes): first, we may have difficulty establishing whether the Final Theory is deterministic or not—depending on whether the theory comes loaded with unsolved interpretational or mathematical puzzles. Second, we may have reason to worry that the Final Theory, if indeterministic, has an empirically equivalent yet deterministic rival (as illustrated by Bohmian quantum mechanics).

5. Chance and Determinism

Some philosophers maintain that if determinism holds in our world, then there are no objective chances in our world. And often the word ‘chance’ here is taken to be synonymous with ‘probability’, so these philosophers maintain that there are no non-trivial objective probabilities for events in our world. (The caveat “non-trivial” is added here because on some accounts, under determinism, all future events that actually happen have probability, conditional on past history, equal to 1, and future events that do not happen have probability equal to zero. Non-trivial probabilities are probabilities strictly between zero and one.) Conversely, it is often held that if there are laws of nature that are irreducibly probabilistic, determinism must be false. (Some philosophers would go on to add that such irreducibly probabilistic laws are the basis of whatever genuine objective chances obtain in our world.)
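The triviality claim can be stated compactly: letting $H_{t}$ be the complete history of the world up to time $t$ and $L$ the (deterministic) laws, for any future event $A$,

```latex
P(A \mid H_{t} \,\&\, L) \in \{0, 1\},
```

since $H_{t}$ and $L$ jointly entail either $A$ or its negation; non-trivial objective probabilities would require $0 < P(A \mid H_{t} \,\&\, L) < 1$ for at least some events.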

The discussion of quantum mechanics in section 4 shows that it may be difficult to know whether a physical theory postulates genuinely irreducible probabilistic laws or not. If a Bohmian version of QM is correct, then the probabilities dictated by the Born rule are not irreducible. If that is the case, should we say that the probabilities dictated by quantum mechanics are not objective? Or should we say that we need to distinguish ‘chance’ and ‘probability’ after all—and hold that not all objective probabilities should be thought of as objective chances? The first option may seem hard to swallow, given the many-decimal-place accuracy with which such probability-based quantities as half-lives and cross-sections can be reliably predicted and verified experimentally with QM.

Whether objective chance and determinism are really incompatible or not may depend on what view of the nature of laws is adopted. On a “pushy explainers” view of laws such as that defended by Maudlin (2007), probabilistic laws are interpreted as irreducible dynamical transition-chances between allowed physical states, and the incompatibility of such laws with determinism is immediate. But what should a defender of a Humean view of laws, such as the BSA theory (section 2.4 above), say about probabilistic laws? The first thing that needs to be done is explain how probabilistic laws can fit into the BSA account at all, and this requires modification or expansion of the view, since as first presented the only candidates for laws of nature are true universal generalizations. If ‘probability’ were a univocal, clearly understood notion then this might be simple: We allow universal generalizations whose logical form is something like: “Whenever conditions Y obtain, Pr(A) = x”. But it is not at all clear how the meaning of ‘Pr’ should be understood in such a generalization; and it is even less clear what features the Humean pattern of actual events must have, for such a generalization to be held true. (See the entry on interpretations of probability and Lewis (1994).)

Humeans about laws believe that what laws there are is a matter of what patterns are there to be discerned in the overall mosaic of events that happen in the history of the world. It seems plausible enough that the patterns to be discerned may include not only strict associations (whenever X, Y), but also stable statistical associations. If the laws of nature can include either sort of association, a natural question to ask seems to be: why can't there be non-probabilistic laws strong enough to ensure determinism, and on top of them, probabilistic laws as well? If a Humean wanted to capture the laws not only of fundamental theories, but also non-fundamental branches of physics such as (classical) statistical mechanics, such a peaceful coexistence of deterministic laws plus further probabilistic laws would seem to be desirable. Loewer (2004) and Frigg & Hoefer (2015) offer forms of this peaceful coexistence that can be achieved within Lewis' version of the BSA account of laws.

6. Determinism and Human Action

In the introduction, we noted the threat that determinism seems to pose to human free agency. It is hard to see how, if the state of the world 1000 years ago fixes everything I do during my life, I can meaningfully say that I am a free agent, the author of my own actions, which I could have freely chosen to perform differently. After all, I have neither the power to change the laws of nature, nor to change the past! So in what sense can I attribute freedom of choice to myself?

Philosophers have not lacked ingenuity in devising answers to this question. There is a long tradition of compatibilists arguing that freedom is fully compatible with physical determinism; a prominent recent defender is John Fischer (1994, 2012). Hume went so far as to argue that determinism is a necessary condition for freedom—or at least, he argued that some causality principle along the lines of “same cause, same effect” is required. There have been equally numerous and vigorous responses by those who are not convinced. Can a clear understanding of what determinism is, and how it tends to succeed or fail in real physical theories, shed any light on the controversy?

Physics, particularly 20th century physics, does have one lesson to impart to the free will debate; a lesson about the relationship between time and determinism. Recall that we noticed that the fundamental theories we are familiar with, if they are deterministic at all, are time-symmetrically deterministic. That is, earlier states of the world can be seen as fixing all later states; but equally, later states can be seen as fixing all earlier states. We tend to focus only on the former relationship, but we are not led to do so by the theories themselves.

Nor does 20th- (or 21st-) century physics countenance the idea that there is anything ontologically special about the past, as opposed to the present and the future. In fact, it fails to use these categories in any respect, and teaches that in some senses they are probably illusory.[9] So there is no support in physics for the idea that the past is “fixed” in some way that the present and future are not, or that it has some ontological power to constrain our actions that the present and future do not have. It is not hard to uncover the reasons why we naturally tend to think of the past as special, and assume that both physical causation and physical explanation work only in the past-to-present/future direction (see the entry on thermodynamic asymmetry in time). But these pragmatic matters have nothing to do with fundamental determinism. If we shake loose from the tendency to see the past as special, when it comes to the relationships of determination, it may prove possible to think of a deterministic world as one in which each part bears a determining—or partial-determining—relation to other parts, but in which no particular part (region of space-time, event or set of events, ...) has a special, privileged determining role that undercuts the others. Hoefer (2002a) and Ismael (2016) use such considerations to argue in a novel way for the compatibility of determinism with human free agency.


  • Batterman, R. B., 1993, “Defining Chaos,” Philosophy of Science, 60: 43–66.
  • Bishop, R. C., 2002, “Deterministic and Indeterministic Descriptions,” in Between Chance and Choice, H. Atmanspacher and R. Bishop (eds.), Imprint Academic, 5–31.
  • Butterfield, J., 1998, “Determinism and Indeterminism,” in Routledge Encyclopedia of Philosophy, E. Craig (ed.), London: Routledge.
  • Callender, C., 2000, “Shedding Light on Time,” Philosophy of Science (Proceedings of PSA 1998), 67: S587–S599.
  • Callender, C., and Hoefer, C., 2001, “Philosophy of Space-time Physics,” in The Blackwell Guide to the Philosophy of Science, P. Machamer and M. Silberstein (eds), Oxford: Blackwell, pp. 173–198.
  • Cartwright, N., 1999, The Dappled World, Cambridge: Cambridge University Press.
  • Dupré, J., 2001, Human Nature and the Limits of Science, Oxford: Oxford University Press.
  • Dürr, D., Goldstein, S., and Zanghì, N., 1992, “Quantum Chaos, Classical Randomness, and Bohmian Mechanics,” Journal of Statistical Physics, 68: 259–270. [Preprint available online in gzip'ed Postscript.]
  • Earman, J., 1984, “Laws of Nature: The Empiricist Challenge,” in D. M. Armstrong, R. J. Bogdan (ed.), Dordrecht: Reidel, pp. 191–223.
  • –––, 1986, A Primer on Determinism, Dordrecht: Reidel.
  • –––, 1995, Bangs, Crunches, Whimpers, and Shrieks: Singularities and Acausalities in Relativistic Spacetimes, New York: Oxford University Press.
  • Earman, J., and Norton, J., 1987, “What Price Spacetime Substantivalism: the Hole Story,” British Journal for the Philosophy of Science, 38: 515–525.
  • –––, 1998, “Comments on Laraudogoitia's ‘Classical Particle Dynamics, Indeterminism and a Supertask’,” British Journal for the Philosophy of Science, 49: 123–133.
  • Fischer, J. M., 1994, The Metaphysics of Free Will, Oxford: Blackwell Publishers.
  • –––, 2012, Deep Control: Essays on Free Will and Value, New York: Oxford University Press.
  • Ford, J., 1989, “What is chaos, that we should be mindful of it?” in The New Physics, P. Davies (ed.), Cambridge: Cambridge University Press, 348–372.
  • Frigg, R., and Hoefer, C., 2015, “The Best Humean System for Statistical Mechanics,” Erkenntnis, 80 (3 Supplement): 551–574.
  • Gisin, N., 1991, “Propensities in a Non-Deterministic Physics”, Synthese, 89: 287–297.
  • Gutzwiller, M., 1990, Chaos in Classical and Quantum Mechanics, New York: Springer-Verlag.
  • Hitchcock, C., 1999, “Contrastive Explanation and the Demons of Determinism,” British Journal for the Philosophy of Science, 50: 585–612.
  • Hoefer, C., 1996, “The Metaphysics of Spacetime Substantivalism,” The Journal of Philosophy, 93: 5–27.
  • –––, 2002a, “Freedom From the Inside Out,” in Time, Reality and Experience, C. Callender (ed.), Cambridge: Cambridge University Press, pp. 201–222.
  • –––, 2002b, “For Fundamentalism,” Philosophy of Science v. 70, no. 5 (PSA 2002 Proceedings), pp. 1401–1412.
  • Hutchison, K., 1993, “Is Classical Mechanics Really Time-reversible and Deterministic?” British Journal for the Philosophy of Science, 44: 307–323.
  • Ismael, J. 2016, How Physics Makes Us Free, Oxford: Oxford University Press.
  • Laplace, P., 1820, Essai Philosophique sur les Probabilités, forming the introduction to his Théorie Analytique des Probabilités, Paris: V. Courcier; repr. F.W. Truscott and F.L. Emory (trans.), A Philosophical Essay on Probabilities, New York: Dover, 1951.
  • Leiber, T., 1998, “On the Actual Impact of Deterministic Chaos,” Synthese, 113: 357–379.
  • Lewis, D., 1973, Counterfactuals, Oxford: Blackwell.
  • –––, 1994, “Humean Supervenience Debugged,” Mind, 103: 473–490.
  • Loewer, B., 2004, “Determinism and Chance,” Studies in History and Philosophy of Modern Physics, 32: 609–620.
  • Malament, D., 2008, “Norton's Slippery Slope,” Philosophy of Science, 75(4): 799–816.
  • Maudlin, T. 2007, The Metaphysics Within Physics, Oxford: Oxford University Press.
  • Melia, J., 1999, “Holes, Haecceitism and Two Conceptions of Determinism,” British Journal for the Philosophy of Science, 50: 639–664.
  • Mellor, D. H. 1995, The Facts of Causation, London: Routledge.
  • Norton, J.D., 2003, “Causation as Folk Science,” Philosophers' Imprint, 3(4).
  • Ornstein, D. S., 1974, Ergodic Theory, Randomness, and Dynamical Systems, New Haven: Yale University Press.
  • Popper, K., 1982, The Open Universe: An Argument for Indeterminism, London: Routledge.
  • Ruelle, D., 1991, Chance and Chaos, London: Penguin.
  • Russell, B., 1912, “On the Notion of Cause,” Proceedings of the Aristotelian Society, 13: 1–26.
  • Shanks, N., 1991, “Probabilistic physics and the metaphysics of time,” South African Journal of Philosophy, 10: 37–44.
  • Sinai, Ya. G., 1970, “Dynamical systems with elastic reflections,” Russian Mathematical Surveys, 25: 137–189.
  • Suppes, P., 1993, “The Transcendental Character of Determinism,” Midwest Studies in Philosophy, 18: 242–257.
  • –––, 1999, “The Noninvariance of Deterministic Causal Models,” Synthese, 121: 181–198.
  • Suppes, P. and M. Zanotti, 1996, Foundations of Probability with Applications. New York: Cambridge University Press.
  • van Fraassen, B., 1989, Laws and Symmetry, Oxford: Clarendon Press.
  • Van Kampen, N. G., 1991, “Determinism and Predictability,” Synthese, 89: 273–281.
  • Werndl, C., 2016, “Determinism and Indeterminism,” in The Oxford Handbook of Philosophy of Science, Oxford: Oxford University Press.
  • Winnie, J. A., 1996, “Deterministic Chaos and the Nature of Chance,” in The Cosmos of Science—Essays of Exploration, J. Earman and J. Norton (eds.), Pittsburgh: University of Pittsburgh Press, pp. 299–324.
  • Xia, Z., 1992, “The existence of noncollision singularities in Newtonian systems,” Annals of Mathematics, 135: 411–468.


The author would like to acknowledge the invaluable help of John Norton in the preparation of this entry. Thanks also to A. Ilhamy Amiry for bringing to my attention some errors in an earlier version of this entry.

1. Rational Deliberation

1.1 Free Will as Choosing on the Basis of One's Desires

On a minimalist account, free will is the ability to select a course of action as a means of fulfilling some desire. David Hume, for example, defines liberty as “a power of acting or of not acting, according to the determination of the will.” (1748, sect.viii, part 1). And we find in Jonathan Edwards (1754) a similar account of free willings as those which proceed from one's own desires.

One reason to deem this insufficient is that it is consistent with the goal-directed behavior of some animals whom we do not suppose to be morally responsible agents. Such animals lack not only an awareness of the moral implications of their actions but also any capacity to reflect on their alternatives and their long-term consequences. Indeed, it is plausible that they have little by way of a self-conception as an agent with a past and with projects and purposes for the future. (See Baker 2000 on the ‘first-person perspective.’)

1.2 Free Will as Deliberative Choosing on the Basis of Desires and Values

A natural suggestion, then, is to modify the minimalist thesis by taking account of (what may be) distinctively human capacities and self-conception. And indeed, philosophers since Plato have commonly distinguished the ‘animal’ and ‘rational’ parts of our nature, with the latter implying a great deal more psychological complexity. Our rational nature includes our ability to judge some ends as ‘good’ or worth pursuing and value them even though satisfying them may result in considerable unpleasantness for ourselves. (Note that such judgments need not be based in moral value.) We might say that we act with free will when we act upon our considered judgments/valuings about what is good for us, whether or not our doing so conflicts with an ‘animal’ desire. (See Watson 2003a for a subtle development of this sort of view.) But this would seem unduly restrictive, since we clearly hold many people responsible for actions proceeding from ‘animal’ desires that conflict with their own assessment of what would be best in the circumstances. More plausible is the suggestion that one acts with free will when one's deliberation is sensitive to one's own judgments concerning what is best in the circumstances, whether or not one acts upon such a judgment.

Here we are clearly in the neighborhood of the ‘rational appetite’ accounts of will one finds in the medieval Aristotelians. The most elaborate medieval treatment is Thomas Aquinas's.[1] His account involves identifying several distinct varieties of willings. Here I note only a few of his basic claims. Aquinas thinks our nature determines us to will certain general ends ordered to the most general goal of goodness. These we will of necessity, not freely. Freedom enters the picture when we consider various means to these ends, none of which appear to us either as unqualifiedly good or as uniquely satisfying the end we wish to fulfill. There is, then, free choice of means to our ends, along with a more basic freedom not to consider something, thereby perhaps avoiding willing it unavoidably once we recognized its value. Free choice is an activity that involves both our intellectual and volitional capacities, as it consists in both judgment and active commitment. A thorny question for this view is whether will or intellect is the ultimate determinant of free choices. How we understand Aquinas on this point will go a long way toward determining whether or not he is a sort of compatibilist about freedom and determinism. (See below. Good expositions of Aquinas' account are Donagan 1985, MacDonald 1998, Stump 2003, Ch.9, and Pasnau 2002, Ch.7.)

There are two general worries about theories of free will that principally rely on the capacity to deliberate about possible actions in the light of one's conception of the good. First, there are agents who deliberately choose to act as they do but who are motivated to do so by a compulsive, controlling sort of desire. (And there seems to be no principled bar to a compulsive desire's informing a considered judgment of the agent about what the good is for him.) Such agents are not willing freely. (Wallace 2003 and Levy 2007, Ch.6, offer accounts of the way addiction impairs the will.) Secondly, we can imagine a person's psychology being externally manipulated by another agent (via neurophysiological implant, say), such that the agent is caused to deliberate and come to desire strongly a particular action which he previously was not disposed to choose. The deliberative process could be perfectly normal, reflective, and rational, yet the resulting choice would seemingly not be freely made. The agent's freedom seems undermined or at least greatly diminished by such psychological tampering (Mele 1995).

1.3 Self-mastery, Rightly-Ordered Appetite

Some theorists are much impressed by cases of inner, psychological compulsion and define freedom of will in contrast to this phenomenon. For such thinkers, true freedom of the will involves liberation from the tyranny of base desires and acquisition of desires for the Good. Plato, for example, posits rational, spirited, and appetitive aspects to the soul and holds that free willings issue from the higher, rational part alone; in other cases, one is dominated by the irrational desires of the two lower parts.[2] This is particularly characteristic of those working in a theological context—for example, the New Testament writer St. Paul, speaking of Christian freedom (Romans vi-viii; Galatians v), and those influenced by him on this point, such as Augustine. (The latter, in both early and later writings, allows for a freedom of will that is not ordered to the good, but maintains that it is of less value than the rightly-ordered freedom. See, for example, the discussion in Books II-III of On Free Choice.) More recently, Susan Wolf (1990) defends an asymmetry thesis concerning freedom and responsibility. On her view, an agent acts freely only if he had the ability to choose the True and the Good. For an agent who does so choose, the requisite ability is automatically implied. But those who reject the Good choose freely only if they could have acted differently. This is a further substantive condition on freedom, making freedom of will a more demanding condition in cases of bad choices.

In considering such rightly-ordered-appetites views of freedom, I again focus on abstract features common to all. These views explicitly handle the inner-compulsion worry facing the simple deliberation-based accounts. The other problem, that of external manipulation, could perhaps be handled through the addition of an historical requirement: agents will freely only if their willings are not in part explicable by episodes of external manipulation which bypass their critical and deliberative faculties (Mele 1995, 2003). But another problem suggests itself: an agent who was a ‘natural saint,’ always and effortlessly choosing the good with no contrary inclination, would not have freedom of will among his virtues. Doubtless we would greatly admire such a person, but would it be an admiration suffused with moral praise of the person or would it, rather, be restricted to the goodness of the person's qualities? (Cf. Kant, 1788.) The appropriate response to such a person, it seems, is analogous to aesthetic appreciation of natural beauty, in contrast to the admiration of the person who chooses the good in the face of real temptation to act selfishly. Since this view of freedom of will as orientation to the good sometimes results from theological reflections, it is worth noting that other theologian-philosophers emphasize the importance for human beings of being able to reject divine love: love of God that is not freely given—given in the face of a significant possibility of one's having not done so—would be a sham, all the more so since, were it inevitable, it would find its ultimate and complete explanation in God Himself.

2. Ownership

Harry Frankfurt (1982) presents an insightful and original way of thinking about free will. He suggests that a central difference between human and merely animal activity is our capacity to reflect on our desires and beliefs and form desires and judgments concerning them. I may want to eat a candy bar (first-order desire), but I also may want not to want this (second-order desire) because of the connection between habitual candy eating and poor health. This difference, he argues, provides the key to understanding both free action and free will. (These are quite different, in Frankfurt's view, with free will being the more demanding notion. Moreover, moral responsibility for an action requires only that the agent acted freely, not that the action proceeded from a free will.)

On Frankfurt's analysis, I act freely when the desire on which I act is one that I desire to be effective. This second-order desire is one with which I identify: it reflects my true self. (Compare the addict: typically, the addict acts out of a desire which he does not want to act upon. His will is divided, and his actions proceed from desires with which he does not reflectively identify. Hence, he is not acting freely.) My will is free when I am able to make any of my first-order desires the one upon which I act. As it happens, I will to eat the candy bar, but I could have willed to refrain from doing so.

With Frankfurt's account of free will, much hangs on what being able to will otherwise comes to, and on this Frankfurt is officially neutral. (See the related discussion below on ability to do otherwise.) But as he connects moral responsibility only to his weaker notion of free action, it is fitting to consider its adequacy here. The central objection that commentators have raised is this: what's so special about higher-order willings or desires? (See in particular Watson 2003a.) Why suppose that they inevitably reflect my true self, as against first-order desires? Frankfurt is explicit that higher-order desires need not be rooted in a person's moral or even settled outlook (1982, 89, n.6). So it seems that, in some cases, a first-order desire may be much more reflective of my true self (more “internal to me,” in Frankfurt's terminology) than a weak, faint desire to be the sort of person who wills differently.

In later writings, Frankfurt responds to this worry first by appealing to “decisions made without reservations” (“Identification and Externality” and “Identification and Wholeheartedness” in Frankfurt, 1988) and then by appealing to higher-order desires with which one is “satisfied,” such that one has no inclination to make changes to them (1992). But the absence of an inclination to change the desire does not obviously amount to the condition of freedom-conferring identification. It seems that such a negative state of satisfaction can be one that I just find myself with, one that I neither approve nor disapprove (Pettit, 2001, 56).

Furthermore, we can again imagine external manipulation consistent with Frankfurt's account of freedom but inconsistent with freedom itself. Armed with the wireless neurophysiology-tampering technology of the late 21st century, one might discreetly induce a second-order desire in me to be moved by a first-order desire—a higher-order desire with which I am satisfied—and then let me deliberate as normal. Clearly, this desire should be deemed “external” to me, and the action that flows from it unfree.

3. Causation and Control

Our survey of several themes in philosophical accounts of free will suggests that a—perhaps the—root issue is that of control. Clearly, our capacity for deliberation and the potential sophistication of some of our practical reflections are important conditions on freedom of will. But any proposed analysis of free will must also ensure that the process it describes is one that was up to, or controlled by, the agent.

Fantastic scenarios of external manipulation and less fantastic cases of hypnosis are not the only, or even primary, ones to give philosophers pause. It is consistent with my deliberating and choosing ‘in the normal way’ that my developing psychology and choices over time are part of an ineluctable system of causes necessitating effects. It might be, that is, that underlying the phenomena of purpose and will in human persons is an all-encompassing, mechanistic world-system of ‘blind’ cause and effect. Many accounts of free will are constructed against the backdrop possibility (whether accepted as actual or not) that each stage of the world is determined by what preceded it by impersonal natural law. As always, there are optimists and pessimists.

3.1 Free Will as Guidance Control

John Martin Fischer (1994) distinguishes two sorts of control over one's actions: guidance and regulative. A person exerts guidance control over his own actions insofar as they proceed from a ‘weakly’ reasons-responsive (deliberative) mechanism. This obtains just in case there is some possible scenario where the agent is presented with a sufficient reason to do otherwise, the mechanism that led to the actual choice is operative, and it issues in a different choice, one appropriate to the imagined reason. In Fischer and Ravizza (1998), the account is elaborated and refined. They require, more strongly, that the mechanism be the person's own mechanism (ruling out external manipulation) and that it be ‘moderately’ responsive to reasons: one that is “regularly receptive to reasons, some of which are moral reasons, and at least weakly reactive to reason” (82, emphasis added). Receptivity is evinced through an understandable pattern of reasons recognition—beliefs of the agent about what would constitute a sufficient reason for undertaking various actions. (For details, see Fischer and Ravizza 1998, 69–73, and Fischer's contribution to Fischer et al. 2007.)

None of this, importantly, requires ‘regulative’ control: a control involving the ability of the agent to choose and act differently in the actual circumstances. Regulative control requires alternative possibilities open to the agent, whereas guidance control is determined by characteristics of the actual sequence issuing in one's choice. Fischer allows that there is a notion of freedom that requires regulative control but does not believe that this kind of freedom is required for moral responsibility. (In this, he is persuaded by a form of argument originated by Harry Frankfurt. See Frankfurt 1969 and Fischer 1994, Ch.7 for an important development of the argument. The argument has been debated extensively in recent years. See Widerker and McKenna 2003 for a representative sampling. For very recent work, see Franklin 2009 and Fischer 2010 and the works they cite.)

3.2 Free Will as Ultimate Origination (Ability to do Otherwise)

Many do not follow Fischer here, however, and maintain the traditional view that the sort of freedom required for moral responsibility does indeed require that the agent could have acted differently. As Aristotle put it, “…when the origin of the actions is in him, it is also up to him to do them or not to do them” (NE, Book III).[3]

A flood of ink has been spilled, especially in the modern era, on how to understand the concept of being able to do otherwise. On one side are those who maintain that it is consistent with my being able to do otherwise that the past (including my character and present beliefs and desires) and the basic laws of nature logically entail that I do what I actually do. These are the ‘compatibilists,’ holding that freedom and causal determinism are compatible. (For discussion, see O'Connor, 2000, Ch.1; Kapitan 2001; van Inwagen 2001; Haji 2009; compatibilism; and incompatibilism: arguments for.) Conditional analyses of ability to do otherwise have been popular among compatibilists. The general idea here is that to say that I am able to do otherwise is to say that I would do otherwise if it were the case that … , where the ellipsis is filled by some elaboration of “I had an appropriately strong desire to do so, or I had different beliefs about the best available means to satisfy my goal, or … .” In short: something about my prevailing character or present psychological states would have differed, and so would have brought about a different outcome in my deliberation.

Incompatibilists think that something stronger is required: for me to act with free will requires that there are a plurality of futures open to me consistent with the past (and laws of nature) being just as they were—that I be able ‘to add to the given past’ (Ginet 1990). I could have chosen differently even without some further, non-actual consideration's occurring to me and ‘tipping the scales of the balance’ in another direction. Indeed, from their point of view, the whole scale-of-weights analogy is wrongheaded: free agents are not mechanisms that respond invariably to specified ‘motive forces.’ They are capable of acting upon any of a plurality of motives making attractive more than one course of action. Ultimately, the agent must determine himself this way or that.

We may distinguish two broad families of ‘incompatibilist’ or ‘indeterminist’ self-determination accounts. The more radical group holds that the agent who determines his own will is not causally influenced by external causal factors, including his own character. Descartes, in the midst of exploring the scope and influence of ‘the passions,’ declares that “the will is by its nature so free that it can never be constrained” (PWD, v.I, 343). And as we've seen, he believed that such freedom is present on every occasion when we make a conscious choice—even, he writes, “when a very evident reason moves us in one direction….” (PWD, v.III, 245). More recently, Jean-Paul Sartre notoriously held that human beings have ‘absolute freedom’: “No limits to my freedom can be found except freedom itself, or, if you prefer, we are not free to cease being free” (567). His views on freedom flowed from his radical conception of human beings as lacking any kind of positive nature. Instead, we are ‘non-beings’ whose being, moment to moment, is simply to choose:

For human reality, to be is to choose oneself; nothing comes to it either from the outside or from within which it can receive or accept….it is entirely abandoned to the intolerable necessity of making itself be, down to the slightest details. Thus freedom…is the being of man, i.e., his nothingness of being. (568–9)

The medieval philosopher Scotus and mid-twentieth century philosopher C.A. Campbell both appear to agree with Descartes and Sartre on the lack of direct causal influence on the activity of free choice while allowing that the scope of possibilities for what I might thus will may be more or less constricted. So while Scotus holds that “nothing other than the will is the total cause” of its activity, he grants (with Aquinas and other medieval Aristotelians) that we are not capable of willing something in which we see no good, nor of positively repudiating something which appears to us as unqualifiedly good. Contrary to Sartre, we come with a ‘nature’ that circumscribes what we might conceivably choose, and our past choices and environmental influences also shape the possibilities for us at any particular time. But if we are presented with what we recognize as an unqualified good, we still can choose to refrain from willing it. As for Campbell, while he holds that character cannot explain a free choice, he supposes that “[t]here is one experiential situation, and one only, … in which there is any possibility of the act of will not being in accordance with character; viz. the situation in which the course which formed character prescribes is a course in conflict with the agent's moral ideal: in other words, the situation of moral temptation” (1967, 46). (Van Inwagen (1994, 1995) is another proponent of the idea that free will is exercised in but a small subset of our choices, although his position is less extreme on this point than Campbell's. Fischer and Ravizza 1992, O'Connor 2000, Ch.5, and Clarke 2003, Ch.7 all criticize van Inwagen's argument for this position.)

A more moderate grouping within the self-determination approach to free will allows that beliefs, desires, and external factors all can causally influence the act of free choice itself. But theorists within this camp differ sharply on the metaphysical nature of those choices and of the causal role of reasons. We may distinguish three varieties. I will discuss them only briefly, as they are explored at length in incompatibilist (nondeterministic) theories of free will.

First is a noncausal (or ownership) account (Ginet 1990, 2002; McCann 1998; Pink 2004; Goetz 2002). According to this view, I control my volition or choice simply in virtue of its being mine—its occurring in me. I do not exert a special kind of causality in bringing it about; instead, it is an intrinsically active event, intrinsically something I do. While there may be causal influences upon my choice, there need not be, and any such causal influence is wholly irrelevant to understanding why it occurs. Reasons provide an autonomous, non-causal form of explanation. Provided my choice is not wholly determined by prior factors, it is free and under my control simply in virtue of being mine.

Proponents of the event-causal account (e.g. Nozick 1995; Ekstrom 2001; and Franklin forthcoming) would say that uncaused events of any kind would be random and uncontrolled by anyone, and so could hardly count as choices that an agent made. They hold that reasons influence choices precisely by causing them. Choices are free insofar as they are not deterministically caused, and so might not have occurred in just the circumstances in which they did occur. (See nondeterministic theories of free will and probabilistic causation.) A special case of the event-causal account of self-determination is Kane (1996 and his contribution to Fischer et al., 2007). Kane believes that the free choices of greatest significance to an agent's autonomy are ones that are preceded by efforts of will within the process of deliberation. These are cases where one's will is conflicted, as when one's duty or long-term self-interest compete with a strong desire for a short-term good. As one struggles to sort out and prioritize one's own values, the possible outcomes are not merely undetermined, but also indeterminate: at each stage of the struggle, the possible outcomes have no specific objective probability of occurring. This indeterminacy, Kane believes, is essential to freedom of will.

Finally, there are those who believe freedom of will consists in a distinctively personal form of causality, commonly referred to as “agent causation.” The agent himself causes his choice or action, and this is not to be reductively analyzed as an event within the agent causing the choice. (Compare our ready restatement of “the rock broke the window” into the more precise “the rock's having momentum M at the point of contact with the window caused the window's subsequent shattering.”) This view is given clear articulation by Thomas Reid:

I grant, then, that an effect uncaused is a contradiction, and that an event uncaused is an absurdity. The question that remains is whether a volition, undetermined by motives, is an event uncaused. This I deny. The cause of the volition is the man that willed it. (Letter to James Gregory, in 1967, 88)

Roderick Chisholm advocated this view of free will in numerous writings (e.g., 1982 and 1976). And recently it has been developed in different forms by Randolph Clarke (1993, 1996, 2003) and O'Connor (2000, 2005, 2008a, and 2010). Nowadays, many philosophers view this account as of doubtful coherence (e.g., Dennett 1984). For some, this very idea of causation by a substance just as such is perplexing (Ginet 1997 and Clarke 2003, Ch.10). Others see it as difficult to reconcile with the causal role of reasons in explaining choices. (See Feldman and Buckareff 2003 and Hiddleston 2005. Clarke and O'Connor devote considerable effort to addressing this concern.) And yet others hold that, coherent or not, it is inconsistent with seeing human beings as part of the natural world of cause and effect (Pereboom 2001, 2004, and 2005).

3.3 Do We Have Free Will?

A recent trend is to suppose that agent causation accounts capture, as well as possible, our prereflective idea of responsible, free action. But the failure of philosophers to work the account out in a fully satisfactory and intelligible form reveals that the very idea of free will (and so of responsibility) is incoherent (Strawson 1986) or at least inconsistent with a world very much like our own (Pereboom 2001). Smilansky (2000) takes a more complicated position, on which there are two ‘levels’ on which we may assess freedom, ‘compatibilist’ and ‘ultimate’. On the ultimate level of evaluation, free will is indeed incoherent. (Strawson, Pereboom, and Smilansky all provide concise defenses of their positions in Kane 2002.)

The will has also recently become a target of empirical study in neuroscience and cognitive psychology. Benjamin Libet (2002) conducted experiments designed to determine the timing of conscious willings or decisions to act in relation to brain activity associated with the physical initiation of behavior. Interpretation of the results is highly controversial. Libet himself concludes that the studies provide strong evidence that an action is already underway shortly before the agent consciously wills to perform it. As a result, we do not consciously initiate our actions, though he suggests that we might nonetheless retain the ability to veto actions that are initiated by unconscious psychological structures. Wegner (2002) amasses a range of studies (including those of Libet) to argue that the notion that human actions are ever initiated by their own conscious willings is simply a deeply-entrenched illusion and proceeds to offer a hypothesis concerning the reason this illusion is generated within our cognitive systems. Mele (2009) and O'Connor (2009b) argue that the data adduced by Libet, Wegner, and others wholly fail to support their revisionary conclusions.

4. Theological Wrinkles

A large portion of Western philosophical writing on free will was and is written within an overarching theological framework, according to which God is the ultimate source and sustainer of all else. Some of these thinkers draw the conclusion that God must be a sufficient, wholly determining cause for everything that happens; all suppose that every creaturely act necessarily depends on the explanatorily prior, cooperative activity of God. It is also presumed that human beings are free and responsible (on pain of attributing evil in the world to God alone, and so impugning His perfect goodness). Hence, those who believe that God is omni-determining typically are compatibilists with respect to freedom and (in this case) theological determinism. Edwards (1754) is a good example. But those who suppose that God's sustaining activity (and special activity of conferring grace) is only a necessary condition on the outcome of human free choices need to tell a more subtle story, on which omnipotent God's cooperative activity can be (explanatorily) prior to a human choice and yet the outcome of that choice be settled only by the choice itself. For important medieval discussions—the period of the apex of treatments of philosophical/theological matters—see the relevant portions of Aquinas BW and Scotus QAM. For an example of a more recent discussion, see Quinn 1983.

Another issue concerns the impact on human freedom of knowledge of God, the ultimate Good. Many philosophers, especially the medieval Aristotelians, were drawn to the idea that human beings cannot but will that which they take to be an unqualified good. (Duns Scotus appears to be an important exception to this consensus.) Hence, in the afterlife, when humans ‘see God face to face,’ they will inevitably be drawn to Him. Murray (1993, 2002) argues that a good God would choose to make His existence and character less than certain for human beings, for the sake of their freedom. (He will do so, the argument goes, at least for a period of time in which human beings participate in their own character formation.) If it is a good for human beings that they freely choose to respond in love to God and to act in obedience to His will, then God must maintain an ‘epistemic distance’ from them lest they be overwhelmed by His goodness and respond out of necessity, rather than freedom. (See also the other essays in Howard-Snyder and Moser 2002.)

Finally, there is the question of the freedom of God himself. Perfect goodness is an essential, not acquired, attribute of God. God cannot lie or be in any way immoral in His dealings with His creatures. Unless we take the minority position on which this is a trivial claim, since whatever God does definitionally counts as good, this appears to be a significant, inner constraint on God's freedom. Did we not contemplate immediately above that human freedom would be curtailed by our having an unmistakable awareness of what is in fact the Good? And yet is it not passing strange to suppose that God should be less than perfectly free?

One suggested solution to this puzzle begins by reconsidering the relationship of two strands in (much) thinking about freedom of will: being able to do otherwise and being the ultimate source of one's will. Contemporary discussions of free will often emphasize the importance of being able to do otherwise. Yet it is plausible (Kane 1996) that the core metaphysical feature of freedom is being the ultimate source, or originator, of one's choices, and that being able to do otherwise is closely connected to this feature. For human beings or any created persons who owe their existence to factors outside themselves, the only way their acts of will could find their ultimate origin in themselves is for such acts not to be determined by their character and circumstances. For if all my willings were wholly determined, then if we were to trace my causal history back far enough, we would ultimately arrive at external factors that gave rise to me, with my particular genetic dispositions. My motives at the time would not be the ultimate source of my willings, only the most proximate ones. Only by there being less than deterministic connections between external influences and choices, then, is it possible for me to be an ultimate source of my activity, concerning which I may truly say, “the buck stops here.”

Here, as so often, things are different in the case of God. Even if God's character absolutely precludes His performing certain actions in certain contexts, this will not imply that some external factor is in any way a partial origin of His willings and refrainings from willing. Indeed, this would not be so even if He were determined by character to will everything which He wills. For God's nature owes its existence to nothing. So God would be the sole and ultimate source of His will even if He couldn't will otherwise.

Well, then, might God have willed otherwise in any respect? The majority view in the history of philosophical theology is that He indeed could have. He might have chosen not to create anything at all. And given that He did create, He might have created any number of alternatives to what we observe. But there have been noteworthy thinkers who argued the contrary position, along with others who clearly felt its pull even while resisting it. The most famous such thinker is Leibniz (1710), who argued that God, being both perfectly good and perfectly powerful, cannot fail to will the best possible world. Leibniz insisted that this is consistent with saying that God is able to will otherwise, although his defense of this last claim is notoriously difficult to make out satisfactorily. Many read Leibniz, malgré lui, as one whose basic commitments imply that God could not have willed other than He does in any respect.

One might challenge Leibniz's reasoning on this point by questioning the assumption that there is a uniquely best possible Creation (an option noted by Adams 1987, though he challenges instead Leibniz's conclusion based on it). One way this could be is if there is no complete ordering of worlds: some worlds are sufficiently different in kind that they are incommensurate with each other (neither is better than the other, nor are they equal). Another way this could be is if there is no upper limit on the goodness of worlds: for every possible world God might have created, there are others (infinitely many, in fact) which are better. If such is the case, one might argue, it is reasonable for God to choose arbitrarily which world to create from among those worlds exceeding some threshold value of overall goodness.

However, William Rowe (2004) has countered that the thesis that there is no upper limit on the goodness of worlds has a very different consequence: it shows that there could not be a morally perfect Creator! For suppose our world has an on-balance moral value of n and that God chose to create it despite being aware of possibilities having values higher than n that He was able to create. It seems we can now imagine a morally better Creator: one having the same options who chooses to create a better world. For critical replies to Rowe, see Almeida (2008), Ch. 1; O'Connor 2008b; and Kraay (2010).

Finally, Norman Kretzmann (1997, 220–25) has argued in the context of Aquinas's theological system that there is strong pressure to say that God must have created something or other, though it may well have been open to Him to create any of a number of contingent orders. The reason is that there is no plausible account of how an absolutely perfect God might have a resistible motivation—one consideration among other, competing considerations—for creating something rather than nothing. (It obviously cannot have to do with any sort of utility, for example.) The best general understanding of God's being motivated to create at all—one which in places Aquinas himself comes very close to endorsing—is to see it as reflecting the fact that God's very being, which is goodness, necessarily diffuses itself. Perfect goodness will naturally communicate itself outwardly; God who is perfect goodness will naturally create, generating a dependent reality that imperfectly reflects that goodness. (Wainwright (1996) is a careful discussion of a somewhat similar line of thought in Jonathan Edwards. See also Rowe 2004.)

Further Reading

Pereboom (2009) samples a number of important historical and contemporary writers on free will. Bourke (1964) and Dilman (1999) provide critical overviews of many historically significant writers. Fischer, Kane, Pereboom, and Vargas (2007) provide a readable yet careful debate that sets out the main views of four leading thinkers. For thematic treatments, see Fischer (1994); Kane (1996), esp. Chs. 1–2 and 5–6; Ekstrom (2000); Watson (2003b); and the outstanding collection of lengthy survey articles in Kane (2002, with an updated version due to appear in 2011). Finally, for a topically comprehensive set of important contemporary essays on free will, see the four-volume Fischer (2005).


  • Adams, Robert (1987). “Must God Create the Best?,” in The Virtue of Faith and Other Essays in Philosophical Theology. New York: Oxford University Press, 51–64.
  • Almeida, Michael (2008). The Metaphysics of Perfect Beings. New York: Routledge.
  • Aquinas, Thomas (BW / 1945). Basic Writings of Saint Thomas Aquinas (2 vol.). New York: Random House.
  • ––– (SPW / 1993). Selected Philosophical Writings, ed. T. McDermott. Oxford: Oxford University Press.
  • Aristotle (NE / 1985). Nicomachean Ethics, translated by Terence Irwin. Indianapolis: Hackett Publishing, 1985.
  • Augustine (FCW / 1993). On the Free Choice of the Will, tr. Thomas Williams. Indianapolis: Hackett Publishing.
  • Ayer, A.J. (1982). “Freedom and Necessity,” in Watson (1982b), ed., 15–23.
  • Baker, Lynne (2000). Persons and Bodies: A Constitution View. Cambridge: Cambridge University Press.
  • Botham, Thad (2008). Agent-causation Revisited. Saarbrucken: VDM Verlag Dr. Mueller.
  • Bourke, Vernon (1964). Will in Western Thought. New York: Sheed and Ward.
  • Campbell, C.A. (1967). In Defence of Free Will & other essays. London: Allen & Unwin Ltd.
  • Campbell, Joseph Keim (2007). “Free Will and the Necessity of the Past,” Analysis 67 (294), 105–111.
  • Chisholm, Roderick (1976). Person and Object. LaSalle: Open Court.
  • ––– (1982). “Human Freedom and the Self,” in Watson (1982b), 24–35.
  • Clarke, Randolph (1993). “Toward a Credible Agent-Causal Account of Free Will,” in O'Connor (1995), ed., 201–15.
  • ––– (1995). “Indeterminism and Control,” American Philosophical Quarterly 32, 125–138.
  • ––– (1996). “Agent Causation and Event Causation in the Production of Free Action,” Philosophical Topics 24 (Fall), 19–48.
  • ––– (2003). Libertarian Accounts of Free Will. Oxford: Oxford University Press.
  • ––– (2005). “Agent Causation and the Problem of Luck,” Pacific Philosophical Quarterly 86 (3), 408-421.
  • ––– (2009). “Dispositions, Abilities to Act, and Free Will: The New Dispositionalism,” Mind 118 (470), 323-351.
  • Dennett, Daniel (1984). Elbow Room: The Varieties of Free Will Worth Having. Cambridge. MA: MIT Press.
  • Descartes, René (PWD / 1984). Meditations on First Philosophy [1641] and Passions of the Soul [1649], in The Philosophical Writings of Descartes, vol. I–III, translated by Cottingham, J., Stoothoff, R., & Murdoch, D. Cambridge: Cambridge University Press.
  • Donagan, Alan (1985). Human Ends and Human Actions: An Exploration in St. Thomas's Treatment. Milwaukee: Marquette University Press.
  • Dilman, Ilham (1999). Free Will: An Historical and Philosophical Introduction. London: Routledge.
  • Double, Richard (1991). The Non-Reality of Free Will. New York: Oxford University Press.
  • Duns Scotus, John (QAM / 1986). “Questions on Aristotle's Metaphysics IX, Q.15” in Duns Scotus on the Will and Morality [selected and translated by Allan B. Wolter, O.F.M.]. Washington: Catholic University of America Press, 1986.
  • Edwards, Jonathan (1754 / 1957). Freedom of Will, ed. P. Ramsey. New Haven: Yale University Press.
  • Ekstrom, Laura (2000). Free Will: A Philosophical Study. Boulder, CO: Westview Press.
  • ––– (2001). “Libertarianism and Frankfurt-Style Cases,” in Kane (2002), 309–322.
  • ––– (2003). “Free Will, Chance, and Mystery,” Philosophical Studies, 113, 153–180.
  • Farrer, Austin (1958). The Freedom of the Will. London: Adam & Charles Black.
  • Fischer, John Martin (1994). The Metaphysics of Free Will. Oxford: Blackwell.
  • ––– (1999). “Recent Work on Moral Responsibility,” Ethics 110, 93–139.
  • ––– (2001). “Frankfurt-type Examples and Semi-Compatibilism,” in Kane (2002), 281–308.
  • ––– (2005), ed. Free Will: Critical Concepts in Philosophy, Vols. I–IV. London: Routledge.
  • ––– (2006). My Way: Essays on Moral Responsibility. New York: Oxford University Press.
  • ––– (2007). “The Importance of Frankfurt-Style Argument,” Philosophical Quarterly 57 (228), 464–471.
  • ––– (2010). “The Frankfurt Cases: The Moral of the Stories,” Philosophical Review 119 (3), 315–316.
  • Fischer, John Martin, Kane, Robert, Pereboom, Derk, and Vargas, Manuel (2007). Four Views on Free Will. Malden, MA: Blackwell Publishing.
  • Fischer, John Martin and Ravizza, Mark. (1992). “When the Will is Free,” in O'Connor (1995), ed., 239–269.
  • ––– (1998) Responsibility and Control. Cambridge: Cambridge University Press.
  • Frankfurt, Harry (1969). “Alternate Possibilities and Moral Responsibility,” Journal of Philosophy 66, 829–39.
  • ––– (1982). “Freedom of the Will and the Concept of a Person,” in Watson (1982), ed., 81–95.
  • ––– (1988). The Importance of What We Care About. Cambridge: Cambridge University Press.
  • ––– (1992). “The Faintest Passion,” Proceedings and Addresses of the American Philosophical Association 66, 5–16.
  • Franklin, Christopher (2009). “Neo-Frankfurtians and Buffer Cases: the New Challenge to the Principle of Alternative Possibilities,” Philosophical Studies, forthcoming.
  • ––– (2010). “Farewell to the Luck (and Mind) Argument,” Philosophical Studies, forthcoming.
  • ––– (forthcoming). “The Problem of Enhanced Control,” Australasian Journal of Philosophy.
  • Ginet, Carl (1990). On Action. Cambridge: Cambridge University Press.
  • ––– (1997). “Freedom, Responsibility, and Agency,” The Journal of Ethics 1, 85–98.
  • ––– (2002) “Reasons Explanations of Action: Causalist versus Noncausalist Accounts,” in Kane, ed., (2002), 386–405.
  • Ginet, Carl and Palmer, David (2010). “On Mele and Robb's Indeterministic Frankfurt-Style Case,” Philosophy and Phenomenological Research 80 (2), 440-446.
  • Goetz, Stewart C. (2002). “Review of O'Connor, Persons and Causes,” Faith and Philosophy 19, 116–20.
  • ––– (2005). “Frankfurt-Style Counterexamples and Begging the Question,” Midwest Studies in Philosophy 29 (1), 83-105.
  • Haji, Ishtiyaque (2004). “Active Control, Agent-Causation, and Free Action,” Philosophical Explorations 7(2), 131-48.
  • ––– (2009). Incompatibilism's Allure. Peterborough, Ontario: Broadview Press.
  • Hiddleston, Eric (2005). “Critical Notice of Timothy O'Connor, Persons and Causes,” Noûs 39 (3), 541–56.
  • Hobbes, Thomas and Bramhall, John (1999) [1655–1658]. Hobbes and Bramhall on Liberty and Necessity, ed. V. Chappell. Cambridge: Cambridge University Press.
  • Honderich, Ted (1988). A Theory of Determinism. Oxford: Oxford University Press.
  • Howard-Snyder, Daniel and Moser, Paul, eds. (2002). Divine Hiddenness: New Essays. Cambridge: Cambridge University Press.
  • Hume, David (1748 /1977). An Enquiry Concerning Human Understanding. Indianapolis: Hackett Publishing.
  • Kane, Robert (1995). “Two Kinds of Incompatibilism,” in O'Connor (1995), ed., 115–150.
  • ––– (1996). The Significance of Free Will. New York: Oxford University Press.
  • Kane, Robert, ed., (2002). The Oxford Handbook of Free Will. New York: Oxford University Press.
  • ––– (2005). A Contemporary Introduction to Free Will. New York: Oxford University Press.
  • Kant, Immanuel (1788 / 1993). Critique of Practical Reason, tr. by Lewis White Beck. Upper Saddle River, NJ: Prentice-Hall Inc.
  • Kapitan, Tomis (2001). “A Master Argument for Compatibilism?” in Kane (2002), 127–157.
  • Kraay, Klaas J. (2010). “The Problem of No Best World,” in Charles Taliaferro and Paul Draper (eds.), A Companion to Philosophy of Religion, 2nd edition. Oxford: Blackwell, 491–99.
  • Kretzmann, Norman (1997). The Metaphysics of Theism: Aquinas's Natural Theology in Summa Contra Gentiles I. Oxford: Clarendon Press.
  • Leibniz, Gottfried (1710 / 1985). Theodicy. LaSalle, IL: Open Court.
  • Levy, Neil (2007). Neuroethics. Cambridge: Cambridge University Press.
  • Levy, Neil and McKenna, Michael (2009). “Recent Work on Free Will and Moral Responsibility,” Philosophy Compass 4(1), 96–133.
  • Libet, Benjamin (2002). “Do We Have Free Will?” in Kane, ed., (2002), 551–564.
  • Lowe, E.J. (2008). Personal Agency: The Metaphysics of Mind and Action. Oxford: Oxford University Press.
  • MacDonald, Scott (1998). “Aquinas's Libertarian Account of Free Will,” Revue Internationale de Philosophie, 2, 309–328.
  • Magill, Kevin (1997). Freedom and Experience. London: MacMillan.
  • McCann, Hugh (1998). The Works of Agency: On Human Action, Will, and Freedom. Ithaca: Cornell University Press.
  • McKenna, Michael (2008). “Frankfurt's Argument Against Alternative Possibilities: Looking Beyond the Examples,” Noûs 42 (4), 770–793.
  • Mele, Alfred (1995). Autonomous Agents (New York: Oxford University Press).
  • ––– (2003). Motivation and Agency. Oxford: Oxford University Press.
  • ––– (2006). Free Will and Luck. Oxford: Oxford University Press.
  • ––– (2009). Effective Intentions: The Power of Conscious Will. Oxford: Oxford University Press.
  • Morris, Thomas (1993). “Perfection and Creation,” in E. Stump, ed. (1993), 234–47.
  • Murray, Michael (1993). “Coercion and the Hiddenness of God,” American Philosophical Quarterly 30, 27–38.
  • ––– (2002). “Deus Absconditus,” in Howard-Snyder and Moser (2002), 62–82.
  • Nozick, Robert (1995). “Choice and Indeterminism,” in O'Connor (1995), ed., 101–14.
  • O'Connor, Timothy (1993). “Indeterminism and Free Agency: Three Recent Views,” Philosophy and Phenomenological Research, 53, 499–526.
  • –––, ed., (1995). Agents, Causes, and Events: Essays on Indeterminism and Free Will. New York: Oxford University Press.
  • ––– (2000). Persons and Causes: The Metaphysics of Free Will. New York: Oxford University Press.
  • ––– (2005). “Freedom With a Human Face,” Midwest Studies in Philosophy 29, 207–227.
  • ––– (2008a). “Agent-Causal Power,” in Toby Handfield (ed.), Dispositions and Causes, Oxford: Clarendon Press, 189-214.
  • ––– (2008b). Theism and Ultimate Explanation: The Necessary Shape of Contingency. Oxford: Blackwell.
  • ––– (2009a). “Degrees of Freedom,” Philosophical Explorations 12 (2), 119-125.
  • ––– (2009b). “Conscious Willing and the Emerging Sciences of Brain and Behavior,” in George F. R. Ellis, Nancey Murphy, and Timothy O'Connor, eds., Downward Causation And The Neurobiology Of Free Will. New York: Springer Publications, 2009, 173-186.
  • ––– (2010). “Agent-Causal Theories of Freedom,” in Robert Kane (ed.), The Oxford Handbook of Free Will, 2nd edition. New York: Oxford University Press, forthcoming.
  • Pasnau, Robert (2002). Thomas Aquinas on Human Nature. Cambridge University Press.
  • Pereboom, Derk (2001). Living Without Free Will. Cambridge: Cambridge University Press.
  • ––– (2004). “Is Our Concept of Agent-Causation Coherent?” Philosophical Topics 32, 275-86.
  • ––– (2005). “Defending Hard Incompatibilism,” Midwest Studies in Philosophy 29, 228-47.
  • –––, ed., (2009). Free Will. Indianapolis: Hackett Publishing.
  • Pettit, Philip (2001). A Theory of Freedom. Oxford: Oxford University Press.
  • Pink, Thomas (2004). Free Will: A Very Short Introduction. Oxford: Oxford University Press.
  • Plato (CW / 1997). Complete Works, ed. J. Cooper. Indianapolis: Hackett Publishing.
  • Quinn, Philip (1983). “Divine Conservation, Continuous Creation, and Human Action,” in A. Freddoso, ed. The Existence and Nature of God. Notre Dame: Notre Dame University Press.
  • Reid, Thomas (1969). Essays on the Active Powers of the Human Mind, ed. B. Brody. Cambridge: MIT Press.
  • Rowe, William (1995). “Two Concepts of Freedom,” in O'Connor (1995), ed. 151–71.
  • ––– (2004). Can God Be Free?. Oxford: Oxford University Press.
  • Sartre, Jean-Paul (1956). Being and Nothingness. New York: Washington Square Press.
  • Schopenhauer, Arthur (1839 / 1999). Prize Essay on the Freedom of the Will, ed. G. Zoller. Cambridge: Cambridge University Press.
  • Schlosser, Markus E. (2008). “Agent-Causation and Agential Control,” Philosophical Explorations 11 (1), 3-21.
  • Duns Scotus, John (1994) [1297–99]. Contingency and Freedom: Lectura I 39, tr. Vos Jaczn et al. Dordrecht: Kluwer Academic Publishers.
  • Shatz, David (1986). “Free Will and the Structure of Motivation,” Midwest Studies in Philosophy 10, 451–482.
  • Smilansky, Saul (2000). Free Will and Illusion. Oxford: Oxford University Press.
  • Speak, Daniel James (2007). “The Impertinence of Frankfurt-Style Argument,” Philosophical Quarterly 57 (226), 76-95.
  • ––– (2005). “Papistry: Another Defense,” Midwest Studies in Philosophy 29 (1), 262-268.
  • Strawson, Galen (1986). Freedom and Belief. Oxford: Clarendon Press.
  • Strawson, Peter (1982). “Freedom and Resentment,” in Watson (1982), ed., 59–80.
  • Stump, Eleonore, ed., (1993). Reasoned Faith. Ithaca: Cornell University Press.
  • ––– (1996). “Persons: Identification and Freedom,” Philosophical Topics 24, 183–214.
  • ––– (2003). Aquinas. London: Routledge.
  • Timpe, Kevin (2006). “The Dialectic Role of the Flickers of Freedom,” Philosophical Studies 131 (2), 337–368.
  • Todd, Patrick and Neal Tognazzini (2008). “A Problem for Guidance Control,” Philosophical Quarterly, 58 (233), 685–92.
  • van Inwagen, Peter (1983). An Essay on Free Will. Oxford: Oxford University Press.
  • ––– (1994). “When the Will is Not Free,” Philosophical Studies, 75, 95–113.
  • ––– (1995). “When Is the Will Free?” in O'Connor (1995), ed., 219–238.
  • ––– (2002). “Free Will Remains a Mystery,” in Kane (2002), 158–179.
  • Wainwright, William (1996). “Jonathan Edwards, William Rowe, and the Necessity of Creation,” in J. Jordan and D. Howard-Snyder, eds., Faith Freedom, and Rationality. Lanham: Rowman and Littlefield, 119–133.
  • Wallace, R. Jay (2003). “Addiction as Defect of the Will: Some Philosophical Reflections,” in Watson, ed., (2003b), 424–452.
  • Watson, Gary (1987). “Free Action and Free Will,” Mind 96, 145–72.
  • –––, ed., (1982b). Free Will. Oxford: Oxford University Press.
  • ––– (2003a). “Free Agency,” in Watson, ed. (2003b).
  • –––, ed., (2003b). Free Will. 2nd ed. Oxford: Oxford University Press.
  • Wegner, Daniel (2002). The Illusion of Conscious Will. Cambridge, MA: MIT Press.
  • Widerker, David (2005). “Agent-Causation and Control,” Faith and Philosophy 22 (1), 87-98.
  • ––– (2006). “Libertarianism and the Philosophical Significance of Frankfurt Scenarios,” Journal of Philosophy 103 (4), 163-187.
  • Widerker, David and McKenna, Michael, eds., (2003). Moral Responsibility and Alternative Possibilities. Aldershot: Ashgate Publishing.
  • Wolf, Susan (1990). Freedom Within Reason. Oxford: Oxford University Press.
