Science includes many principles at least once thought to be laws of nature: Newton’s law of gravitation, his three laws of motion, the ideal gas laws, Mendel’s laws, the laws of supply and demand, and so on. Other regularities important to science were not thought to have this status. These include regularities that, unlike laws, were (or still are) thought by scientists to stand in need of explanation. Examples include the regularity of the ocean tides, the precession of the perihelion of Mercury’s orbit, the photoelectric effect, that the universe is expanding, and so on. Scientists also use laws but not other regularities to sort out what is possible: It is based on their consistency with Einstein’s laws of gravity that cosmologists recognize the possibility that our universe is closed and the possibility that it is open (Maudlin 2007, 7–8). In statistical mechanics, the laws of an underlying physical theory are used to determine the dynamically possible trajectories through the state space of the system (Roberts 2008, 12–16).
Philosophers of science and metaphysicians address various issues about laws, but the basic question is: What is it to be a law? Two influential answers are David Lewis’s systems approach (1973, 1983, 1986, 1994) and David Armstrong’s universals approach (1978, 1983, 1991, 1993). Other treatments include antirealist views (van Fraassen 1989, Giere 1999, Ward 2002, Mumford 2004) and antireductionist views (Carroll 1994 and 2008, Lange 2000 and 2009, Maudlin 2007). Besides the basic question, the recent literature has also focused on
(i) whether laws supervene on matters of fact,
(ii) the role laws play in the problem of induction,
(iii) whether laws involve metaphysical necessity, and
(iv) the role of laws in physics and how that contrasts with the role of laws in the special sciences.
Here are four reasons philosophers examine what it is to be a law of nature: First, as indicated above, laws at least appear to have a central role in scientific practice. Second, laws are important to many other philosophical issues. For example, sparked by the account of counterfactuals defended by Roderick Chisholm (1946, 1955) and Nelson Goodman (1947), and also prompted by Carl Hempel and Paul Oppenheim’s (1948) deductive-nomological model of explanation, philosophers have wondered what makes counterfactual and explanatory claims true, have thought that laws must play some part, and so also have wondered what distinguishes laws from nonlaws. Third, Goodman famously suggested that there is a connection between lawhood and confirmability by an inductive inference. So, some sympathetic to Goodman’s idea come to the problem of laws as a result of their interest in the problem of induction. Fourth, philosophers love a good puzzle. Suppose that everyone here is seated (cf., Langford 1941, 67). Then, trivially, that everyone here is seated is true. Though true, this generalization does not seem to be a law. It is just too accidental. Einstein’s principle that no signals travel faster than light is also a true generalization but, in contrast, it is thought to be a law; it is not nearly so accidental. What makes the difference?
This may not seem like much of a puzzle. That everyone here is seated is spatially restricted in that it is about a specific place; the principle of relativity is not similarly restricted. So, it is easy to think that, unlike laws, accidentally true generalizations are about specific places. But that is not what makes the difference. There are true nonlaws that are not spatially restricted. Consider the unrestricted generalization that all gold spheres are less than one mile in diameter. There are no gold spheres of that size and in all likelihood there never will be, but this is still not a law. There also appear to be spatially restricted generalizations that could express laws. Galileo’s law of free fall is the generalization that, on Earth, free-falling bodies accelerate at a rate of 9.8 meters per second squared. The perplexing nature of the puzzle is clearly revealed when the gold-sphere generalization is paired with a remarkably similar generalization about uranium spheres:
- All gold spheres are less than a mile in diameter.
- All uranium spheres are less than a mile in diameter.
Though the former is not a law, the latter arguably is. The latter is not nearly so accidental as the former, since uranium’s critical mass guarantees that such a large sphere will never exist (van Fraassen 1989, 27). What makes the difference? What makes the former an accidental generalization and the latter a law?
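A back-of-the-envelope calculation makes vivid why the uranium generalization is no accident. The density and critical-mass figures below are rough assumptions introduced here for illustration (they are not from the entry), but any reasonable values yield the same verdict: a mile-wide uranium sphere would exceed the critical mass by many orders of magnitude.

```python
import math

# Illustrative figures (assumed, not taken from the entry):
DENSITY_KG_PER_M3 = 19_000   # approximate density of uranium
CRITICAL_MASS_KG = 50        # rough critical mass of a bare U-235 sphere

MILE_IN_METERS = 1609.34
radius_m = MILE_IN_METERS / 2
volume_m3 = (4 / 3) * math.pi * radius_m ** 3
mass_kg = DENSITY_KG_PER_M3 * volume_m3

print(f"mass of a mile-wide uranium sphere: {mass_kg:.1e} kg")
print(f"ratio to critical mass: {mass_kg / CRITICAL_MASS_KG:.1e}")
```

The mass comes out on the order of 10^13 kg, more than ten orders of magnitude beyond the critical mass, which is why physics itself rules out such a sphere while nothing comparable rules out a gold one.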
One popular answer ties being a law to deductive systems. The idea dates back to John Stuart Mill (1947 [f.p. 1843]), but has been defended in one form or another by Frank Ramsey (1978 [f.p. 1928]), Lewis (1973, 1983, 1986, 1994), John Earman (1984) and Barry Loewer (1996). Deductive systems are individuated by their axioms. The logical consequences of the axioms are the theorems. Some true deductive systems will be stronger than others; some will be simpler than others. These two virtues, strength and simplicity, compete. (It is easy to make a system stronger by sacrificing simplicity: include all the truths as axioms. It is easy to make a system simple by sacrificing strength: have just the axiom that 2 + 2 = 4.) According to Lewis (1973, 73), the laws of nature belong to all the true deductive systems with a best combination of simplicity and strength. So, for example, the thought is that it is a law that all uranium spheres are less than a mile in diameter because it is, arguably, part of the best deductive systems; quantum theory is an excellent theory of our universe and might be part of the best systems, and it is plausible to think that quantum theory plus truths describing the nature of uranium would logically entail that there are no uranium spheres of that size (Loewer 1996, 112). It is doubtful that the generalization that all gold spheres are less than a mile in diameter would be part of the best systems. It could be added as an axiom to any system, but it would bring little or nothing of interest in terms of strength and adding it would sacrifice something in terms of simplicity. (Lewis later made significant revisions to his account in order to address problems involving physical probability. See his 1986 and his 1994.)
Many features of the systems approach are appealing. For one thing, it appears to deal with a challenge posed by vacuous laws. Some laws are vacuously true: Newton’s first law of motion — that all inertial bodies have no acceleration — is a law, even though there are no inertial bodies. But there are also lots of vacuously true nonlaws: all plaid pandas weigh 5 lbs., all unicorns are unmarried, etc. With the systems approach, there is no exclusion of vacuous generalizations from the realm of laws, and yet only those vacuous generalizations that belong to the best systems qualify (cf., Lewis 1986, 123). Furthermore, it is reasonable to think that one goal of scientific theorizing is the formulation of true theories that are well balanced in terms of their simplicity and strength. So, the systems approach seems to underwrite the truism that an aim of science is the discovery of laws (Earman 1978, 180; Loewer 1996, 112). One last aspect of the systems view that is appealing to many (though not all) is that it is in keeping with broadly Humean constraints on a sensible metaphysics. There is no overt appeal to closely related modal concepts (e.g., the counterfactual conditional) and no overt appeal to modality-supplying entities (e.g., universals or God; for the supposed need to appeal to God, see Foster 2004). Indeed, the systems approach is the centerpiece of Lewis’s defense of Humean supervenience, “the doctrine that all there is in the world is a vast mosaic of local matters of particular fact, just one little thing and then another” (1986, ix).
Other aspects of the systems approach have made philosophers wary. (See, especially, Armstrong 1983, 66–73; van Fraassen 1989, 40–64; Carroll 1990, 197–206.) Some argue that this approach will have the untoward consequence that laws are inappropriately mind-dependent in virtue of the account’s appeal to the concepts of simplicity, strength and best balance, concepts whose instantiation seems to depend on cognitive abilities, interests, and purposes. The appeal to simplicity raises further questions stemming from the apparent need for a regimented language to permit reasonable comparisons of the systems. (See Lewis 1983, 367.) More recently, John Roberts questions the systems approach at a point sometimes thought to be a strength of the view: “We have no practice of weighing competing virtues of simplicity and information content for the purpose of choosing one deductive system over others, where all are presumed to be true” (2008, 10). There is the practice of curve-fitting, which involves weighing the competing virtues of simplicity and closeness of fit, but this is a practice that is part of the process of discovering what is true. Tim Maudlin (2007, 16) and Roberts (2008, 23) also charge that the systems approach is ill-suited to rule out widespread and striking regularities as laws, even those that are clearly determined by the initial conditions. That the universe is closed, that entropy generally increases, that the planets of our solar system are co-planar, and others (if true) could be added to any true deductive system, greatly increasing the strength of the system, with only a small cost in terms of simplicity. Interestingly, sometimes the systems view is abandoned because it satisfies the broadly Humean constraints on an account of laws of nature; some argue that what generalizations are laws is not determined by local matters of particular fact. (See Section 4 below.) 
Though Humeans like Lewis generally favor realism over any form of antirealism (Section 5 below), Nora Berenstain and James Ladyman (2012) have argued that scientific realism is incompatible with Humeanism because realism requires a notion of natural necessity not susceptible to Humean analysis.
In the late 1970s, there emerged a competitor for the systems approach and all other Humean attempts to say what it is to be a law. Led by Armstrong (1978, 1983, 1991, 1993), Fred Dretske (1977), and Michael Tooley (1977, 1987), the rival approach appeals to universals to distinguish laws from nonlaws.
Focusing on Armstrong’s development of the view, here is one of his concise statements of the framework characteristic of the universals approach:
Suppose it to be a law that Fs are Gs. F-ness and G-ness are taken to be universals. A certain relation, a relation of non-logical or contingent necessitation, holds between F-ness and G-ness. This state of affairs may be symbolized as ‘N(F,G)’ (1983, 85).
This framework promises to address familiar puzzles and problems: Maybe the difference between the uranium-spheres generalization and the gold-spheres generalization is that being uranium does necessitate being less than one mile in diameter, but being gold does not. Worries about the subjective nature of simplicity, strength and best balance do not emerge; there is no threat of lawhood being mind-dependent so long as necessitation is not mind-dependent. Some (Armstrong 1991, Dretske 1977) think that the framework supports the idea that laws play a special explanatory role in inductive inferences, since a law is not just a universal generalization, but is an entirely different creature — a relation holding between two other universals. The framework is also consistent with lawhood not supervening on local matters of particular fact; the denial of Humean supervenience often accompanies acceptance of the universals approach.
For there truly to be this payoff, however, more has to be said about what N is. This is the problem Bas van Fraassen calls the identification problem. He couples this with a second problem, what he calls the inference problem (1989, 96). The essence of this pair of problems was captured early on by Lewis with his usual flair:
Whatever N may be, I cannot see how it could be absolutely impossible to have N(F,G) and Fa without Ga. (Unless N just is constant conjunction, or constant conjunction plus something else, in which case Armstrong’s theory turns into a form of the regularity theory he rejects.) The mystery is somewhat hidden by Armstrong’s terminology. He uses ‘necessitates’ as a name for the lawmaking universal N; and who would be surprised to hear that if F ‘necessitates’ G and a has F, then a must have G? But I say that N deserves the name of ‘necessitation’ only if, somehow, it really can enter into the requisite necessary connections. It can’t enter into them just by bearing a name, any more than one can have mighty biceps just by being called ‘Armstrong’ (1983, 366).
Basically, there needs to be a specification of what the lawmaking relation is (the identification problem). Then, there needs to be a determination of whether it is suited to the task (the inference problem): Does N’s holding between F and G entail that Fs are Gs? Does its holding support corresponding counterfactuals? Do laws really turn out not to supervene, to be mind-independent, to be explanatory? Armstrong does say more about what his lawmaking relation is. He states in reply to van Fraassen:
It is at this point that, I claim, the Identification problem has been solved. The required relation is the causal relation, … now hypothesized to relate types not tokens (1993, 422).
Questions remain about the nature of this causal relation understood as a relation that relates both token events and universals. (See van Fraassen 1993, 435–437, and Carroll 1994, 170–174.)
Rather than trying to detail all the critical issues that divide the systems approach and the universals approach, we will do better to focus our attention on the especially divisive issue of supervenience. It concerns whether Humean considerations really determine what the laws are. There are some important examples that appear to show that they do not.
Tooley (1977, 669) asks us to suppose that there are ten different kinds of fundamental particles. So, there are fifty-five possible kinds of two-particle interactions. Suppose that fifty-four of these kinds have been studied and fifty-four laws have been discovered. The interaction of X and Y particles has not been studied because conditions are such that they never will interact. Nevertheless, it seems that it might be a law that, when X particles and Y particles interact, P occurs. Similarly, it might be a law that, when X and Y particles interact, Q occurs. There seems to be nothing about the local matters of particular fact in this world that fixes which of these generalizations is a law.
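The count of fifty-five is just the number of unordered pairs of particle kinds, allowing a kind to interact with its own kind: C(10,2) + 10 = 45 + 10 = 55. A quick sketch (purely illustrative) confirms this:

```python
from itertools import combinations_with_replacement

# Ten kinds of fundamental particles, as in Tooley's example.
kinds = range(10)

# Every unordered pair of kinds, including same-kind pairings,
# i.e., every possible kind of two-particle interaction.
interactions = list(combinations_with_replacement(kinds, 2))

print(len(interactions))  # 55
```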
The failure of supervenience suggested by Tooley’s example arises in other cases. Consider the possibility that there is a lone particle traveling through otherwise empty space at a constant velocity of, say, one meter per second. It seems that this might just be a nearly empty Newtonian universe in which it is accidentally true that all bodies have a velocity of one meter per second; it just so happens that there is nothing to alter the particle’s motion. But, it might also be the case that this world is not Newtonian and that it is a law that all bodies have a velocity of one meter per second; it could be that this generalization is not accidental and would have held true even if there were other bodies slamming into the lone particle. (See especially Earman 1986, 100; Lange 2000, 85–90.)
Maudlin presses the case against the Humeans by focusing on the common practice among physicists of considering models of a theory’s laws.
Minkowski space-time, the space-time of Special Relativity, is a model of the field equations of General Relativity (in particular, it is a vacuum solution). So an empty Minkowski space-time is one way the world could be if it is governed by the laws of General Relativity. But is Minkowski space-time a model only of the General Relativistic laws? Of course not! One could, for example, postulate that Special Relativity is the complete and accurate account of space-time structure, and produce another theory of gravitation, which would still have the vacuum Minkowski space-time as a model. So under the assumption that no possible world can be governed by the laws of General Relativity and by a rival theory of gravity, the total physical state of the world cannot always determine the laws (2007, 67).
The suggestion here is that there is the possibility of a matter-less universe with the laws of General Relativity and another with laws of a conflicting theory of gravitation. (For additional examples, see Carroll 1994, 60–80). What Maudlin sees as a consequence of standard scientific reasoning, Humeans will see as an example exposing the absurdity of nonsupervenience.
Humeans contend that the various pairs of so-called possible worlds are not really possible. Sometimes this contention turns on the issue of whether laws govern, sometimes on epistemological or ontological concerns, and sometimes on concerns about how our language works. One objection to the nonsupervenience arguments from the Humean camp comes from Helen Beebee (2000). Her idea is that, if one comes to the debate with the governing conception in mind, one is likely to find the antisupervenience examples convincing, but using this conception to reject Humean analyses of lawhood is to beg the question, or at least to be unconvincing, because it is a conception Humeans reject. (Also see Loewer 1996 and Roberts 1998.) In contrast, Susan Schneider (2007), Barry Ward (2007), and Roberts (2008) are sympathetic to aspects of Humeanism and aspects of the governing conception.
In two papers, Earman and Roberts (2005a and b) first address how to best formulate the thesis of Humean supervenience, then they argue based on skeptical considerations that their brand of Humean supervenience is true. Jonathan Schaffer (2008, 96–97, 94–99) rejects skeptical concerns (also see Carroll 2008, 75–79), but presses an ontological concern to the effect that nonsupervening laws are ungrounded entities (84–85).
Roberts (2008, 357–61) offers an original manner of responding to apparent counterexamples to supervenience. In the lone-particle example reported above, there is a world with the lone particle traveling at one meter per second, though it is not a law that all particles travel at that speed. There is also a world with the lone particle traveling at one meter per second, though it is a law that all particles travel at that speed. For Roberts, this reasoning does not contradict supervenience because of the context sensitivity of the predicate ‘is a law’. Though the sentence ‘It is a law that all particles travel at one meter per second’ is (i) true relative to one context/world pair and (ii) false relative to another context/world pair, this difference in truth-value could merely be the result of a difference between the two contexts.
Without going into much detail about Roberts’ metatheoretic account of lawhood, the idea is this: consider a possible world w in which there exists only a single particle traveling at constant velocity throughout all of history, and a context in which the salient theory is, say, Newtonian Mechanics. Relative to that context/world pair, ‘It is a law that all particles have a constant velocity of one meter per second’ is true just in case the referent of the ‘that’ clause plays the law role in the salient theory, which it doesn’t. It might play the law role relative to some other theory, but this would be bound to be a different context, because the salient theory would have to be different. According to Roberts, a single generalization cannot both play the law role and also not play the law role relative to a single theory, and so a different salient theory, and so a different context, is required for ‘It is a law that all bodies travel at one meter per second’ to be true.
What is enticing about this reply is that it does not reject any intuitive claim about the laws in the various possible worlds (Roberts 2008, 360). The antisupervenience judgments about what the laws are remain reasonable claims given the way the contexts are; it is just that there is a failure to recognize the influence of context. So, for example, Maudlin’s so-called two possibilities would be seen by Roberts as descriptions of a single possibility made relative to two contexts with different salient theories: General Relativity and some rival theory of gravity. (Parallel points could be made about Tooley’s examples involving the ten different kinds of fundamental particles.) The key here is the context sensitivity that Roberts builds into the truth conditions of lawhood sentences. Other views that take lawhood sentences to be context sensitive might also be able to avail themselves of Roberts’ way of challenging the antisupervenience examples.
The majority of contemporary philosophers are realists about laws; they believe that some reports of what the laws are succeed in describing reality. There are, however, some antirealists who disagree.
For example, van Fraassen, Ronald Giere, and also Stephen Mumford believe that there are no laws. Van Fraassen finds support for his view in the problems facing accounts like Lewis’s and Armstrong’s, and the perceived failure of Armstrong and others to describe an adequate epistemology that permits rational belief in laws (1989, 130 and 180–181). Giere appeals to the origins of the use of the concept of law in the history of science (1999 [f.p. 1995], 86–90) and contends that the generalizations often described as laws are not in fact true (90–91). Mumford’s reasons are more metaphysical; he maintains that, in order to govern, laws must be external to the properties they govern, but, to be external in this way, the governed properties must lack proper identity conditions (2004, 144–145). Others adopt a subtly different sort of antirealism. Though they will utter sentences like ‘It is a law that no signals travel faster than light’, they are antirealists in virtue of thinking that such sentences are not (purely) fact-stating. Whether this Einsteinian generalization is a law is not a fact about the universe; it is not something out there waiting to be discovered. Reports of what are laws only project a certain attitude (in addition to belief) about the contained generalizations. For example, Ward (2002, 197) takes the attitude to be one regarding the suitability of the generalization for prediction and explanation. (Also see Blackburn 1984 and 1986.)
The challenge for antirealism is to minimize the havoc a lawless reality would wreak on our folk and scientific practices. Regarding science, the examples and uses of laws described at the start of this entry attest to ‘law’ having a visible role in science that scientists seem prepared to take as factive. Regarding our folk practices, though ‘law’ is not often part of run-of-the-mill conversations, an antirealism about lawhood would still have wide-ranging consequences. This is due to lawhood’s ties to other concepts, especially the nomic ones, concepts like the counterfactual conditional, dispositions, and causation. For example, it seems that, for there to be any interesting counterfactual truths, there must be at least one law of nature. Would an ordinary match in ordinary conditions light if struck? It seems it would, but only because we presume nature to be regular in certain ways. We think this counterfactual is true because we believe there are laws. Were there no laws, it would not be the case that, if the match were struck, it would light. As a result, it would also not be the case that the match was disposed to ignite, nor the case that striking the match would cause it to light.
Could an antirealist deflect this challenge by denying the connections between lawhood and other concepts? Would this allow one to be an antirealist about laws and still be a realist about, say, counterfactuals? The danger lurking here is that the resulting position seems bound to be ad hoc. Concepts like the counterfactual conditional, dispositions, and causation exhibit many of the same puzzling features that lawhood does; there are parallel philosophical questions and puzzles about these concepts. It is hard to see what would warrant antirealism about lawhood, but not the other nomic concepts.
John Carroll (1994, 2008), Marc Lange (2000, 2009), and Maudlin (2007) advocate antireductionist, antisupervenience views. (Also see Ismael 2015 and Woodward 1992.) Regarding the question of what it is to be a law, they reject the answers given by Humeans like Lewis, they deny Humean supervenience, and they see no advantage in an appeal to universals. They reject all attempts to say what it is to be a law that do not appeal to nomic concepts. Yet they still believe that there really are laws of nature; they are not antirealists.
Maudlin (2007, 17–18) takes lawhood to be a primitive status and laws to be ontological primitives — fundamental entities in our ontology. Then his project is to show what work laws can do, defining physical possibility in terms of laws and sketching law-based accounts of the counterfactual conditional and of explanation.
Carroll (2008) sketches an analysis of lawhood in terms of causal/explanatory concepts. The starting point is the intuition that laws are not accidental, that they are not coincidences. Not being a coincidence, however, is not all there is to being a law. For example, it might be true that there are no gold spheres greater than 1000 miles in diameter because there is so little gold in the universe. In that case, strictly speaking, that generalization would be true, suitably general, and not a coincidence. Nevertheless, that would not be a law. Arguably, what blocks this generalization from being a law is that something in nature — really, an initial condition of the universe, the limited amount of gold — accounts for the generalization. Contrast this with the law that inertial bodies have no acceleration. With this and other laws, it seems that it holds because of nature (itself).
Lange’s (2000, 2009) treatment includes an account of what it is to be a law in terms of a counterfactual notion of stability. The overall account is intricate, but the basic idea is this: Call a logically closed set of true propositions stable if and only if the members of the set would remain true given any antecedent that is consistent with the set itself. So, for example, the set of logical truths is trivially stable, because logical truths would be true no matter what. A set that included the accidental generalization that all the people in the room are sitting but is consistent with the proposition that someone in the room shouts ‘Fire!’ would not be a stable set; if someone were to shout ‘Fire!’, then someone in the room would not be sitting. Lange argues (2009, 34) that no stable set of sub-nomic facts — except maybe the set of all truths — contains an accidental truth. “By identifying the laws as the members of at least one non-maximal stable set, we discover how a sub-nomic fact’s lawhood is fixed by the sub-nomic facts and the subjunctive facts about them” (2009, 43).
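The core condition can be stated a bit more formally. Writing the subjunctive conditional as a boxed arrow, a simplified, first-order rendering of the basic idea is the following (Lange’s full account is more intricate, covering nested subjunctives as well):

```latex
\[
\Gamma \text{ is stable} \;\iff\;
\forall p \in \Gamma \;\; \forall q \,
\big( q \text{ is consistent with } \Gamma
\;\Rightarrow\; q \mathbin{\Box\!\!\to} p \big)
\]
```

That is, every member of a stable set would still have been true under any antecedent consistent with the set itself.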
Attempts to undermine antireductionism often include challenges to antisupervenience like those mentioned at the end of Section 4. Tyler Hildebrand (2013) challenges Carroll’s and Maudlin’s antireductionisms based on the failure of primitive laws to explain the uniformity of nature. A symposium on Lange’s (2009) Laws and Lawmakers includes, along with Lange’s replies, a variety of criticisms from Carroll, Loewer, and James Woodward. (See Lange et al., 2011.) Heather Demarest (2012) raises three challenges to Lange’s antireductionism, all centered on whether subjunctives are suited to play the role of lawmakers.
Goodman thought that the difference between laws of nature and accidental truths was linked inextricably with the problem of induction. In his “The New Riddle of Induction” (1983 [f.p. 1954], 73), Goodman says,
Only a statement that is lawlike — regardless of its truth or falsity or its scientific importance — is capable of receiving confirmation from an instance of it; accidental statements are not.
(Terminology: P is lawlike only if P is a law if true.) Goodman claims that, if a generalization is accidental (and so not lawlike), then it is not capable of receiving confirmation from one of its instances.
This has prompted much discussion, including some challenges. For example, suppose there are ten flips of a fair coin, and that the first nine land heads (Dretske 1977, 256–257). The first nine instances — at least in a sense — confirm the generalization that all the flips will land heads; the probability of that generalization is raised from (.5)^10 up to .5. But this generalization is not lawlike; if true, it is not a law. It is standard to respond to such an example by arguing that this is not the pertinent notion of confirmation (that it is mere “content-cutting”) and by suggesting that what does require lawlikeness is confirmation of the generalization’s unexamined instances. Notice that, in the coin case, the probability that the tenth flip will land heads does not change after the first nine flips land heads. There are, however, examples that generate problems for this idea too.
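The arithmetic behind the two notions of confirmation can be checked directly. Assuming a fair coin and independent flips, as in Dretske’s example, nine heads raise the probability of the whole generalization while leaving the unexamined tenth flip untouched:

```python
# Probability that all ten flips land heads, before any evidence:
p_all_ten_prior = 0.5 ** 10

# After the first nine land heads, only the tenth flip remains open,
# so the generalization's probability rises to 0.5:
p_all_ten_after_nine = 0.5

# By independence, the evidence of nine heads leaves the tenth
# flip's own probability unchanged:
p_tenth_heads = 0.5

print(p_all_ten_prior)       # 0.0009765625
print(p_all_ten_after_nine)  # 0.5
```

The first figure illustrates “content-cutting” confirmation of the generalization; the unchanged value for the tenth flip illustrates why its unexamined instances are not confirmed.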
Suppose the room contains one hundred men and suppose you ask fifty of them whether they are third sons and they reply that they are; surely it would be reasonable to at least increase somewhat your expectation that the next one you ask will also be a third son. (Jackson and Pargetter 1980, 423)
It does no good to revise the claim to say that no generalization believed to be accidental is capable of confirmation. About the third-son case, one would know that the generalization, even if true, would not be a law. The discussion continues. Frank Jackson and Robert Pargetter have proposed an alternative connection between confirmation and laws on which certain counterfactual truths must hold: observation of As that are F-and-B confirms that all non-F As are Bs only if the As would still have been both A and B if they had not been F. (This suggestion is criticized by Elliott Sober 1988, 97–98.) Lange (2000, 111–142) uses a different strategy. He tries to refine further the relevant notion of confirmation, characterizing what he takes to be an intuitive notion of inductive confirmation, and then contends that only generalizations that are not believed not to be lawlike can be (in his sense) inductively confirmed.
Sometimes the idea that laws have a special role to play in induction serves as the starting point for a criticism of Humean analyses. Dretske (1977, 261–262) and Armstrong (1983, 52–59, and 1991) adopt a model of inductive inference that involves an inference to the best explanation. (Also see Foster 1983 and 2004.) On its simplest construal, the model describes a pattern that begins with an observation of instances of a generalization, includes an inference to the corresponding law (this is the inference to the best explanation), and concludes with an inference to the generalization itself or to its unobserved instances. The complaint lodged against Humeans is that, on their view of what laws are, laws are not suited to explain their instances and so cannot sustain the required inference to the best explanation.
This is an area where work on laws needs to be done. Armstrong and Dretske make substantive claims about what can and can’t be instance-confirmed: roughly, Humean laws can’t, laws-as-universals can. But, at the very least, these claims cannot be quite right. Humean laws can’t? As the discussion above illustrates, Sober, Lange and others have argued that even generalizations known to be accidental can be confirmed by their instances. Dretske and Armstrong need some plausible and suitably strong premise connecting lawhood to confirmability, and it is not clear that there is one to be had. Here is the basic problem: As many authors have noticed (e.g., Sober 1988, 98; van Fraassen 1987, 255), the confirmation of a hypothesis or its unexamined instances will always be sensitive to what background beliefs are in place, so much so that, with background beliefs of the right sort, just about anything can be confirmed irrespective of its status as a law or whether it is lawlike. Thus, stating a plausible principle describing the connection between laws and the problem of induction will be difficult.
Philosophers have generally held that some contingent truths are (or could be) laws of nature. Furthermore, they have thought that, if it is a law that all Fs are Gs, then there need not be any (metaphysically) necessary connection between F-ness and G-ness, that it is (metaphysically) possible that something be F without being G. For example, any possible world that, as a matter of law, obeys the general principles of Newtonian physics is a world in which Newton’s first law is true, and a world containing accelerating inertial bodies is a world in which Newton’s first law is false. The latter world is also a world where inertia is instantiated but does not necessitate zero acceleration. Some necessitarians, however, hold that all laws are necessary truths. (See Shoemaker 1980 and 1998, Swoyer 1982, Fales 1990, Bird 2005. See Vetter 2012 for criticism of Bird 2005 from within the dispositional essentialist camp.) Others have held something only slightly different. Maintaining that some laws are singular statements about universals, they allow that some laws are contingently true. So, on this view, an F-ness/G-ness law could be false if F-ness does not exist. Still, this difference is minor. These authors think that, for there to be an F-ness/G-ness law, it must be necessarily true that all Fs are Gs. (See Tweedale 1984, Bigelow, Ellis, and Lierse 1992, Ellis and Lierse 1994, and Ellis 2001, 203–228; 2009, 51–72.)
Two reasons can be given for believing that being a law does not depend on any necessary connection between properties. The first reason is the conceivability of its being a law in one possible world that all Fs are Gs even though there is another world with an F that is not G. The second is that there are laws that can be discovered only in an a posteriori manner. If necessity is always associated with laws of nature, then it is not clear why scientists cannot always get by with a priori methods. Naturally, these two reasons are often challenged. The necessitarians argue that conceivability is not a guide to possibility. They also appeal to Saul Kripke’s (1972) arguments meant to reveal certain a posteriori necessary truths in order to argue that the a posteriori nature of some laws does not prevent their lawhood from requiring a necessary connection between properties. In further support of their own view, the necessitarians argue that their position is a consequence of their favored theory of dispositions, according to which dispositions have their causal powers essentially. So, for example, on this theory, charge has as part of its essence the power to repel like charges. Laws, then, are entailed by the essences of dispositions (cf., Bird 2005, 356). As necessitarians see it, it is also a virtue of their position that they can explain why laws are counterfactual-supporting: they support counterfactuals in the same way that other necessary truths do (Swoyer 1982, 209; Fales 1990, 85–87).
The primary worry for necessitarians concerns their ability to sustain their dismissals of the traditional reasons for thinking that some laws are contingent. The problem (cf., Sidelle 2002, 311) is that they too make distinctions between necessary truths and contingent ones, and even seem to rely on considerations of conceivability to do so. Prima facie, there is nothing especially suspicious about the judgment that it is possible that an object travel faster than light. How is it any worse than the judgment that it is possible that it is raining in Paris? Another issue for necessitarians is whether their essentialism regarding dispositions can sustain all the counterfactuals that are apparently supported by laws of nature (Lange 2004).
Two separate (but related) questions have received much recent attention in the philosophical literature surrounding laws. Neither has much to do with what it is to be a law. Instead, they have to do with the nature of the generalizations scientists try to discover. First: Does any science try to discover exceptionless regularities in its attempt to discover laws? Second: Even if one science — fundamental physics — does, do others?
Philosophers draw a distinction between strict generalizations and ceteris-paribus generalizations. The contrast is supposed to be between universal generalizations of the sort discussed above (e.g., that all inertial bodies have no acceleration) and seemingly less formal generalizations like the claim that, other things being equal, smoking causes cancer. The idea is that the former would be contradicted by a single counterinstance, say, one accelerating inertial body, whereas the latter is consistent with there being one smoker who never gets cancer. Though in theory this distinction is easy enough to understand, in practice it is often difficult to tell strict from ceteris-paribus generalizations. This is because many philosophers think that many sentences uttered without an explicit ceteris-paribus clause nonetheless implicitly include one.
For the most part, philosophers have thought that if scientists have discovered any exceptionless regularities that are laws, they have done so at the level of fundamental physics. A few philosophers, however, are doubtful that there are exceptionless regularities even at this basic level. For example, Nancy Cartwright has argued that the descriptive and the explanatory aspects of laws conflict. “Rendered as descriptions of fact, they are false; amended to be true, they lose their fundamental explanatory force” (1980, 75). Consider Newton’s gravitational principle, F = Gmm′/r². Properly understood, according to Cartwright, it says that for any two bodies the force between them is Gmm′/r². But if that is what the law says, then the law is not an exceptionless regularity. This is because the force between two bodies is influenced by properties other than their masses and the distance between them, such as the charge of the two bodies as described by Coulomb’s law. The statement of the gravitational principle can be amended to make it true, but, according to Cartwright, at least on certain standard ways of amending it, this would strip it of its explanatory power. For example, if the principle is taken to hold only that F = Gmm′/r² if there are no forces other than gravitational forces at work, then though it would be true it would apply only in idealized circumstances. Lange (1993) uses a different example to make a similar point. Consider a standard expression of the law of thermal expansion: ‘Whenever the temperature of a metal bar of length L0 changes by ΔT, the length of the bar changes by ΔL = kL0ΔT,’ where k is a constant, the thermal expansion coefficient of the metal. If this expression were used to express the strict generalization straightforwardly suggested by its grammar, then such an utterance would be false, since the length of a bar does not change in the way described in cases where someone is hammering on the ends of the bar.
It looks like the law will require provisos, but so many that the only apparent way of taking into consideration all the required provisos would be with something like a ceteris-paribus clause. Then the concern becomes that the statement would be empty. Because of the difficulty of stating plausible truth conditions for ceteris-paribus sentences, it is feared that ‘Other things being equal, ΔL = kL0ΔT’ could only mean ‘ΔL = kL0ΔT provided that ΔL = kL0ΔT.’
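To make the worry about provisos concrete, here is a minimal sketch (not from the text; all names, the function, and the numerical values are hypothetical illustrations) of the thermal expansion law ΔL = kL0ΔT with one explicit proviso treated as a precondition rather than as an open-ended ceteris-paribus clause:

```python
# Hypothetical sketch: the thermal expansion law ΔL = k * L0 * ΔT with a
# single explicit proviso ("no external stresses") as a precondition.
# The coefficient below is a typical handbook value for steel; all numbers
# are illustrative only.

STEEL_K = 1.2e-5  # thermal expansion coefficient, per kelvin (approximate)

def length_change(k, l0, delta_t, external_stress=False):
    """Return ΔL = k * L0 * ΔT, provided no external stress acts on the bar."""
    if external_stress:
        # The proviso-qualified law is simply silent about stressed bars,
        # e.g., one being hammered on while heated.
        raise ValueError("proviso violated: law does not apply under external stress")
    return k * l0 * delta_t

# A 1 m steel bar heated by 100 K expands by about 1.2 mm:
delta_l = length_change(STEEL_K, 1.0, 100.0)  # ≈ 0.0012 m
```

The sketch only dramatizes the dilemma: a single stated proviso keeps the claim contentful, whereas replacing the `external_stress` check with an unanalyzed "other things being equal" condition would threaten to make the law apply exactly when it applies.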
Even those who agree with the arguments of Cartwright and Lange sometimes disagree about what ultimately the arguments say about laws. Cartwright believes that the true laws are not exceptionless regularities, but instead are statements that describe causal powers. So construed, they turn out to be both true and explanatory. Lange ends up holding that there are propositions properly adopted as laws, though in doing so one need not also believe any exceptionless regularity; there need not be one. Giere (1999) can usefully be interpreted as agreeing with Cartwright’s basic arguments but insisting that law-statements don’t have implicit provisos or implicit ceteris-paribus clauses. So, he concludes that there are no laws.
Earman and Roberts hold that there are exceptionless and lawful regularities. More precisely, they argue that scientists doing fundamental physics do attempt to state strict generalizations that are such that they would be strict laws if they were true:
Our claim is only that … typical theories from fundamental physics are such that if they were true, there would be precise proviso free laws. For example, Einstein’s gravitational field law asserts — without equivocation, qualification, proviso, ceteris paribus clause — that the Ricci curvature tensor of spacetime is proportional to the total stress-energy tensor for matter-energy; the relativistic version of Maxwell’s laws of electromagnetism for charge-free flat spacetime asserts — without qualification or proviso — that the curl of the E field is proportional to the partial time derivative, etc. (1999, 446).
About Cartwright’s gravitational example, they think (473, fn. 14) that a plausible understanding of the gravitational principle is as describing only the gravitational force between the two massive bodies. (Cartwright argues that there is no such component force and so thinks such an interpretation would be false. Earman and Roberts disagree.) About Lange’s example, they think the law should be understood as having the single proviso that there be no external stresses on the metal bar (461). In any case, much more would need to be said to establish that all the apparently strict and explanatory generalizations that have been or will be stated by physicists have turned or will turn out to be false. (Earman et al. 2003 includes more recent papers by both Cartwright and Lange, and also many other papers on ceteris-paribus laws.)
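Cartwright’s composition-of-forces point can be illustrated with a toy calculation (the functions and values below are hypothetical illustrations, not anything from the text): for two small, like-charged masses, the Coulomb component force swamps the gravitational component, so the strict reading on which "the force between them is Gmm′/r²" comes out false, while the component reading Earman and Roberts favor remains available.

```python
# Toy calculation (illustrative values only): for two charged bodies, the
# net force is not given by Newton's gravitational principle alone, since
# Coulomb's law also contributes a component force.

G = 6.674e-11  # gravitational constant, N·m²/kg²
K = 8.988e9    # Coulomb constant, N·m²/C²

def gravitational_force(m1, m2, r):
    """Component force from Newton's principle, F = G*m*m′/r² (attractive)."""
    return G * m1 * m2 / r**2

def coulomb_force(q1, q2, r):
    """Component force from Coulomb's law (repulsive for like charges)."""
    return K * q1 * q2 / r**2

# Two 1 kg bodies, each carrying a like charge of one microcoulomb, 1 m apart:
f_grav = gravitational_force(1.0, 1.0, 1.0)  # ≈ 6.7e-11 N, attractive
f_coul = coulomb_force(1e-6, 1e-6, 1.0)      # ≈ 9.0e-3 N, repulsive
# The Coulomb component exceeds the gravitational one by roughly eight
# orders of magnitude, so Gmm′/r² alone misdescribes the net force here.
```

On Earman and Roberts’s reading, `gravitational_force` describes one real component of the interaction; on Cartwright’s, there is no such component force for it to describe.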
Supposing that physicists do try to discover exceptionless regularities, and even supposing that our physicists will sometimes be successful, there is a further question of whether it is a goal of any science other than fundamental physics — any so-called special science — to discover exceptionless regularities and whether these scientists have any hope of succeeding. Consider an economic law of supply and demand that says that, when demand increases and supply is held fixed, price increases. Notice that in some places the price of gasoline has remained the same despite increased demand and a fixed supply, because the price of gasoline was government-regulated. It appears that the law has to be understood as having a ceteris-paribus clause in order for it to be true. This problem is a very general one. As Jerry Fodor (1989, 78) has pointed out, in virtue of being stated in the vocabulary of a special science, it is very likely that there will be limiting conditions — especially underlying physical conditions — that will undermine any interesting strict generalization of the special sciences, conditions that themselves could not be described in the special-science vocabulary. Donald Davidson prompted much of the recent interest in special-science laws with his “Mental Events” (1980 [f.p. 1970], 207–225). He gave an argument specifically directed against the possibility of strict psycho-physical laws. More importantly, he made the suggestion that the absence of such laws may be relevant to whether mental events ever cause physical events. This prompted a slew of papers dealing with the problem of reconciling the absence of strict special-science laws with the reality of mental causation (e.g., Loewer and Lepore 1987 and 1989, Fodor 1989, Schiffer 1991, Pietroski and Rey 1995).
Progress on the problem of provisos depends on distinguishing three basic issues. First, there is the question of what it is to be a law, which in essence is the search for a necessarily true completion of: “P is a law if and only if …”. Obviously, to be a true completion, it must hold for all P, whether P is a strict generalization or a ceteris-paribus one. Second, there is the need to determine the truth conditions of the generalization sentences used by scientists. Third, there is the a posteriori and scientific question of which generalizations expressed by those sentences are true. The second of these issues is where the action needs to be.
On this score, it is striking how little attention is given to the possible effects of context. Mightn’t it be that, when the economist utters a certain strict generalization sentence in an “economic setting” (say, in an economics textbook or at an economics conference), context-sensitive considerations affecting its truth conditions will have it turn out that the utterance is true? This might be the case despite the fact that the same sentence uttered in a different context (say, in a discussion among fundamental physicists or better yet in a philosophical discussion of laws) would result in a clearly false utterance. These changing truth conditions might be the result of something as plain as a contextual shift in the domain of quantification or perhaps something less obvious. Whatever it is, the important point is that this shift could be a function of nothing more than the linguistic meaning of the sentence and familiar rules of interpretation (e.g., the rule of accommodation).
Consider a situation where an engineering professor utters, “When a metal bar is heated, the change in its length is proportional to the change in its temperature”, and suppose a student offers, “Not when someone is hammering on both ends of the bar”. Has the student shown that the teacher’s utterance was false? Maybe not. Notice that the student comes off sounding a bit insolent. In all likelihood, such an unusual situation as someone hammering on both ends of a heated bar would not have been in play when the professor said what he did. In fact, the reason the student comes off sounding insolent is that he should have known his example was irrelevant. Notice that the professor’s sentence needn’t include some implicit ceteris-paribus clause in order for his utterance to be true; as this example illustrates, in ordinary conversations, plain old strict generalization sentences are not always used to cover the full range of actual cases. Indeed, they are rarely used in this way.
If special scientists do make true utterances of generalization sentences (sometimes ceteris-paribus generalization sentences, sometimes not), then apparently nothing stands in the way of them uttering true special-science lawhood sentences. The issue here has been the truth of special-science generalizations, not any other requirements of lawhood.
How will matters progress? How can philosophy advance beyond the current disputes about laws of nature? Five issues stand out as especially interesting and important. The first concerns the need for further work on whether laws govern the universe and how that affects our understanding of lawhood. The second concerns whether lawhood is part of the content of scientific theories. This is a question often asked about causation but less frequently addressed about lawhood. Roberts offers an analogy in support of the thought that it is not:
It is a postulate of Euclidean geometry that two points determine a line. But it is not part of the content of Euclidean geometry that this proposition is a postulate. Euclidean geometry is not a theory about postulates; it is a theory about points, lines, and planes … (2008, 92).
Roberts draws the conclusion that lawhood is not part of scientific theories and goes on to describe what he takes the role of lawhood in science to be. This may be a plausible first step toward understanding the absence of ‘law’ and some other nomic terms from the formal statements of scientific theories. The third issue is whether there are any contingent laws of nature. Necessitarians continue to work feverishly on filling in their view, while Humeans and others pay relatively little attention to what they are up to; new work needs to explain the source of the underlying commitments that divide these camps. Fourth, though the issue goes back at least to Armstrong (1983, 40), there has been a recent flurry of publications on the extent to which laws of various sorts (e.g., Humean vs. necessitarian) explain. (See Loewer 1996 and 2012, Lange 2009b and 2013, Hildebrand 2013 and 2014, Marshall 2015, and Miller 2015.) Finally, more attention needs to be paid to the language used to report what the laws are and the language used to express the laws themselves. It is clear that recent disputes about generalizations in physics and the special sciences turn on precisely these matters, but exploring them may also pay dividends on central matters regarding ontology, realism vs. antirealism, and supervenience.
First published Tue Apr 29, 2003; substantive revision Tue Aug 2, 2016