The more hazardous or unpredictable the work we do, the more magical the precautions we take. Sailors (though their touch brings luck to others) are hemmed in by objects of ill fortune: stormy petrels, crossed eyes, eggs, rabbits, swans. Fishermen dare not mention ministers, wash scales from their boats, or utter the true name of the Red Fish. Baseball players put their right shoes on first, and actors must never wish each other luck or mention Macbeth. Do you walk under ladders? Probably not without a feeling of defiance, at least.
Sense and superstition are not entirely separate kingdoms, though, and not all traditional mitigators of misfortune are matters of omen and taboo. Some are intelligent anticipations of risk—such as the practice among many hunter-gatherer societies of leaving behind a little of all found treasures, from berries to buffalo. This is effectively insurance: paying a premium into an environmental policy to tide you through the bad times.
The other traditional form of insurance is having many children. “Blessed is he who hath his quiver full of them: they shall not be ashamed, but they shall speak with the enemies in the gate,” says the Psalmist (euphemistically describing a practice still, sadly, current in his country). Each child is a separate policy, diversifying our genetic risk over time and space, increasing the chances our line will survive any one disaster. Somalis grade and name their famines by how much of this investment must be written off: “Leave the Baby” is the first degree, followed by “Leave the Children,” “Leave the Wife,” and “Leave the Animals.” The logic—and the cold-bloodedness—parallels the advice a consultant would give a failing company: last hire/first fire, slash the workforce, default on the backers, shut the plant.
In the United States, non-life insurance claims average close to a billion dollars every day, and cover the whole range of human misfortune: the predictable winds of the Gulf of Mexico and the uncertain ground of California; Nebraska hailstones the size of golf balls; roaring Arizona brushfires; defaulting New York debtors. Physical, moral, or financial collapse, explosion, subsidence, combustion, consumption, and rot—no matter what the disaster, you’ll find a company somewhere willing to bet it happens less often than you think it does.
That bet is a curious one when reduced to its essentials: a parceling out of the world’s misfortune into individual bags, one of which you can buy. Take out fire insurance and your premium is, in theory, your fair share of the loss from all the world’s insured fires this year. In 2001, that total loss was roughly $36 billion. Like all global numbers, it’s so large as to be meaningless; but it is the equivalent of the entire gross domestic product of Ecuador having been blasted to smoking ruin in a single fireball.
Seen in that blazing light, your premium might appear relatively small—especially when you know that it also includes your share of the salaries of thousands of sleek executives and rent on impressive offices in a host of busy cities. So if the ratio of your premium to the value of your house is really an accurate reflection of the probability of your losing everything in a fire, perhaps the world, seen as a whole, is less dangerous than it seems. This apparent discrepancy between premium and loss reflects a deep, old, and natural psychological imbalance. We think we are more prone to misfortune than simple probability suggests—because the evil thing that happens to us is intrinsically worse than the same thing happening to someone else. “Who has a sorrow like unto my sorrow?” When misfortune strikes me or those I love, it ceases to be a statistic and becomes a unique tragedy, too imaginable in all its terrible detail. It is no longer Chance—it is Destiny.
Put in purely probabilistic terms, insurance is no more than the even redistribution of risk: dividing unbearable trouble into bearable doses. In legal terms, it is the substitution of corporate responsibility for personal loss. In emotional terms, though, it will always appear as something more complicated, which may explain the typical grandiloquence of its advocates. In the 1911 edition of the Encyclopaedia Britannica, the contributors Charlton Lewis and Thomas Ingram wrote:
The direct contribution of insurance is made not in visible wealth, but in the intangible and immeasurable forces of character on which civilization itself is founded. It has done more than all gifts of impulsive charity to foster a sense of human brotherhood and of common interests. It has done more than all repressive legislation to destroy the gambling spirit. It is impossible to conceive of our civilization in its full vigour and progressive power without this principle which unites the fundamental law of practical economy, that he best serves humanity who best serves himself, with the golden rule of religion, “Bear ye one another’s burdens.”
If we strip away the Edwardian veneer and look at the solid woodwork underneath, what do we see? Laplace’s 8th, 9th and 10th Principles and, binding them together, Bernoulli’s Weak Law of Large Numbers.
Like their near-contemporaries the Bach family, the Bernoullis were a constellation of professional talent: twelve made significant discoveries in mathematics and no fewer than five worked on probability. Staring out, ruffed and ringleted, from his portrait of 1687, Jakob Bernoulli looks self-assured, even arrogant. But there is also something in that flat eye and downward-turning mouth of the “bilious and melancholly temper” mentioned by his biographer. Bernoulli spent his life as a professor at Basel; after his death, his papers revealed he had been puzzling for twenty years over the problem of uncertainty, or a posteriori probability—that is, how to assign a likelihood to a future event purely on the basis of having observed past events. For Pascal, probability had been an a priori game, where we knew the rules if not the flow of play: all those coin tosses and card turnings expressed simple, preexisting axioms recognized by us, the players. But what if, as in most natural sciences, we do not know the rules? How and when do repeated observations or experiments, each with its own degree of circumstantial error, tell us whether Nature is playing with a full deck or a fair coin?
Bernoulli’s was an appropriate worry for a century that was making the seismic shift from science as a chain of logical deductions based on principle, to science as a body of conclusions based on observation. Pascal’s a priori vision of chance was, like his theology, axiomatic, eternal, true before and after any phenomena—and therefore ultimately sterile. Even in the most rule-constrained situations in real life, however, knowing the rules is rarely enough.
Clearly, probability needed a way to work with the facts, not just the rules: to make sense of things as and after they happen. To believe that truth simply arises through repeated observation is to fall into a difficulty so old that its earliest statement was bequeathed by Heraclitus to the temple of Artemis at Ephesus more than two thousand years ago: “You cannot step into the same river twice.” Life is flow, life is change; the fact that something has occurred cannot itself guarantee that it will happen again. As the Scottish skeptic David Hume insisted, the sun’s having risen every day until now makes no difference whatever to the question of whether it will rise tomorrow; nature simply does not operate on the principle of “what I tell you three times is true.”
Rules, however beautiful, do not allow us to conclude from facts; observation, however meticulous, does not in itself ensure truth. So are we at a dead end? Bernoulli’s Ars Conjectandi—The Art of Hypothesizing—sets up arrows toward a way out. His first point was that for any given phenomenon, our uncertainty about it decreases as the number of our observations of it increases.
This is actually more subtle than it appears. Bernoulli noticed that the more observations we make, the less likely it is that any one of them would be the exact thing we are looking for: shoot a thousand times at a bull’s-eye, and you greatly increase the number of shots that are near but not in it. What repeated observation actually does is refine our opinion, offering readings that fall within progressively smaller and smaller ranges of error: If you meet five people at random, the proportion of the sexes cannot be more even than 3 to 2, a 10 percent margin of error; meet a thousand and you can reduce your expected error to, say, 490 to 510: a 1 percent margin.
As so often happens in mathematics, a convenient re-statement of a problem brings us suddenly up against the deepest questions of knowledge. Instinctively, we want to know what the answer—the ratio, the likelihood—really is. But no matter how carefully we set up our experiment, we know that repeated observation never reveals absolute truth. If, however, we change the problem from “What is it really?” to “How wrong about it can I bear to be?”—from God’s truth to our own fallibility—Bernoulli has the answer. Here it is:
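P(|X/N − p| < ε) > c × P(|X/N − p| ≥ ε)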
This formula, like all others, is a kind of bouillon cube, the result of intense, progressive evaporation of a larger, diffuse mix of thought. Like the cube, it can be very useful—but it isn’t intended to be consumed raw. You want to determine an unknown proportion p: say, the proportion of the number of houses that actually burn down in a given year to the total number of houses. This law says that you can specify a number of observations, N, for which the likelihood P that the difference between the proportion you observe (X houses known to have burned down out of N houses counted: X/N) and p will be within an arbitrary degree of accuracy, ε, is more than c times greater than the likelihood that the difference will be outside that degree of accuracy. The c can be a number as large as you like.
This is the Weak Law of Large Numbers—the basis for all our dealings with repeated phenomena of uncertain cause—and it states that for any given degree of accuracy, there is a finite number of observations necessary to achieve that degree of accuracy. Moreover, there is a method for determining how many further observations would be necessary to achieve 10, or 100, or 1,000 times that degree of certainty. Absolute certainty can never be achieved, though; the aim is what Bernoulli called “moral certainty”—in essence, being as sure of this as you can be of anything.
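The law’s promise is easy to watch in action. Here is a minimal Python sketch (the true proportion p = 0.6 and the margin ε = 0.02 are illustrative choices): for each number of observations n, it estimates the probability that the observed proportion X/n lands within ε of p.

```python
import random

def chance_within(p, n, eps, trials=1000):
    """Estimate P(|X/n - p| < eps): how often the proportion observed
    in n draws falls within eps of the true proportion p."""
    hits = 0
    for _ in range(trials):
        x = sum(random.random() < p for _ in range(n))  # successes in n draws
        hits += abs(x / n - p) < eps
    return hits / trials

# The chance of landing inside the margin climbs toward certainty as n grows.
for n in (50, 500, 5000):
    print(n, chance_within(p=0.6, n=n, eps=0.02))
```

A typical run shows the chance rising from roughly a quarter at n = 50 to near-certainty by n = 5,000: exactly the narrowing of error that Bernoulli promised.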
Bernoulli was a contemporary of Newton and, like him, worked on the foundations of calculus. One might say Bernoulli’s approach to deriving “moral certainty” from multiple examples looks something like the notion of “limit” in calculus: if, for example, you want to know the slope of a smooth curve at a given point—well, you can’t. What you can do is begin with the slope of a straight line between that point and another nearby point on the curve and then observe how the slope changes as you move the nearby point toward the original point. You achieve any desired degree of accuracy by moving your second point sufficiently close to your first.
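The analogy can be made concrete in a few lines. In the sketch below (the curve y = x² and the point x = 1 are arbitrary choices), the secant slope settles toward the tangent slope, 2, as the second point slides in:

```python
# Secant slopes of y = x^2 between x = 1 and x = 1 + h: as h shrinks,
# the slope (f(1+h) - f(1)) / h closes in on the tangent slope, 2.
f = lambda x: x * x
for h in (1.0, 0.1, 0.01, 0.001):
    print(h, (f(1 + h) - f(1)) / h)  # approximately 3.0, 2.1, 2.01, 2.001
```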
It wasn’t just the fact that moral certainty could be achieved that interested Bernoulli; he wanted to know how many cases were necessary to achieve a given degree of certainty—and his law offers a solution.
As so often in the land of probability, we are faced with an urn containing a great many white and black balls, in the ratio (though we don’t know it) of 3 black to 2 white. We are in 1700, so we can imagine a slightly dumpy turned pearwood urn with mannerist dragon handles, the balls respectively holly and walnut. How many times must we draw from that urn, replacing the ball each time, to be willing to guess the ratio between black and white? Maybe a hundred times? But we still might not be very sure of our guess. How many times would we have to draw to be 99.9 percent sure of the ratio? Bernoulli’s answer, for something seemingly so intangible, is remarkably precise: 25,550. 99.99 percent certain? 31,258. 99.999 percent? 36,966. Not only is it possible to attain “moral certainty” within a finite number of drawn balls, but each further order of magnitude of certainty costs only the same 5,708 extra draws, not ten times as many. So if you are seventy years (or, rather, 25,550 days) old, you can be morally certain the sun will rise tomorrow—whatever David Hume may say.
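That first figure is easy to put to the test by simulation. In the sketch below, the margin of 1/50 is an assumption taken from the standard account of Bernoulli’s own worked example in Ars Conjectandi; the urn’s true ratio is the 3:2 of the text:

```python
import random

def observed_black(n, p=0.6):
    """Proportion of black balls seen in n draws, with replacement,
    from an urn that is truly 3 black to 2 white."""
    return sum(random.random() < p for _ in range(n)) / n

# Repeat Bernoulli's experiment a few hundred times: how often does
# the estimate from 25,550 draws land within 1/50 of the true 0.6?
trials = 200
inside = sum(abs(observed_black(25_550) - 0.6) < 1 / 50 for _ in range(trials))
print(f"{inside} of {trials} runs inside the margin")  # in practice, all of them
```

That every run lands inside is no accident: Bernoulli’s bound is deliberately conservative, demanding many more draws than the margin strictly requires.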
This discovery had two equally important but opposite implications, depending in part on whether you think 25,550 is a small or large number. If it seems small, then you will see how the justification for gathering mass data was now fully established. Until Bernoulli’s theorem, there was no reason to consider that looking closely at death lists or tax receipts or how many houses burn in London was other than idle curiosity, as there was no proof that frequent observations could be more valid than ingenious assumptions. Now, though, there was the Grail of moral certainty—the promise that enough observations could make one 99.9 percent sure of conjectures that the finest wits could not have teased from first principles. If 25,550 seems large to you, however, then you will have a glimpse of the vast prairies of scientific drudgery that the Weak Law of Large Numbers brought under its dominion. The law is a devourer of data; it must be fed to produce its certainties. Think how many poor scriveners, inspectors, census-takers, and graduate students have given the marrow of their lives to preparing consistent series of facts to serve this tyrannical theorem: mass fact, even before mass production, made man a machine. And the data, too, must be standardized, for if any two observations in the series are not directly comparable, the term X/N in the formula has no meaning. The Law collapses, and we are back bantering absolute truth with Aristotle and Hume. The fact that we now have moral certainty of so many scientific assertions is a monument to the humility and patience, not just the genius, of our forebears.
Genius, though, must have its place; every path through probability must stop for a moment to recognize another aspect of the genius of Laplace. His work binds together the a priori rules of frequency developed by Pascal and the a posteriori observations foreshadowed by Bernoulli into a single, consistent discipline: a calculus of probabilities, based on ten Principles. But Laplace did not stop at unifying the theory: he took it further, determining not just how likely a predetermined matter like a coin toss might be, nor how certain we can be of something based on observation—but how we ought to act upon that degree of certainty or uncertainty.
His own career in public office ended disastrously after only six weeks (“He brought into the administration,” complained Napoleon, “the spirit of the infinitesimals”), but Laplace retained a strong interest in the moral and political value of his work: “The most important questions of life . . . are indeed for the most part only problems of probability.” His 5th Principle, for instance, determines the absolute probability of an expected event linked to an observed one (such as, say, the likelihood of tossing heads with a loaded coin, given the observed disproportion of heads to tails in past throws). In explaining it, Laplace moved quickly from the standard example of coin tossing to shrewd and practical advice: “Thus in the conduct of life constant happiness is a proof of competency which should induce us to employ preferably happy persons.”
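One celebrated calculation in this spirit is Laplace’s rule of succession: having seen h successes in n trials, the probability that the next trial also succeeds comes out as (h + 1)/(n + 2). A minimal sketch (the observed counts are invented for illustration):

```python
from fractions import Fraction

def rule_of_succession(successes, observations):
    """Laplace's rule of succession: the probability that the next
    trial succeeds, given `successes` out of `observations` so far."""
    return Fraction(successes + 1, observations + 2)

# A suspect coin that has shown 8 heads in 10 throws:
print(rule_of_succession(8, 10))           # 3/4
# Hume's sunrise, risen every day for seventy years (25,550 days):
print(rule_of_succession(25_550, 25_550))  # 25551/25552
```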
Laplace’s calculus brought order to insurance in his lecture “Concerning Hope,” where he introduced a vital element that had previously been missing from the formal theory of chance: the natural human desire that things should come out one way rather than another. It added a new layer—what Laplace called “mathematical hope”—to the question of probability: an event had not only a given likelihood but a value, for good or ill, to the observer. The way these two layers interact was described in three Principles—and one conclusion—that are worth examining in detail.
8th Principle: When the advantage depends on several events it is obtained by taking the sum of the products of the probability of each event by the benefit attached to its occurrence.
This is the gambling principle you will remember from Chapter 4: If the casino offers you two chips if you flip heads on the first attempt and four chips if you flip heads only on the second attempt, you should multiply each chance of winning by its potential gains and then add them together: (1/2 × 2) + (1/4 × 4) = 2. This means, if it costs fewer than two chips to play, you should take the chance. If the price is more—well, then you know you’re in a real casino.
Of course, the same arithmetic applies to losses as it does to gains: mathematical fear is just the inverse of mathematical hope. So, if you felt you had a 1-in-2 chance of losing $2 million and a 1-in-4 chance of losing $4 million, you would happily (or at least willingly) pay a total premium of up to $2 million to insure against your whole potential loss. It is this ability to bundle individual chances into one overall risk that makes it possible to insure large enterprises.
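In code, the 8th Principle is a single sum of probability times payoff. A minimal sketch, using the two examples above:

```python
def mathematical_hope(prospects):
    """Laplace's 8th Principle: the advantage of a bundle of events is
    the sum of each event's probability times its benefit."""
    return sum(p * benefit for p, benefit in prospects)

# The casino game: heads on the first flip pays 2 chips,
# heads only on the second flip pays 4.
print(mathematical_hope([(1/2, 2), (1/4, 4)]))  # 2.0 chips: the fair price

# The bundled losses: the same sum gives the most you should pay to insure.
print(mathematical_hope([(1/2, 2_000_000), (1/4, 4_000_000)]))  # 2000000.0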
9th Principle: In a series of probable events of which the ones produce a benefit and the others a loss, we shall have the advantage which results from it by making a sum of the products of the probability of each favorable event by the benefit it procures, and subtracting from this sum that of the products of the probability of each unfavorable event by the loss which is attached to it. If the second sum is greater than the first, the benefit becomes a loss and hope is changed to fear.
Now this really does begin to sound like real life, where not everything that happens is a simple bet on coin flips—and where good and bad fortune are never conveniently segregated. Few sequences of events are pleasure unalloyed: every Christmas present has its thank-you letter. So how do we decide whether, on balance, this course is a better choice than that? Laplace says here that probability, like addition, doesn’t care about order—just as you know that 1 - 3 + 2 - 4 will give you the same result as 1 + 2 - 3 - 4, you can comb out any tangled skein of probable events into positive and negative (based on their individual likelihood multiplied by their potential for gain or loss) and then add each column up and compare the sums, revealing whether you should face this complex future with joy or despair.
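The 9th Principle asks for nothing new computationally: losses enter the same sum with negative signs, and because addition is indifferent to order, the “combing out” happens by itself. A sketch with invented figures:

```python
def net_hope(prospects):
    """Laplace's 9th Principle: gains count positively, losses negatively;
    the sign of the total says whether to hope or to fear."""
    return sum(p * value for p, value in prospects)

# An invented tangle of good and bad prospects, in no particular order:
tangle = [(0.5, 100), (0.25, -300), (0.2, 50), (0.05, -200)]
print(net_hope(tangle))          # -25.0: the benefit becomes a loss
print(net_hope(sorted(tangle)))  # -25.0: the order never mattered
```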