Chances Are Read online




  Table of Contents

  PENGUIN BOOKS

  Title Page

  Copyright Page

  Dedication

  Acknowledgments

  Chapter 1 - Thinking

  Chapter 2 - Discovering

  Chapter 3 - Elaborating

  Chapter 4 - Gambling

  Chapter 5 - Securing

  Chapter 6 - Figuring

  Chapter 7 - Healing

  Chapter 8 - Judging

  Chapter 9 - Predicting

  Chapter 10 - Fighting

  Chapter 11 - Being

  Index

  PENGUIN BOOKS

  CHANCES ARE . . .

  Michael Kaplan studied European history at Harvard and Oxford. After a stint as producer/director at WGBH, he has been an award-winning writer and filmmaker for corporations, governments, museums, and charities. He lives near Edinburgh, Scotland, with his wife and son.

  Ellen Kaplan trained as a classical archaeologist and has taught math, biology, Greek, Latin, and history. She and her husband, Robert, run the Math Circle, a nonprofit foundation dedicated to joyous participatory learning, and are the authors of The Art of the Infinite.

  Michael Kaplan and Ellen Kaplan | Penguin Books

  PENGUIN BOOKS

  Published by the Penguin Group

  Penguin Group (USA) Inc., 375 Hudson Street, New York, New York 10014, U.S.A. • Penguin Group (Canada), 90 Eglinton Avenue East, Suite 700, Toronto, Ontario, Canada M4P 2Y3 (a division of Pearson Penguin Canada Inc.) • Penguin Books Ltd, 80 Strand, London WC2R 0RL, England • Penguin Ireland, 25 St Stephen’s Green, Dublin 2, Ireland (a division of Penguin Books Ltd) • Penguin Group (Australia), 250 Camberwell Road, Camberwell, Victoria 3124, Australia (a division of Pearson Australia Group Pty Ltd) • Penguin Books India Pvt Ltd, 11 Community Centre, Panchsheel Park, New Delhi - 110 017, India • Penguin Group (NZ), 67 Apollo Drive, Mairangi Bay, Auckland 1311, New Zealand (a division of Pearson New Zealand Ltd) • Penguin Books (South Africa) (Pty) Ltd, 24 Sturdee Avenue, Rosebank, Johannesburg 2196, South Africa

  Penguin Books Ltd, Registered Offices: 80 Strand, London WC2R 0RL, England

  First published in the United States of America by Viking Penguin, a member of Penguin Group (USA) Inc. 2006

  Published in Penguin Books 2007

  Copyright © Michael Kaplan and Ellen Kaplan, 2006

  All rights reserved

  Grateful acknowledgment is made for permission to reprint excerpts from the following copyrighted works:

  “Law Like Love” from Collected Poems by W. H. Auden. Copyright 1940 and renewed 1968 by W. H. Auden. Used by permission of Random House, Inc.

  “Choruses from ‘The Rock’ ” from Collected Poems 1909-1962 by T. S. Eliot. Copyright 1936 by Harcourt, Inc. and renewed 1964 by T. S. Eliot. Used by permission of Harcourt, Inc. and Faber and Faber Ltd.

  “Hope” by Randall Jarrell. Copyright 1945 by Randall Jarrell, renewed 1990 by Mary von Schrader Jarrell.

  eISBN: 978-1-440-68451-7

  1. Probabilities—Popular works. I. Kaplan, Ellen. II. Title.

  QA273.15.K37 2006

  519.2—dc22 2005058471

  The scanning, uploading and distribution of this book via the Internet or via any other means without the permission of the publisher is illegal and punishable by law. Please purchase only authorized electronic editions, and do not participate in or encourage electronic piracy of copyrighted materials. Your support of the author’s rights is appreciated.

  http://us.penguingroup.com

  To Jane, who likes probability,

  Bob, who likes chance,

  and Felix, who likes risk

  Acknowledgments

  We live in wonderful times, where shared interests can make new friends instantly across the globe. We want most of all to thank the many people who agreed to be interviewed for this book, or who offered their particular expertise in person or by telephone, letter, or e-mail. We came to regard the enthusiastic help of strangers as one certainty in an uncertain world.

  Peter Ginna, friend to both generations, provided the impetus that set the work going. Each of us relied to such a degree on the talents available in our own homes that this should be considered the work of an extended family.

  Rick Kot and his associates at Viking saw the manuscript through to its final form with enthusiasm, professionalism, and dispatch. They, like the others, helped give the book its virtues; its faults are ours alone.

  1

  Thinking

  The present is a fleeting moment, the past is no more; and our prospect of futurity is dark and doubtful. This day may possibly be my last: but the laws of probability, so true in general, so fallacious in particular, still allow about fifteen years.

  —Gibbon, Memoirs

  We search for certainty and call what we find destiny. Everything is possible, yet only one thing happens—we live and die between these two poles, under the rule of probability. We prefer, though, to call it Chance: an old familiar embodied in gods and demons, harnessed in charms and rituals. We remind one another of fortune’s fickleness, each secretly believing himself exempt. I am master of my fate; you are dicing with danger; he is living in a fool’s paradise.

  Until the 1660s, when John Graunt, a bankrupt London draper, proposed gauging the vitality of his city by its Bills of Mortality, there were only two ways of understanding the world: inductively, by example; or deductively, by axiom. Truths were derived either from experience—and thus hostages to any counterexample lying in ambush—or were beautiful abstractions: pure, consistent, crystalline, but with no certain relevance to the world of mortals. These two modes of reasoning restricted not just the answers we had about life, but the questions we could ask. Beyond, all else was chance, fortune, fate—the riddle of individual existence.

  Graunt was the first to unearth truths from heaps of data. His invention, eventually known as statistics, avoided alike the basic question of Being (“all things are possible”) and the uniqueness of individual existence (“only one thing happens”). It got around the problem of uncertainty by asking: “Well, exactly how right do you need to be just now?”

  In that same inventive decade, Blaise Pascal was working both on dice-throwing puzzles and on his own, far more compelling, problem: “What shall I do to be saved?” Again, neither induction nor deduction could provide the answers: God and the dice alike refuse to be bound by prior behavior. And yet, and yet . . . in the millennia since Creation, the world has tended to be a certain way, just as, over a thousand throws of a die, six tends to come up a certain proportion of times. Pascal was the first to see that there could be laws of probability, laws neither fit for Mosaic tablets nor necessarily true for any one time and place, but for life en masse; not for me today, but for mankind through all of time.
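  Pascal’s intuition can be watched at work on a desk rather than across millennia. The short sketch below, written in Python purely as an illustration (the language, the function name, and the fixed seed are our choices for this example, not anything in the historical record), throws a fair die again and again and reports what share of the throws came up six; as the number of throws grows, that share settles toward one in six.

  import random

  def proportion_of_sixes(throws: int, seed: int = 1) -> float:
      """Throw a fair six-sided die `throws` times; return the fraction that came up six."""
      rng = random.Random(seed)
      sixes = sum(1 for _ in range(throws) if rng.randint(1, 6) == 6)
      return sixes / throws

  if __name__ == "__main__":
      # The expected proportion is 1/6, roughly 0.1667.
      for n in (10, 100, 1_000, 10_000, 100_000):
          print(f"{n:>7,} throws: proportion of sixes = {proportion_of_sixes(n):.4f}")

  Run with ever larger counts, the printed proportions wobble less and less around 0.1667: the steadiness in the mass that Pascal glimpsed behind the waywardness of any single throw.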

  The combination of the tool of statistics and the theory of probability is the underpinning of almost all modern sciences, from meteorology to quantum mechanics. It provides justification for almost all purposeful group activity, from politics to economics to medicine to commerce to sports. Once we leave pure mathematics, philosophy, or theology behind, it is the unread footnote to every concrete statement.

  And yet it goes against all human instinct. Our natural urge in seeking knowledge is either for deductive logical truth (“Happiness is the highest good”) or for inductive truths based on experience (“Never play cards with a man called Doc”). We want questions to fall into one of these categories, which is one of many reasons most of us find probability alien and statisticians unappealing. They don’t tell us what we want to know: the absolute truth. Their science is right everywhere but wrong in any one place: like journalism, it is true except in any case of which we have personal knowledge. And, while we may be willing to take a look at the numbers, we rebel at the idea of being one—a “mere statistic.”

  But there are people in the world who can dance with data, people for whom this mass of incomplete uncertainties falls beautifully into place, in patterns as delightful and revealing as a flock of migrating swans. Graunt, Pascal, the Reverend Thomas Bayes, Francis Galton, R. A. Fisher, John von Neumann—all are figures a little to the side of life, perhaps trying to puzzle their way toward a grasp of human affairs that the more sociable could acquire glibly through maxim and proverb. The people who use probability today—market-makers, cardplayers, magicians, artificial-intelligence experts, doctors, war-game designers—have an equally interesting and oblique view of the human affairs they analyze and, to an extent, control.

  If you have ever taken a long car journey with an inquisitive child, you will know most of the difficulties with formal reasoning. Questions like “How do you know that?” and “What if it’s not like that?” pose real problems—problems that have kept philosophy hard at work for over two thousand years. “How do you know?” is particularly tricky: you “know” that protons are composed of quarks, or that the President spoke in Duluth because he’s courting the union vote—but is this “knowing” the same as knowing that the angles of a triangle add up to 180 degrees, or that you have a slight itch behind your left ear? Intuition says they are not the same—but how do you know?

  This was the great question in Plato’s time, particularly because the Sophists insisted that it was no question at all: their idea was that persuasion was the basis of knowledge, and that therefore rhetoric was the form of proof. The Sophist Gorgias promised to give his students “such absolute readiness for speaking, that they should be able to convince their audience independently of any knowledge of the subject.” Conviction was enough, since, he believed, nothing actually existed; or if it did, it could not be known; or if it could, it was inexpressible. This view offered the advantage that we could know everything the same way—protons to presidents—but had the disadvantage that we knew nothing very well.

  Plato and his circle hated the Sophists for their tort-lawyer cockiness and their marketing of wisdom, but most of all for their relativism. Platonists never accept that things are so just because someone has had the last word; some things are so because they have to be. A well-constructed pleading does not make 3 equal 5. Plato’s student Euclid arranged his books of geometry into definitions of objects; axioms, the basic statements of their relations; and theorems, statements that can be proved by showing how they are only logical extensions of axioms. A demonstration from Euclid has a powerful effect on any inquiring mind; it takes a statement, often difficult to believe, and in a few steps turns this into knowledge as certain as “I am.”

  So why can’t all life be like Euclid? After all, if we could express every field of inquiry as a consistent group of axioms, theorems, and proofs, even a president’s speech would have one, incontrovertible meaning. This was Aristotle’s great plan. The axioms of existence, he said, were matter and form. All things were the expression of form on matter, so the relationship between any two things could be defined by the forms they shared. Mortality, dogness, being from Corinth, or being the prime mover of the universe—all were aspects of being that could be set in their proper, nested order by logical proof. Thus, in the famous first syllogism of every textbook:

  All men are mortal.

  Socrates is a man.

  Therefore Socrates is mortal.

  This must be so; the conclusion is built into the definitions. Aristotle’s syllogisms defined the science of reasoning from his own time right up to the beginning of the seventeenth century. But there is an essential flaw in deductive reasoning: the difference between the valid and the true. The rules for constructing a syllogism tell you whether a statement is logically consistent with the premises, but they tell you nothing about the premises themselves. The Kamchatkans believe that volcanoes are actually underground feasting places where demons barbecue whales: if a mountain is smoking, the demons are having a party; there is nothing logically wrong with this argument. So deductive logic is confined to describing relations between labels, not necessarily truths about things. It cannot make something from nothing; like a glass of water for Japanese paper flowers, it simply allows existing relationships to unfold and blossom. Today, its most widespread application is in the logic chip of every computer, keeping the details of our lives from crashing into contradiction. But, as computer experts keep telling us, ensuring that the machines are not fed garbage is our responsibility, not theirs. The premises on which automated logic proceeds are themselves the result of human conclusions about the nature of the world—and those conclusions cannot be reached through deduction alone.

  You’ll remember that the other awkward question from the back seat was “What if it’s not like that?” Instinctively, we reason from example to principle, from objects to qualities. We move from seeing experience as a mere bunch of random stuff to positing the subtly ordered web of cause and effect that keeps us fascinated through a lifetime. But are we justified in doing so? What makes our assumptions different from mere prejudice?

  Sir Francis Bacon fretted over this question at the turn of the seventeenth century, projecting a new science, cut loose from Aristotle’s apron strings and ready to see, hear, feel, and conclude for itself using a method he called “induction.” Bacon was Lord Chancellor, the senior judge of the realm, and he proceeded in a lawyerly way, teasing out properties from experience, then listing each property’s positive and negative instances, its types and degrees. By cutting through experience in different planes, he hoped to carve away all that was inessential. Science, in his scheme, was like playing “twenty questions” or ordering a meal in a foreign language: the unknown relation was defined by indirection, progressively increasing information by attempting to exclude error.

  Induction actually has three faces, each turned a slightly different way. The homely village version is our most natural form of reasoning: the proverb. “Don’t insult an alligator until you’re over the creek”; “A friend in power is no longer your friend.” Everything your daddy told you is a feat of induction, a crystal of permanent wisdom drawn out of the saturated solution of life.

  Induction’s second, more exalted face is mathematical: a method of amazing power that allows you to fold up the infinite and put it in your pocket. Let’s say you want to prove that the total of the first n odd numbers, starting from 1, is n². Try it for the first three: 1 + 3 + 5 = 9 = 3²; so far, so good. But you don’t want to keep checking individual examples; you want to know if this statement is true or false over all examples—the first billion odd numbers, the first googol odd numbers.

  Why not start by proving the case for the first odd number, 1? Easy: 1 = 1². Now assume that the statement is true for an abstract number n; that is: 1 + 3 + 5 + . . . up to n odd numbers will equal n². It would probably help if we defined what the nth odd number is: well, the nth even number would be 2n (since the evens are the same as the 2 times table), so the nth odd number is 2n − 1 (since the first odd, 1, comes before the first even, 2). Now we need to show that if the statement is true for the nth odd number, it will also be true for the n + 1st; that is:

  Assuming that

  1 + 3 + 5 + . . . + (2n − 1) = n²

  show that

  1 + 3 + 5 + . . . + (2n − 1) + (2n + 1) = (n + 1)²

  Let’s look more closely at that (n + 1)² on the right. If we do the multiplication, it comes out as n² + 2n + 1. But wait a minute: that’s the same as n², the sum of the first n odd numbers, plus 2n + 1, the next odd number. So if our statement is true for n odd numbers it must be true for n + 1.

  But, you may be asking, aren’t you just proving a relation between two imaginary things? How is this different from deduction? It’s different because we already know the statement is true for the first odd number, 1. Set n equal to 1; now we know it’s true for the second odd, 3; so we can set n equal to 2, proving the statement for the next odd, 5—and so on. We don’t need to look at every example, because all the examples are equivalent; we have constructed a rule that governs them all under the abstract title n. Away they go, like a row of dominoes, rattling off to infinity.
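  For readers who like to watch a few dominoes actually fall, here is a minimal spot check, again in Python and again purely illustrative (the function name is our invention for this example): it adds up the first n odd numbers directly and compares the total with n² for a run of small cases. It proves nothing that the induction has not already settled; it simply confirms the pattern on the examples we can reach.

  def sum_of_first_n_odds(n: int) -> int:
      """Add the first n odd numbers: 1 + 3 + 5 + ... + (2n - 1)."""
      return sum(2 * k - 1 for k in range(1, n + 1))

  if __name__ == "__main__":
      # Compare the direct sum with n squared for n = 1 through 20.
      for n in range(1, 21):
          assert sum_of_first_n_odds(n) == n ** 2
      print("1 + 3 + . . . + (2n - 1) = n squared holds for n = 1 through 20")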

  The third, inscrutable, face of induction is scientific. Unfortunately, very little in the observable world is as easily defined as an odd number. Science would be so much simpler if we could consider protons, or prions, or pandas under an abstract p and show that anything true for one is bound to be true for p + 1. But of course we can’t—and this is where probability becomes a necessity: the things we are talking about, the forms applied to matter, are, like Aristotle’s axioms, defined not by themselves but by us. A number or a geometrical form is its own definition—a panda isn’t.

  Our approach to science follows Bacon’s: look and see, question and test. But there are deep questions hiding below these simple instructions. What are you looking for? Where should you look? How will you know you’ve found it? How will you find it again? Every new observation brings with it a freight of information: some of it contains the vital fact we need for drawing conclusions, but some is plain error. How do we distinguish the two? By getting a sense of likely variation.

  This makes scientific induction a journey rather than an arrival; while every example we turn up may confirm the assumption we have made about a cause, we will still never reach ultimate truth. Without repetition we could never isolate qualities from experience, but repetition on its own proves nothing. Simply saying “The sun is bright” requires, in all honesty, the New Englander’s reply “Yep—so far.”