REVIEWING

The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t

Nate Silver, Penguin Press, 2012

After the tremendous success of Nate Silver’s model in correctly predicting the outcome of the 2012 presidential election, his book The Signal and the Noise, published by Penguin in the same year, received a great deal of well-deserved publicity. In it, he accessibly describes how to develop, use, and evaluate models, with particular emphasis on this last point.

The late statistician George Box was famous for many things, not least for his statement that “essentially, all models are wrong, but some are useful.” All statisticians, whether formally trained like Box or informally trained like Silver, know this to be true. There are many ways for any given model to be wrong—it can omit important variables, it can include too many variables, it can misidentify the relationships between variables—and everyone can think of additional ways in which models cannot possibly be entirely accurate. It is important to bear in mind that, when we make models, we set out in advance to create a description of reality that by definition will be less complex than the reality we hope to describe, and so all models must, in some sense, be “wrong.”

The rightness or wrongness of models is thus not of great importance, odd though this may sound. The point of making a model is to describe salient features of reality, or to predict features of reality that may help us make sensible decisions about future actions. Successful models often deal with small bits of reality and answer questions such as: if I run this chemical reaction at a slightly higher temperature, will it produce more or less of the desired product? If I plant seeds slightly closer together, will the resulting plants still be the same size? Good models answer questions such as these with a great deal of accuracy and precision.

Usually, when we ask questions that encompass larger portions of reality, modeling gets exponentially more difficult, and this is one of the key points of Silver’s book. If we ask a question such as “Where on Earth will the next large earthquake hit?” we can get an accurate, but imprecise, answer by saying that it will probably be somewhere near the Pacific Rim. Naturally, this is not a useful answer. When we ask where in California the next earthquake will strike, we don’t even have a model that allows us to answer with any useful accuracy or precision, though we can predict that there will be one. We can even predict, with some degree of confidence, that some areas of California have a higher probability of suffering a large earthquake than others. This is somewhat useful.

Similarly, when we ask hard questions about climate change or overpopulation, our models are often not able to answer these important questions with any helpful precision or accuracy. That is the nature of hard questions. Silver gives many examples in his book of such hard questions: “when will the next catastrophic earthquake strike?” or “when will the next killer flu happen?”

He presents, in a lively and engaging manner, the many ways in which we have not been able to make much progress in answering these hard questions, although he also tells some encouraging stories about how some hard problems are being addressed in productive ways. His example of our increased ability to predict where hurricanes will make landfall is a case in point. A generation ago, we could predict a hurricane’s landfall only to within a 350-mile radius of the point where it actually came ashore. Now we can do so to within a 100-mile radius, a tremendously useful advance. Progress in modeling tends to come in small, but important, steps like these.

Silver’s discussion of climate change is perhaps the most interesting. Though his points are obvious, that does not detract from either their usefulness or the sophistication with which he brings them into focus. Indeed, that is the substance of the title of his book: picking out the “signal” that may reside in data from the “noise” is a critical skill for successful model builders and users.

Essentially, what Silver says is that, although predictions made from climate-change models have not been especially accurate, this fact should not surprise us. The predictions have been close enough to observed facts that we cannot conclude that the climate-warming hypothesis is wrong. This is analogous to the hurricane-prediction model: it is not invalidated if it predicts that a hurricane will make landfall within 50 miles of New Orleans and the storm instead comes ashore at a point 45 miles east of the city.

What this does bring up, a point Silver makes very clearly in his book, is that models are best used to describe reality as it now is, or as it will be at a time not very far into the future (if things stay much the same as they are now). That if is important! In terms of climate change, this should instill proper humility in those trying to predict the future using very complex models that are known not to be entirely accurate. On the other hand, we would also be ill advised to ignore the warnings from these models. We should base our decisions about how we face the future on an understanding of what the risks of our action or inaction are likely to be. Those risks alone make attempts to reduce global warming sensible.

It is even more important to note that it makes no sense to use even the very best models to predict far out into the future. Distant prediction moves us very far from where the data on which the model is based can provide useful information. A trivial example of this error is modeling a young baby’s weight gain in the first few weeks of life and concluding, correctly, that it is likely to be around 300 grams per week. This does not mean that, at the end of 50 years of growth, the baby will weigh 780 kilograms more than it does today.
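For readers who want to see where that number comes from, the naive extrapolation (taking a year as roughly 52 weeks) is just

\[
300~\text{g/week} \times 52~\text{weeks/year} \times 50~\text{years} = 780{,}000~\text{g} = 780~\text{kg}.
\]

The arithmetic is impeccable; it is the assumption that the growth rate of the first few weeks continues for 50 years that is absurd, and that is precisely the point.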

Even when predictions are made about the future using “good” models, they come with built-in warnings: the further ahead we try to predict, the less certain we are about the quality of our prediction, until it becomes essentially no better than an informed guess. It is very important for all of us who are concerned with any issue, be it economic forecasting or climate prediction, to be aware that care is needed when making and interpreting predictions about the future. It is, of course, especially concerning when people with opposing views argue that because a model is not exactly correct, it must be completely wrong. This “all or nothing” argument characterizes too many discussions about serious issues and is neither correct nor useful, except as a rhetorical device.

One of the other crucial topics raised by Silver is uncertainty, in the form of probability or the likelihood of things happening. It is well known that probability is an extremely difficult concept for us to understand and use well. In fact, humans seem not to have accurate intuition about risk, especially small risks. (Even more difficult is the comparison of two small risks.) Most of us appear to have no mechanism for making rational decisions about these small risks, and so good training in statistics and probability is vital.

A well-known example of the kind of thinking that almost all of us get wrong, without being taught how to get it right, involves the very difficult idea of conditional probability. Suppose that a company tests all its job applicants for drug use. It uses high-quality blood tests with the following characteristics: (1) the probability that the blood test is negative when testing a real drug user’s blood is 3 percent (called a false negative); (2) the probability that the blood test is positive when testing a non–drug user’s blood is 2 percent (called a false positive); and (3) about 4 percent of all job applicants at this company actually use drugs. So far, so good!

Now, the really interesting question: suppose that a job applicant’s blood tests come back positive from the lab. What is the probability that this job applicant is a drug user? The surprising answer is that this probability is only 67 percent. This number is much lower than most people would think. If you want to find out how to calculate it, I refer you to Silver’s very clear explanation in the book. In practice, it means that one-third of the job applicants with positive blood tests are not in fact drug users. If they are not hired by the company because of this misleading positive, is the testing fair?
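For readers who cannot wait for the book, the 67 percent figure can be checked with Bayes’ rule, using only the numbers given above (97 percent of drug users test positive, 2 percent of non-users test positive, and 4 percent of applicants are users):

\[
P(\text{user} \mid \text{positive}) = \frac{0.97 \times 0.04}{0.97 \times 0.04 + 0.02 \times 0.96} = \frac{0.0388}{0.0580} \approx 0.67.
\]

The small base rate, with only 4 percent of applicants being users, is what drags the answer so far below the test’s apparent accuracy.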

Overall, the book is an interesting and welcome one. Silver understands that life is filled with variability, and that one very good way of making sense of things in the face of this uncertainty is to collect lots of good data about the things that interest us. On the basis of these data, we can build some good models of reality, and often we can use them to our advantage. Although this may be a disappointingly small claim for those who look for triumphalism in human achievements, it is a realistic claim of which we can be both proud and confident. Silver is to be commended for understanding this and presenting it in such a witty and sensible way.

Brian Jersky

Brian Jersky currently serves as dean of the College of Science at Cal Poly Pomona. Prior to that, he was dean of the School of Science at Saint Mary’s College of California, after which he spent two...
