Matt Ridley is wrong: progress is all about predicting the future
I’m a huge fan of Matt Ridley’s work, but his most recent column on experts struck me as a little off-base.
You’re better off ignoring ‘expert’ predictions about the future, he says, pointing to so-called experts’ failed predictions about the FTSE dropping after the Brexit vote, and bad (sometimes terrible) forecasts by the Met Office.
In contrast, there are the plumbers, doctors and mechanics we rely on regularly:
There are no experts on the future. Explaining the present and the past requires expertise: ‘it’s your carburettor/prostate’. In forecasting the future, experts are generally no better than everybody else. They might be worse.
But a plumber or a doctor is an expert on the future. I don’t hire plumbers to tell me why I got a leak; I hire the ones who can correctly predict which pipe to replace to stop it leaking in the future.
Doctors who blamed miasma for plagues were explaining the present and past. The ones who figured out how to stop plagues in the future were the ones we actually needed.
You can fit any number of theories to past experience. The useful theory is the one that can predict new experiences.
Consider Feynman on the scientific method:
First we guess a new law. (“Don’t laugh, that’s really true!”) Then we compute the consequences of the guess to see what that guess would imply. Then we compare those results to nature, experiments or experience. If it disagrees with our experiments, it’s wrong. If it cannot predict what we observe, it’s false.
Without predictions, in other words, we’re not doing science at all.
In Trial & Error, Madsen points out that theories that successfully predict events in this way cannot be said to be true, but they are useful. They allow us to shape the universe in a way that is more to our liking.
Matt is a little unfair to the Met Office. It’s not confusing to say that weather forecasts are “based on probabilities”, nor is it untestable. If they predict an 80% chance of rain and four times out of five it rains, that looks like a useful method. If it only rains two times out of five, it looks like a useless method – one that is systematically wrong.
If I say that you have a five-in-six chance of rolling between 1 and 5 on a single roll of a die, rolling a six does not make me a bad forecaster. The examples he gives of bad forecasts are a little like that. Overall, the Met Office does well at short-term forecasts. Our inability to rigorously test its climate models is cause to be sceptical about those models, as Rupert Darwall points out. But that doesn’t mean we should throw up our hands and accept that all predictions, everywhere, are worthless.
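To see why probabilistic forecasts are perfectly testable, here is a toy sketch in Python (my own illustration, with made-up numbers, not the Met Office’s actual data): over a long run of “80% chance of rain” forecasts, the well-calibrated forecaster is vindicated about 80% of the time, and the systematically wrong one isn’t.

```python
import random

def hit_rate(outcomes):
    """Fraction of forecast days on which the predicted event occurred."""
    return sum(outcomes) / len(outcomes)

random.seed(42)
days = 10_000

# A calibrated forecaster: when they say "80% chance of rain",
# it really does rain about 80% of the time.
calibrated = [random.random() < 0.8 for _ in range(days)]

# A systematically wrong forecaster: the same "80%" claim,
# but it rains only about 40% of the time.
miscalibrated = [random.random() < 0.4 for _ in range(days)]

print(f"calibrated:    {hit_rate(calibrated):.2f}")     # ~0.80, the claim checks out
print(f"miscalibrated: {hit_rate(miscalibrated):.2f}")  # ~0.40, the claim fails the test
```

A single dry day, like a single roll of a six, tells you nothing; it is the long-run frequencies that put the forecaster’s claim to the test.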
Matt mentions the fascinating “superforecasters” project, made up of people who consistently beat the average in predicting geopolitical events. They are more foxes than hedgehogs, willing to adapt their models to new information rather than sticking to the same one come what may. But their existence rather blows a hole in his basic claim, which is that complexity renders prediction useless. It seems people just need to change their methods.
Similarly, if consistently accurate predictions were impossible, then stock markets and bookies, whose prices and odds are themselves predictions, should be easy to beat consistently. Since it is actually very difficult to beat the market or the betting odds repeatedly, it seems as if these are quite good at predicting things.
And while it’s true that there’s an awful lot we can never know in economics, we can and do make accurate predictions about it quite often. Impose a tax on something and, with some exceptions, you’ll usually get less of it. Put a price floor on something and you’ll get too much of it; impose a price ceiling and you’ll get too little.
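To put numbers on that, here is a deliberately crude sketch in Python, using made-up straight-line supply and demand curves (my illustration, not a real model): a ceiling below the market-clearing price leaves buyers wanting more than sellers will supply, and a floor above it does the reverse.

```python
# Made-up linear curves for illustration only: buyers demand less,
# and sellers supply more, as the price rises.
def quantity_demanded(price):
    return 100 - price

def quantity_supplied(price):
    return price

# The market clears at price 50, where demanded == supplied == 50 units.

ceiling = 30  # a price ceiling held below the market-clearing price
shortage = quantity_demanded(ceiling) - quantity_supplied(ceiling)
print(f"shortage under the ceiling: {shortage} units")  # 40 units: too little

floor = 70  # a price floor held above the market-clearing price
surplus = quantity_supplied(floor) - quantity_demanded(floor)
print(f"surplus under the floor: {surplus} units")  # 40 units: too much
```

The particular numbers are invented, but the direction of the effect is exactly the kind of reliable prediction the paragraph above describes.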
It’s impossible to make a policy argument without a prediction like this. If we didn’t have good, reliable predictions that communism ruins everything, the case against communism would be weaker. How could we possibly say that one policy or another is bad without implicitly predicting that, if implemented, it would be likely to go badly?
I think what Matt is really against is bad predictions based on false ideas. And there is no shortage of these, nor of experts willing to peddle them to an unsuspecting public. But they are often easy to spot precisely because their predictions are so bad. Put Seumas Milne in charge of the Treasury and I wager that I can make a good prediction about the state of the economy in a few years’ time.
Identifying these bad predictions and their advocates is what we should be doing, not denying that anyone can make predictions at all. That’s why we should ask people to write clearly, to be precise about their claims, and to admit it when they’re wrong. (They’ll definitely tell us when they’re right.)
I’m biased, of course. We at the ASI love to make predictions: we think the world is getting better, that technology is advancing (we even have a few ideas about how), and that making people freer makes their lives better. We’re just as frustrated as Matt is about phoney “experts” and their predictions. But without useful predictions, tested against reality, we wouldn’t have a leg to stand on.