Adam Smith Institute

Common Sense, Science and Nonscience

The government’s response to the Covid pandemic has thrown common sense, science and nonscience into stark relief. The seven strains of coronavirus that infect humans cause illnesses much like colds and flu.  We have been battling their various pandemics for 140 years and should have learned how to do it by now.  The assumption by the Department of Health, now the Department of Health and Social Care (DHSC), in 2005 that the next pandemic would be like the flu was not wide of the mark. Contagious bovine pleuropneumonia, caught from cattle, as the name indicates, was the first such pandemic historically recorded.  We culled the cattle for no good reason, because after the first cases the contagion was human to human.  The human mortality rate was high then. Imperial’s Professor Ferguson’s recommendation to cull the cattle during the mad cow disease period (1994/6) was similarly wrong; eating the meat we normally eat carried no risk. After each pandemic, immunity grew to a greater or lesser extent and the disease ceased to be life-threatening. 

After all these years, we know to stay away from people with flu-like symptoms and we do not thank them for approaching us.  If one has just had flu, then there is little risk in being close to someone developing those symptoms. Testing and tracing are obvious ways of keeping the carriers away from the rest of the public. If hospital admissions are escalating dramatically, it makes sense to step up distancing in case they exceed capacity. No sane person should challenge government advice to keep apart and wash our hands. This is common sense, not science. 

Questions arise, however, when ministers, most of whom graduated, ironically enough, in PPE rather than science, treat people in white coats with excessive reverence and fail to distinguish science from nonscience, or relevant from irrelevant expertise.  Outside her own field, a scientist is no more expert than anyone else. “Models” have been especially venerated, probably because they are usually presented as impressive equations. Since the 19th century, mathematical models have been used to depict scientific thinking. The scientific method is, usually, conjecture -> model -> empirical testing -> revision of the conjecture, cycling on until the model becomes a useful depiction of reality. As the statistician George Box pointed out, “all models are wrong but some are useful”.[1] 

A map is a model; following one will reliably bring you to where you want to go. That is because the model has been empirically verified over generations. A thousand years ago, European maps showed terra incognita, but they would not have brought you to the area now called Washington DC.  Models of the unknown are conjectures, i.e. guesswork. For whatever reason, the government has been economical in sharing the evidence from Sage, and initially even its membership. The nonsense excuse was that disclosure would compromise the academics’ freedom to publish, even though their work was mostly funded by the taxpayer. 

The models fall into two types: prediction, and analysis of the data so far. Neither of these is science: the first category requires conjectures about the future. We now know that some of these predictions were wide of the mark. “According to Herodotus, erring soothsayers were clapped in irons and laid in bracken-filled oxcarts which were then set alight. Whether this improved the quality of forecasts is not known: most likely it did, at least, reduce the quantity of speculative and baseless prediction.”  To be called “science”, models of both types should have been peer reviewed and tested against fresh empirical data.  They were not. The concluding paragraph of one of the Sage papers is revealing: “As with all modelling, it is impossible to capture the full complexity of an epidemic. In this model, the major assumptions are that we have assumed that there is no change in behaviour during the course of the epidemic…We have not included any age-effects…we are not able to investigate the impact of school closures or the impact of the summer holidays, which had a large impact on the H1N1 influenza pandemic in 2009.”

Two of the best-qualified critics of Sage are Doctors John Lee and Mike Yeadon. Here is Lee in June: “There’s really no clear signal (apart from modelling, which doesn’t count) that these interventions [lockdowns and social distancing] have had any significant effects on the epidemic curves, either on the way in or the way out of these rules, in many different variants and in many different countries.” And in July: “But how does modelling relate to ‘the science’ we heard so much about? An important point — often overlooked — is that modelling is not science, for the simple reason that a prediction made by a scientist (using a model or not) is just opinion.”  

Yeadon has made two strong critiques. They were little reported, probably because the articles were not written to academic standards and were published in a prejudiced medium.  His review of the composition of the Sage group concluded it had no one with “a biology degree [or] a post-doctoral qualification in immunology. A few medics, sure. Several people from the humanities including sociologists, economists, psychologists and political theorists. No clinical immunologists. What there were in profusion – seven in total – were mathematicians.” Professor Ferguson has been a lead mathematical modeller and is described as a “mathematical biologist”, but there is no such speciality: cutting-edge biology is now conducted by teams that include biologists and mathematicians with different skills. In the US, mathematics is usually considered a science, whereas in Europe it is considered one of the liberal arts.  It is a language and a way of depicting natural phenomena and, in my view as an Oxford mathematician, the European view is the more correct. Yeadon’s point is that the Sage committee had no scientist with the relevant expertise.  

His next issue is Sage’s assumption that, as Covid-19 is a new virus, no one had any immunity. Whilst the levels of total or partial (cold-like symptoms) immunity provided by T cells resulting from other members of the coronavirus family have yet to be established, José Mateus (Center for Infectious Disease and Vaccine Research, La Jolla Institute for Immunology) et al. concluded that they are “a contributing factor to variations in COVID19 patient disease outcomes, but at present this is highly speculative.”

Yeadon’s second challenge is that Sage has grossly underestimated the number infected so far and therefore overstated the risk. Working back from the IFR (infection fatality ratio), which has been widely studied around the world, e.g. Ioannidis (2020), he calculates that Covid-19 has so far infected “32% of our population of 67 million. That estimate might be a little high, but I’m confident it’s a great deal closer to the real number than SAGE’s 7%.” (p.10) 
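The arithmetic behind this kind of back-calculation is simple enough to sketch. The figures below are placeholders chosen for illustration, not Yeadon’s actual inputs: one divides the cumulative death toll by an assumed IFR to estimate total infections, then divides by the population.

# Minimal sketch of an IFR back-calculation (Python); the death toll and IFR are illustrative assumptions.
uk_population = 67_000_000        # population figure quoted in the article
cumulative_deaths = 60_000        # hypothetical cumulative Covid-19 deaths
assumed_ifr = 0.003               # hypothetical infection fatality ratio (0.3%)

estimated_infections = cumulative_deaths / assumed_ifr
attack_rate = estimated_infections / uk_population
print(f"Estimated infections: {estimated_infections:,.0f} ({attack_rate:.0%} of the population)")

With those placeholder numbers the sketch gives roughly 20 million infections, or about 30% of the population; the point is only that the estimate is extremely sensitive to the IFR one assumes.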

Quite apart from doubts about the reliability of the science, or nonscience, guiding government, there is the science they should have done during all these months. For example, two comparable towns should have been chosen in the summer, one where pubs were shut and one where pubs, following all Covid protection measures, stayed open. Is the hospitality sector correct to claim that pubs following the guidance are safer than closing them and allowing unruly mixing elsewhere? Test and trace should have been used to analyse the days on which the over-70s must have acquired their infections. After all, they are the ones most at risk. Deaths purely from Covid-19 should have been analysed separately, as there is some indication that the severity of the disease is linked to the extent of exposure to carriers, e.g. hospital nurse deaths. 
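As a rough sketch of how such a two-town comparison might be judged (the case counts are invented, and the standard two-proportion test used here is my own illustration, not anything proposed above):

import math

# Hypothetical counts: Town A keeps Covid-secure pubs open, Town B closes them.
cases_a, tested_a = 120, 10_000
cases_b, tested_b = 150, 10_000

p_a, p_b = cases_a / tested_a, cases_b / tested_b
p_pool = (cases_a + cases_b) / (tested_a + tested_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / tested_a + 1 / tested_b))
z = (p_a - p_b) / se
# Two-sided p-value from the normal approximation
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"Positivity: {p_a:.2%} vs {p_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")

A real trial would of course need matched towns, adequate sample sizes and adjustment for other differences, but even a comparison this crude would be evidence rather than conjecture.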

The House of Commons debated the revised tier system on 1st December.  Few ministers made more than token appearances. They would not have enjoyed hearing MP after MP castigate the Government for the lack of evidence in support of its proposals and the lack of logic in the tier boundaries.  The Prime Minister insisted that county boundaries must be used, and then put Slough in tier 3 with the rest of Berkshire in tier 2. The economic impact assessment was described as a cut-and-paste job, not worth the paper it was written on.  

MPs are entitled, even more than the rest of us, to a clear exposition of the evidence, both scientific and economic.  They should be able to specify revisions and what further evidence is needed before decisions are made.  They need to distinguish common sense, which we should all accept, from uncertainties where evidential, quality science is required.  And they should stop being guided by nonscience.  To do that, MPs will need to hear from peer reviewers, and the Government needs to listen to the House. 

 

[1]  Box, G. E. P. (1979), "Robustness in the strategy of scientific model building", in Launer, R. L. and Wilkinson, G. N. (eds.), Robustness in Statistics, Academic Press, pp. 201–236.