No model answers for this COVID-19 crisis
A forecasting model is an opinion column with more maths. Its purpose is to spark action, not necessarily to be right.
Will we ever trust experts again? The models that predicted the spread of the coronavirus have been under fire for months. Now the economic forecasts that were based on those models have highlighted exactly how out of whack the epidemiological predictions were. What will happen if we lose faith in modelling?
Saving $60 billion should never feel like an anticlimax, but here we are in 2020, supposedly in the middle of a once-in-a-century pandemic, with depression-style dole queues preparing us for a $130 billion economic rescue package, and it turns out that, well, it might not be half so bad.
At a micro level, the shortfall is due to stressed small business owners making mistakes when they registered their intention to claim JobKeeper. But the reason their cumulative mistakes escaped notice until now is that, by some fluke, the incorrect numbers more or less matched up to Treasury forecasts.
And those forecasts were wrong, because the health models on which they were based were also wrong. Instead of 50,000 to 150,000 virus deaths, we have now had 102. Australia has had just over 6500 confirmed cases of COVID-19, and hospitalised cases never got anywhere close to filling Australia’s intensive care unit capacity.
Many fear that this “crushed” curve points to a dramatic and costly overreaction to the threat of the virus, which had already started to flatten in mid-March, after border closures and social distancing recommendations, but before schools and workplaces were shut down.
That would make the hit the economy has taken unnecessary – and at $4 billion in lost productivity a week, as well as the $70 billion to support Australia’s “hibernation”, that would be quite a blunder.
The problem isn’t Australia’s alone: around the world the spotlight is turning on the experts and their models.
Imperial College London epidemiologist Neil Ferguson has become notorious since the global lockdowns – latterly for a salacious breach of social isolation that saw him resign, but before that for modelling published in mid-March predicting half a million Britons would die unless the government introduced strict isolation measures.
The British government changed course and locked the country down, whereupon Ferguson revised his estimate to 20,000 dead. The total now stands at almost 37,000, but Ferguson has since been the subject of savage criticism: his 13-year-old model has been found to be “a buggy mess” and his past predictions alarmist.
An international group of data scientists led by the University of Sydney’s Centre for Translational Data Science has also pointed to what it considers flaws in a model developed by the Institute for Health Metrics and Evaluation at the University of Washington, arguing that it “substantially underestimates the uncertainty associated with COVID-19 deaths … casting doubt on whether the model is suitable to inform COVID-19 resource allocation”.
Here in Australia, the Peter Doherty Institute for Infection and Immunity supplied modelling that informed the federal government’s course of action – initially we were told to prepare for a shutdown lasting up to six months – as well as that Treasury forecast which has demonstrated how the assumptions of modelling can cascade.
So, have these models all failed? Well, not exactly. The job of a model isn’t to be right. It is to spark action.
As an economist friend of mine, who’s now far too important to be held accountable for his youthful candour, once confessed, “You can make a model to prove anything … Not being able to prove things they know are true makes people very unhappy.”
Or as one author in The Atlantic wrote in defence of Ferguson and Imperial College, “Right answers are not what epidemiological models are for.”
The job of a model, in that telling, is to make a persuasive case for action that the modeller believes to be right. It is, if you like, an opinion article that uses maths.
The more complex the model and the more unknowns it contains – for instance a model of a new disease we know very little about moving through a society in which people behave with typically human unpredictability to the spectrum of inputs they are exposed to – the more subjective the assumptions that underpin it become.
So while the concept of the Reff – the effective reproduction rate, which indicates whether each infected person passes the disease on to fewer than one other (ideal) or more than one other (eventually potentially fatal) – is simple, timely modelling of how a new virus will travel and mutate, and whom it will prove fatal to, is very tricky indeed.
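The arithmetic behind the Reff really is that simple. A minimal sketch (illustrative only, not drawn from any of the models discussed here, and with hypothetical figures) shows why everything hinges on whether the number is above or below one: each generation of infections is just the previous generation multiplied by the Reff.

```python
def project_cases(initial_cases: float, reff: float, generations: int) -> list:
    """Project infections per generation under a constant effective
    reproduction rate (Reff). Each generation of cases is the previous
    generation multiplied by Reff, so the outbreak grows when Reff > 1
    and fades when Reff < 1."""
    cases = [float(initial_cases)]
    for _ in range(generations):
        cases.append(cases[-1] * reff)
    return cases

# Hypothetical numbers: 100 starting cases over four generations.
growing = project_cases(100, 2.5, 4)    # Reff > 1: 100 becomes 3906.25
shrinking = project_cases(100, 0.8, 4)  # Reff < 1: 100 dwindles to ~41
```

Everything hard about epidemic modelling lives in estimating that one multiplier for a new virus in a real population; the projection itself is trivial once the Reff is assumed.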
As my economist friend instructed his audience many moons ago, “There’s an infinity of ways to re-specify the problem, dozens of commonly used estimation techniques, each of which gives different answers. If everything else fails, just use different data, or make up an excuse to exclude part of your data.”
Of course not every modeller is as dastardly as my friend. Most are trying their best to demonstrate what might happen if a particular course of action is pursued. But in the face of a big bungle, the public has the right to demand that from now on they show us their working.
Source: Parnell Palme McGuinness