Who Got the Pandemic Predictions Right Last April?


A just-released study in PLOS One by mathematicians at Cambridge took a look at who was best at predicting pandemic numbers back at the start. For me it's one of those "why didn't I think of that" studies.

One year ago, the researchers surveyed three groups of people in the UK, asking them to make predictions about the pandemic: 140 experts (epidemiologists, statisticians, mathematical modelers, virologists, and clinicians) and 2,086 laypersons. The laypersons were split into high-numeracy non-experts (good with numbers) and ordinary non-experts, based on a numeracy test administered as part of the survey.

They asked each group to predict how many deaths and infections there would be by December 2020, both as total numbers and as rates per 1,000 people.

Here is a chart I made based on their Table 1. It shows the total numbers predicted for each of the three groups, as well as the actual number in December: 


Please note that I plotted infections in hundreds because the raw infection numbers are so much larger than deaths that they would have wrecked the chart's scale. So, for example, don't be tempted to conclude that non-experts predicted more deaths than infections. Their actual infection prediction is 100 times the value shown in the bar.
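To make the scaling explicit, here is a minimal sketch of how a chart like this can be built. Every number below is a made-up placeholder, not a value from the study (those are in their Table 1).

```python
# Minimal sketch of a grouped bar chart with infections rescaled to hundreds
# so deaths stay visible on the same axis. All values here are hypothetical
# placeholders, not the figures from the study.
import matplotlib.pyplot as plt
import numpy as np

groups = ["Experts", "High-numeracy", "Non-experts", "Actual"]
deaths = np.array([30_000, 26_000, 20_000, 77_000])      # placeholder totals
infections = np.array([1.4e6, 0.5e6, 0.1e6, 2.3e6])      # placeholder totals

x = np.arange(len(groups))
width = 0.35
plt.bar(x - width / 2, deaths, width, label="Deaths")
plt.bar(x + width / 2, infections / 100, width, label="Infections (100s)")
plt.xticks(x, groups)
plt.ylabel("Count (infections shown in hundreds)")
plt.legend()
plt.show()
```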

One thing to notice is that all three groups vastly underestimated the actual numbers.  Experts did the best, but they too were consistently low.  

For infections, experts underestimated by almost 40%. But the non-experts were way off, by almost 80% for the high-numeracy group and more than 95% for the ordinary non-experts.

Everybody did better on deaths, but still underestimated the actuals.  Experts were 61% under actuals, high-numeracy non-experts 66% under, and non-experts 74% under.  
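To be explicit about what "percent under" means here: it is the shortfall relative to the actual value. A quick sketch with placeholder numbers, not the study's:

```python
# "Percent under" = shortfall relative to the actual value.
# The inputs below are hypothetical, chosen only to land near the
# experts' roughly-61%-under figure for deaths.
def percent_under(predicted: float, actual: float) -> float:
    return 100 * (actual - predicted) / actual

actual_deaths = 75_000      # hypothetical actual total
expert_prediction = 29_000  # hypothetical predicted total

print(f"{percent_under(expert_prediction, actual_deaths):.0f}% under")  # -> 61% under
```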

So in terms of raw numbers, everybody missed the actuals by a lot. They all thought the pandemic would be both less infectious and less deadly than it was.

Here is the same chart for predicted and actual rates per 1000 people:

Completely different picture here. On infections, the experts got within 20% of the actual rate. Both non-expert groups wildly overestimated: the high-numeracy non-experts predicted about 2.5 times the actual rate, and the ordinary non-experts about 3.3 times.

Everybody overestimated the death rate too. Experts were over by about a factor of two, high-numeracy non-experts by about six times, and ordinary non-experts by about ten times.
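Similarly, "over by about N times" is just the ratio of the predicted rate to the actual rate. Another sketch, with hypothetical rates per 1,000:

```python
# Overestimation factor = predicted rate / actual rate.
# Rates are per 1,000 people; all values hypothetical, not the study's.
predicted_death_rates = {"experts": 2.2, "high-numeracy": 6.6, "non-experts": 11.0}
actual_death_rate = 1.1

for group, rate in predicted_death_rates.items():
    print(f"{group}: about {rate / actual_death_rate:.0f}x the actual rate")
```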

So, in contrast to the raw numbers where everybody underestimated, here almost everybody overestimated, and by a lot.

The study also looked at the confidence of these groups in their predictions. They did this in a complicated way that I will leave you to read about if you're interested.  But the upshot is that the experts were overconfident, and the non-experts were really overconfident.

Something that piques my curiosity is why everybody underestimates total numbers but overestimates rates. Unfortunately the authors don't address this. There is probably research out there on how people make such judgments. My guess is that the experts routinely think about problems like this in terms of rates, so they are better at it, whereas the non-experts don't and are worse.
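One way to make the tension concrete: a rate per 1,000 implies a total once you multiply by the population, so a respondent who underestimates the total while overestimating the rate is being internally inconsistent. A rough sketch, assuming a UK population of about 67 million and hypothetical predictions:

```python
# A rate per 1,000 people implies a total: total = rate / 1,000 * population.
# The population figure and both predictions are my assumptions, not the study's data.
UK_POPULATION = 67_000_000

def implied_total(rate_per_1000: float, population: int = UK_POPULATION) -> float:
    return rate_per_1000 / 1000 * population

predicted_total = 30_000  # hypothetical: a respondent underestimates total deaths
predicted_rate = 2.0      # hypothetical: the same respondent overestimates the rate

print(f"{implied_total(predicted_rate):,.0f}")  # -> 134,000, far above 30,000
# The rate answer implies a total more than four times the directly predicted
# total: the same totals-low / rates-high pattern seen in the survey data.
```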

This would be good to unpack. It has implications for how public health people communicate to the general public about the threat posed by pandemics.

But the bottom line is this: nobody, not even the experts, thought it would get as bad as it did in total numbers, and everybody was overconfident that they were right.
