Is the Greater Manchester lockdown based on poor data analysis?
The problem with data is that it is only useful if it is interpreted properly. It is easy to acquire data, but much harder to analyse and interpret it correctly.
The Greater Manchester lockdown announced on Friday may be another example of badly interpreted data.
Prof Carl Heneghan of the Centre for Evidence-Based Medicine at Oxford has looked at the data again.
He makes two important points:
Because of the delays that can occur in test reporting (it can take 2 days to get your result), it is much more accurate to look at ‘specimen-date data’ rather than ‘result-date data’ when comparing over time
If you run more tests, you get more positive test results – so this needs to be factored in when looking at increases
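The first point can be sketched with a toy example (the figures below are hypothetical, chosen only to mimic the pattern the article describes, not the real Manchester data). Suppose cases are falling by specimen date, but a backlog of week-1 results is cleared during week 2: counted by result date, the numbers appear to rise.

```python
# Toy illustration of specimen-date vs result-date counting.
# All figures are hypothetical, not the actual Manchester data.

# Positives by the date the swab was taken: cases are clearly falling.
by_specimen_date = {
    "week1": [70, 68, 66, 64, 62, 60, 58],  # total 448
    "week2": [50, 48, 46, 44, 42, 40, 38],  # total 308
}

# Positives by the date the result was reported: some week-1 swabs are
# still pending at the end of week 1, and that backlog is cleared during
# week 2, inflating week 2's reported counts.
by_result_date = {
    "week1": [50, 49, 48, 47, 46, 45, 44],  # total 329 (119 swabs pending)
    "week2": [56, 55, 54, 53, 52, 51, 50],  # total 371 (backlog included)
}

def weekly_change(series):
    """Percentage change from the week-1 total to the week-2 total."""
    a, b = sum(series["week1"]), sum(series["week2"])
    return 100 * (b - a) / a

print(f"specimen-date change: {weekly_change(by_specimen_date):+.1f}%")
print(f"result-date change:   {weekly_change(by_result_date):+.1f}%")
```

The same underlying epidemic produces a decrease on one counting convention and an increase on the other, which is exactly why comparing result-date totals over a short window can mislead.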
So, the Professor has re-analysed the data in order to eliminate these effects, and arrives at some interesting results. Between 22 July and 29 July, if you look at the 7 day rolling average of ‘result-date data’, there is indeed an increase of 16.7%. But if you look at the ‘specimen-date data’, it shows a 31% decrease.
Furthermore, he looked at the quantity of tests run over the same period and found that this increased by 11.5%. A rise of that size would, on its own, account for a comparable increase in positive cases.
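The second effect is simple arithmetic, and a quick sketch makes it concrete (the test counts and 2% positivity rate below are made-up numbers; only the 11.5% rise in testing comes from the article). If the underlying prevalence is unchanged, running 11.5% more tests yields 11.5% more positives:

```python
# Toy figures: the 11.5% rise in testing is from the article; the
# absolute test counts and 2% positivity rate are hypothetical.
week1_tests, week1_positives = 10_000, 200   # 2.0% positive
week2_tests = int(week1_tests * 1.115)       # 11.5% more tests run
week2_positives = round(week2_tests * 0.02)  # prevalence unchanged

raw_rise = 100 * (week2_positives - week1_positives) / week1_positives
rate1 = 100 * week1_positives / week1_tests
rate2 = 100 * week2_positives / week2_tests

print(f"raw positives rise: {raw_rise:+.1f}%")           # +11.5% "surge"
print(f"positivity rate: {rate1:.2f}% -> {rate2:.2f}%")  # flat: 2.00% -> 2.00%
```

The raw count rises even though nothing about the epidemic has changed, which is why the positivity rate (positives per test) is a better comparison than raw case counts when testing volume varies.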
These conclusions are pretty astounding. His analysis suggests that the underlying trend is a decrease in cases, and that the apparent rise in positive tests is likely to have been driven largely by the increase in testing.
This is a cautionary reminder that having access to data is all well and good, but you need to spend time understanding it, and asking the right questions of it, before drawing conclusions and making decisions with far-reaching consequences.