The thrust of my recent post, If only I wasn’t a commitment phobe, was that the science and its underlying data are insufficiently developed or clear to support the levels of certainty professed on all sides of a somewhat tribalist dialogue of the deaf within left and libertarian camps.1
My narrow focus on “the science” rightly drew a politically contextualising response, below the line, from Dave Hansell. I deem it too significant to remain tucked away in the comments section and am promoting it here as a guest post. I have taken the liberty of light-touch editing on two fronts: one is to break long paragraphs into shorter ones, the other to highlight passages I see as especially significant. Other than that, this is pure Dave Hansell.2
- One example of this insufficiency is a dearth of peer-reviewed papers. Peer review is useful but has its downsides and, especially but not exclusively at times of paradigm shift, can be reactionary and even corrupt. We are not experiencing a paradigm shift in the science (though we may be in the early stages of one in politics) like that from Newtonian to particle physics but, as with any pandemic, including 1918, the learning curve is steep and fast as new data come in by the hour. As a small but significant example, many of the papers I read (mostly skim-read) on false negatives and false positives in CV19 testing were “awaiting” peer review. That’s hardly surprising. At best – and as a former academic I saw how venal and ego-driven this business gets – peer review is a flawed process. Here we’re nowhere near ‘at best’. Peer reviewers, good and bad, will be in no rush to commit to ‘blessing’ (an analogy drawn in this appraisal of peer review) papers whose conclusions draw on imperfect and incomplete data which may be eclipsed the next day, making fools of them.
- I’m also grateful to Dave for a link to this Inet Oxford paper on comparative death rates in England. Statisticians will look with a keener eye on Z-scores, set out in these Euromomo maps and graphs. But the Inet paper prefers P-scores since “Z-score is less accessible … [it] measures excess deaths (i.e. actual minus ‘normal’ deaths) as a ratio to a standard deviation of deaths [while] P-score measures excess deaths as a ratio to ‘normal’ deaths. As death count and standard deviation of deaths are not published at country level it is harder to interpret social and economic implications of Z-scores. P-scores accompanied by graphics … are more salient and interpretable.”
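The two measures quoted above can be sketched in a few lines of Python. The figures below are invented purely for illustration; they are not drawn from the Inet Oxford or Euromomo data.

```python
def z_score(actual: float, normal: float, sd: float) -> float:
    """Excess deaths (actual minus 'normal') as a ratio to the
    standard deviation of deaths."""
    return (actual - normal) / sd

def p_score(actual: float, normal: float) -> float:
    """Excess deaths as a ratio to 'normal' (baseline) deaths."""
    return (actual - normal) / normal

# Illustrative weekly figures (made up for this sketch):
actual_deaths = 22_000  # deaths observed in a given week
normal_deaths = 10_000  # baseline deaths expected for that week
sd_deaths = 1_500       # standard deviation of weekly deaths

print(f"Z-score: {z_score(actual_deaths, normal_deaths, sd_deaths):.1f}")
print(f"P-score: {p_score(actual_deaths, normal_deaths):.0%}")
```

With these made-up numbers the Z-score is 8.0 (excess deaths are eight standard deviations above baseline) while the P-score is 120% (deaths are 120% above normal); the P-score is arguably the easier of the two to read without knowing the standard deviation, which is the paper’s point.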