By William Briggs
How can you tell if a vaccine for a bug is effective? It’s not so easy; indeed, it can be excruciatingly difficult.
At the individual person level you’d need to measure all kinds of things, like the level of antibodies and other immune cells present before vaccination, and then again after and through time.
Then you’d demonstrate, in that person, the exact mechanism by which the vaccine was able to boost immunity, and whether this boost was sufficient to quell the infection, by looking at severity of illness (due to the bug and other existing conditions), how long it took for the infection to abate, and things like that. And that is only a hint of the complexities.
The analysis is made harder because the vaccinated person may never come into contact with the virus. People he meets may have already had prior infections, and so are now mostly or completely immune. Or those people may themselves have had a vaccine that was effective to varying degrees.
As difficult as all that sounds, it is not impossible in highly controlled circumstances to discover the extent and to quantify vaccine effectiveness. But it is a slow and painstaking process.
One way you cannot learn, not with anything approaching certainty, is by looking at group-level comparisons, where people are not individually counted and compared, but where averages across groups are contrasted, and where you have no idea what the status of any individual is.
This is a popular kind of analysis because it’s cheap and easy. But it can, and often does, lead to huge over-certainties.
A prime example is the paper “Global impact of the first year of COVID-19 vaccination: a mathematical modelling study” by Oliver J Watson, Gregory Barnsley, Jaspreet Toor, Alexandra B Hogan, Peter Winskill, and Azra C Ghani, in The Lancet Infectious Diseases.
They used a “mathematical model of COVID-19 transmission and vaccination” for both “reported COVID-19 mortality and all-cause excess mortality in 185 countries and territories” to assess vaccine efficacy in preventing deaths. This is as group-level an analysis as they come, especially with its “excess” deaths portion.
What are “excess” deaths? Deaths different from those predicted by a model. They can be positive (more deaths than the model predicted) or negative (fewer deaths than the model predicted). What it means here is that Watson and his co-authors used, as input to their model, the output from another model; what that means I’ll explain in a moment.
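To make the definition concrete, here is a minimal sketch, with all numbers invented purely for illustration (they are not from the paper or any real dataset):

```python
# Illustrative only: all numbers are invented, not real data.
# "Excess" deaths = observed deaths minus deaths predicted by a baseline model.

observed  = [52_000, 61_000, 48_000]   # hypothetical weekly observed deaths
predicted = [50_000, 50_500, 49_000]   # hypothetical model-predicted baseline

excess = [o - p for o, p in zip(observed, predicted)]
print(excess)  # [2000, 10500, -1000]
# Positive: more deaths than the model predicted; negative: fewer.
```

Notice the “excess” is defined entirely relative to a model’s prediction: change the baseline model and the excess changes with it.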
Let’s first look at their transmission model.
In it, “Vaccination was assumed to confer protection against SARS-CoV-2 infection and the development of severe disease requiring hospital admission, and to reduce transmission from vaccine breakthrough infections”. Incidentally, “breakthrough infection” is a term of incredulity: it assumes vaccines work and that, somehow, bugs are able to bypass them sometimes.
In other words, their model was told that covid vaccination worked. The model was told that the vax blocked infection and prevented severe disease, including death, and the model was told that infections were harder to pass on in the vaccinated.
The only thing this model can “discover”, therefore, is that the covid vaccine works. It could do nothing else.
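The circularity can be sketched in a few lines. This is a hypothetical toy, not the paper’s actual model, and every number in it is made up; the point is only that when assumed efficacy is an input, the “finding” is that input echoed back:

```python
# Hypothetical toy model: a model *told* that the vaccine works can only
# "discover" that the vaccine works. All numbers here are invented.

def lives_saved(baseline_deaths, coverage, assumed_efficacy):
    # assumed_efficacy is fed in as an assumption, not inferred from data
    return baseline_deaths * coverage * assumed_efficacy

baseline = 1_000_000   # hypothetical counterfactual deaths without vaccination
coverage = 0.6         # hypothetical share of the population vaccinated

for eff in (0.0, 0.5, 0.9):
    print(eff, lives_saved(baseline, coverage, eff))
# The output scales directly with the efficacy assumption fed in.
```

Whatever efficacy you assume, out comes a proportional number of “lives saved”; the data never get a chance to say otherwise.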
Now about the data they used: “The first vaccination outside a clinical trial setting was given on Dec 8, 2020. We introduced vaccination from this point onwards in the model and explored the impact of the first year of vaccination up to Dec 8, 2021.”
Vaccinations did not begin in earnest, however, until spring of 2021, with a peak number of shots in April 2021. The bug, and its early mutations, had been circulating for at least a year by that point. This means many people were already infected by the spring of 2021. And many (the CDC estimated about 15%) never even knew they were infected.
Therefore the biggest flaw in this, and many other analyses, is not accounting for prior infection. This is an immense problem because we cannot tell by looking at group-level data whether changes in infections and illness (including death) were caused by prior unnoticed infections or by vaccinations.
Because of the panic, the majority of people who sought vaccination never had their background covid antibodies checked. After all, you could get the vaccine at the local drug store. It’s possible some of those with prior infections and who got the vax had superior immunity to those without either, but it’s also possible for some the addition of the vax on top of prior infection did nothing.
If this is confusing, what it means is that this, or any, analysis that does not account for previous infection will overestimate, perhaps by a lot, vaccine efficacy, because it will credit the vax for deaths that were prevented by previous infection. There is also the possibility that benefits from other “unapproved” treatments, such as vitamin D and ivermectin, which many took in 2021, would likewise be credited to the vax.
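A toy sketch of the confounding, with invented numbers: suppose some share of the “vaccinated” group was already immune from unnoticed prior infection, and that prior immunity alone would have prevented the same deaths. A group-level analysis hands all the credit to the vax:

```python
# Invented numbers, for illustration of confounding only.
prevented_total     = 10_000   # hypothetical deaths prevented in the vaccinated group
prior_immune_share  = 0.40     # hypothetical share already immune from prior infection

# What a naive group-level analysis reports: all prevention credited to the vax.
credited_to_vax = prevented_total

# If prior immunity alone accounts for its share, the vax could claim at most:
from_prior_infection = prevented_total * prior_immune_share
at_most_from_vax     = prevented_total - from_prior_infection

print(credited_to_vax, at_most_from_vax)  # 10000 vs 6000.0: efficacy overstated
```

Without measuring prior infection at the individual level, there is no way to know how large `prior_immune_share` really was, which is exactly the problem.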
The authors then fit their same model to “excess” deaths. There are two major flaws with this.
The first is assuming all “excess” deaths were due to or related to covid. This is almost certainly false, as I analyze here. There were still large numbers of deaths above what was usually expected (that “usually expected” is a model) after subtracting covid deaths, especially early in the panic.
These could be caused by aggressive over-treatment for covid, lockdowns, and people too scared to have other life-threatening conditions checked out, especially in 2020 when the panic was at its peak.
The initial frenzied panic subsided somewhat in 2021, which means many “excess” deaths decreased merely from lessened panic. This, too, is credited to the vax in their model.
The second mistake, even allowing that all “excess” deaths were indeed covid related, is that the uncertainty in the estimates of “excess” deaths—there is a plus-or-minus for these—was not incorporated into the estimated number of lives saved by the vax. This means that their own plus-or-minus windows for lives saved will be far too narrow.
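This second point can be sketched with a small Monte Carlo simulation. All numbers are hypothetical (the paper’s actual figures and methods are not reproduced here); the sketch only shows that carrying the plus-or-minus on “excess” deaths through the calculation widens the interval on lives saved, while treating the excess estimate as exact narrows it artificially:

```python
# Hypothetical sketch: propagating uncertainty in "excess" deaths.
# Suppose excess deaths are estimated as 100,000 with a standard deviation
# of 30,000, and "lives saved" is computed as a fixed fraction of excess.
import random

random.seed(1)
fraction_saved = 0.5  # invented fraction, for illustration

# Ignoring the uncertainty: a single, falsely precise point estimate.
point = 100_000 * fraction_saved

# Propagating it: sample the excess-death estimate and carry the spread through.
samples = sorted(random.gauss(100_000, 30_000) * fraction_saved
                 for _ in range(100_000))
lo, hi = samples[2_500], samples[97_500]  # central 95% interval

print(point)                  # 50000.0: looks exact
print(round(lo), round(hi))   # roughly 20,000 to 80,000: the honest spread
```

The honest interval is several times wider than the false precision of the point estimate, and any real analysis compounds this across every uncertain input.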
None of these criticisms mean that the vaccine did not offer some protection in some people, while also conferring some risk of injury from the vaccination itself. My point is that there are massive uncertainties that were not accounted for, leading to an exaggerated view of the vaccine’s efficacy.
A further uncertainty is the global nature of this analysis. Countries varied widely in their behavior during the panic. Vaccines differed. Background health differed. Official policy differed. Really, most things of importance differed. Numbers are all over the map. A gross country-by-country analysis leaves far too much uncertainty unaccounted for.
Especially because of the blind panic, and the failure to note prior infection, it will forever be difficult, or even impossible, to know just how efficacious the vaccine was in 2021.
I am a wholly independent writer, statistician, scientist and consultant. Previously a Professor at the Cornell Medical School, a Statistician at DoubleClick in its infancy, a Meteorologist with the National Weather Service, and an Electronic Cryptologist with the US Air Force (the only title I ever cared for was Staff Sergeant Briggs).
My PhD is in Mathematical Statistics: I am now an Uncertainty Philosopher, Epistemologist, Probability Puzzler, and Unmasker of Over-Certainty. My MS is in Atmospheric Physics, and my Bachelor's is in Meteorology & Math.
Author of Uncertainty: The Soul of Modeling, Probability & Statistics, a book which calls for a complete and fundamental change in the philosophy and practice of probability & statistics; author of two other books and dozens of works in fields of statistics, medicine, philosophy, meteorology and climatology, solar physics, and energy use appearing in both professional and popular outlets. Full CV (pdf updated rarely).