Does the 2016 CDC Vaccine Safety Datalink (VSD) study actually prove vaccines are safe?
Cross-verifying with VAERS and then comparing against the Qatar and New Zealand studies suggests the VSD study is not of very high quality
I just read this article by Steve Kirsch, which mentioned a paper published by the CDC in 2016 that supposedly established vaccine safety.
It would probably help to at least scan his article first (because mine is complementary to what Steve has written).
The CDC paper uses the Vaccine Safety Datalink (VSD).
This is the dataset they used for the analysis:
From January 1, 2005, to December 31, 2011, there were 1100 deaths identified within 12 months after any vaccination among 2 189 504 VSD enrollees 9 to 26 years of age. Of the deaths identified, the mean number of days between vaccination and death was 179; only 76 deaths (7%) occurred 0 to 30 days after vaccination.
These are all-cause deaths, meaning every death recorded within the 30-day risk window was considered for the analysis, regardless of cause.
Cross verifying VSD using VAERS data
In the paper, they show this table for deaths after the HPV vaccine.
Here 4vHPV refers to the quadrivalent HPV vaccine, which is recorded in the VAERS VAX CSV file as HPV4.
Later in the paper, they make this claim:
Recently, deaths immediately after 4vHPV vaccination have garnered intense media attention. Often, these media stories do not take into account the background rates of death in older children and young adults or disclose the potential for non–vaccine-related causes of death. In our study, 13 deaths were identified after 4vHPV vaccine among individuals 9 to 26 years of age within the 0- to 30-day risk window, a rate of 11.7 deaths per 100 000 person-years. This is significantly lower than what would be expected in this age group regardless of vaccination. The National Center for Health Statistics found the 2011 death rate for all causes among persons 15 to 24 years to be 67.6 deaths per 100 000 people.
Just to explain this: they found 13 deaths (from all causes) among 1,355,535 people in that age group[1] within a 30-day window of the HPV vaccine. 13 deaths in a 30-day window translates to ~158 deaths over a 365-day (i.e. 1-year) period, and 158 deaths / 1.35 million people = 11.7 deaths per 100,000 person-years.
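For anyone who wants to check the arithmetic, here it is spelled out as a short Python calculation; the inputs are the paper’s own numbers, and the annualize-then-divide framing simply mirrors the explanation above:

```python
# Reproduce the paper's 11.7 deaths per 100K person-years figure
deaths_30d = 13            # all-cause deaths within 0-30 days of 4vHPV (from the paper)
population = 1_355_535     # 4vHPV recipients aged 9-26 in VSD (from the paper)

annualized_deaths = deaths_30d * 365 / 30                      # ~158 deaths per year-equivalent
rate_per_100k_py = annualized_deaths / population * 100_000    # ~11.7
print(round(annualized_deaths), round(rate_per_100k_py, 1))
```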
They compare this with a cohort of 15-24 year olds which supposedly already had a much higher rate (67.6 deaths per 100K people during 2011) to show that the VSD rate is “significantly lower”.
But the comparison does not make any sense at all, for these reasons:
Not everyone who died was in the VSD system
This was actually very easy to verify using VAERS.
I first ran a query to get the list of VAERS reports for people aged 9 to 26, vaccinated between 2005 and 2011 with the HPV4 vaccine, and with DIED=Yes.
The first column diff is a computed column which calculates the difference between VAX_DATE and DATEDIED in days. If either of these columns is empty for a given report, then the value of “diff” will be empty.
Let us filter the column “diff” for all reports where the patient died within 30 days.
You can see that there are already 24 such reports. Notice that I have also filtered for the RECVDATE to be earlier than 1 Jan 2016, since the paper was published in early 2016, just to make the comparison more fair.
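For those who want to reproduce this, here is a rough sketch of the query using pandas. The column names (VAERS_ID, AGE_YRS, DIED, DATEDIED, VAX_DATE, RECVDATE, VAX_TYPE) are the standard VAERS fields, but the file path is a placeholder for wherever you have downloaded the yearly VAERS CSVs, and the details of my filtering may differ slightly from the exact query I ran:

```python
import glob
import pandas as pd

# Load the yearly VAERS DATA and VAX files (the "vaers/" path is a placeholder)
data = pd.concat(pd.read_csv(f, encoding="latin-1", low_memory=False)
                 for f in glob.glob("vaers/*VAERSDATA.csv"))
vax = pd.concat(pd.read_csv(f, encoding="latin-1", low_memory=False)
                for f in glob.glob("vaers/*VAERSVAX.csv"))

# Join reports to the vaccines they list and keep HPV4 only
df = data.merge(vax[["VAERS_ID", "VAX_TYPE"]], on="VAERS_ID")
df = df[df["VAX_TYPE"] == "HPV4"]

# Parse the dates we need
for col in ("VAX_DATE", "DATEDIED", "RECVDATE"):
    df[col] = pd.to_datetime(df[col], errors="coerce")

# Ages 9-26, vaccinated 2005-2011, DIED=Yes, report received before 1 Jan 2016
df = df[df["AGE_YRS"].between(9, 26)
        & df["VAX_DATE"].dt.year.between(2005, 2011)
        & (df["DIED"] == "Y")
        & (df["RECVDATE"] < "2016-01-01")]

# Computed "diff" column: days between vaccination and death
# (empty if either date is missing, as described above)
df["diff"] = (df["DATEDIED"] - df["VAX_DATE"]).dt.days

# Reports where the patient died within 30 days of vaccination
print(df[df["diff"].between(0, 30)]["VAERS_ID"].nunique())
```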
The question is: why did the Vaccine Safety Datalink find only 13 deaths, which is just about half of the deaths within the first 30 days that are already in VAERS?
As you might imagine, the primary reason for this is that the VSD is not the full dataset for the entire US. In fact this is already mentioned in the paper (emphasis mine).
The VSD is a collaborative project between the Centers for Disease Control and Prevention (CDC) and several integrated health care systems (sites), which monitors the safety of vaccines in the United States. The VSD captures comprehensive medical and vaccination histories for >9 million people annually, ∼3% of the US population. The VSD uses electronic medical records and other administrative sources at each site to gather data on enrollees including demographics, vaccinations, and medical outcomes, including deaths.
If that is true, then why do they compare the deaths-per-person-year rate from VSD to overall population-level all-cause deaths? This comparison makes no sense at all and should have been caught during peer review[2].
VSD is not representative of the population
They write this in the paper:
We found only 1 death on the day of vaccination, which was not related to syncope; therefore, no separate analyses on syncope-related deaths were conducted. We found no significant interaction for age, gender, site, or number of vaccines received.
Remember that this is talking about all the vaccines, not just the HPV4 vaccine.
But this does not match what is in VAERS.
I constructed a dataset for all deaths after any vaccine for the age group 9 to 26 between the years 2005 to 2011.
As before, the “diff” column represents the difference between the vaccination date and the date of death.
There are already 3 deaths in VAERS where the date of death is the same as the vaccination date, and the cause was not suicide.
So VSD is certainly not representative of the US population, and seems to understate the real danger of the vaccines.
In fact, if we use the more rigorous days-to-symptom-onset instead of days-to-death, there are 6 deaths in the 9-to-26 age group where symptom onset was on the same day as vaccination, the cause of death was not suicide, and the patient died within the same month.
So either VSD is intentionally omitting these deaths from their system (less likely) or it is not really representative of the US population (more likely).
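Here is a similar sketch for these two counts; it reuses the same placeholder path as the previous snippet, NUMDAYS is the standard VAERS field for days from vaccination to symptom onset, and the free-text search for “suicide” is only my rough stand-in for the manual cause-of-death check:

```python
import glob
import pandas as pd

# Reload the VAERS DATA files (all vaccines this time, so the VAX file is not needed)
data = pd.concat(pd.read_csv(f, encoding="latin-1", low_memory=False)
                 for f in glob.glob("vaers/*VAERSDATA.csv"))

for col in ("VAX_DATE", "DATEDIED"):
    data[col] = pd.to_datetime(data[col], errors="coerce")

# Ages 9-26, vaccinated 2005-2011, DIED=Yes
deaths = data[data["AGE_YRS"].between(9, 26)
              & data["VAX_DATE"].dt.year.between(2005, 2011)
              & (data["DIED"] == "Y")].copy()

# "diff" = days from vaccination to death
deaths["diff"] = (deaths["DATEDIED"] - deaths["VAX_DATE"]).dt.days

# Rough stand-in for "cause was not suicide": drop reports whose free text mentions suicide
not_suicide = ~deaths["SYMPTOM_TEXT"].str.contains("suicide", case=False, na=False)

# Deaths on the day of vaccination
print(len(deaths[(deaths["diff"] == 0) & not_suicide]))

# Variant: symptom onset on the day of vaccination, death within roughly a month
print(len(deaths[(deaths["NUMDAYS"] == 0) & deaths["diff"].between(0, 30) & not_suicide]))
```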
Shortening the 30-day “risk window” drastically changes the result
Here is the reasoning provided in the paper for the 30-day risk window as well as the reason for not choosing a different duration (emphasis mine):
In the analytic dataset, we included deaths of individuals who had received ≥1 vaccination in the year before death. For each death, the observed vaccination status during the prespecified risk window of 0 to 30 days before death was determined. We chose a 0- to 30-day risk window to include causes of death that would be biologically plausible with regard to vaccination; however, we also conducted a cluster analysis to identify any shorter or longer windows of interest using a scan statistic software program, Satscan.
In other words, the authors claim they did look at shorter and longer windows of interest using a scan statistic software program, but settled on the 30-day window. They haven’t published the results of this analysis (even in summary form).
Until they publish these results, I don’t see a good reason to take their word for it. In fact, if you compare it against what is in VAERS, I don’t think their assumption holds up very well[3].
In my previous article, I wrote this:
Two, for an analysis like this to be internally consistent, they should be able to produce the same results even if they choose smaller time windows, such as 7 days and 14 days. The “pull forward effect” suggests that the background rate test will fail if they do this (but I would love to be proven wrong - is the New Zealand government willing to publish these numbers?)
Until now, I did not have a good example to demonstrate this, but now I think I do.
Let us start with this fact about VSD: it represents only 3% of the US population.
They took the number of all-cause deaths inside the 30-day window, calculated the rate to be 11.7 deaths per 100K person-years, and then compared it to the 67.6 deaths per 100K people for 2011 (for that particular age cohort).
As I mentioned, this is not an apples-to-apples comparison and is absurd to begin with.
However, let us say they figured that the number of deaths captured in VSD is consistently 20% of the total deaths across the population. In other words, they could multiply the 11.7 by 5X and still get a number like 58.5, which is less than the 67.6, and that too for a narrower age cohort.
But what if the number goes up substantially when you change the risk window?
Then the comparison with 67.6 would look quite a bit worse, wouldn’t it?
So instead of considering deaths for 30 days post-vaccination, what if we shrank the window to only 15 days post-vaccination and generated a proportional number based on the distribution in VAERS?
Here is how it looks in VAERS:
Number of deaths within 30 days = 24
Number of deaths within 15 days = 20
Another way of looking at it is that more than 80% (20/24) of the post-vaccine deaths within the 30-day window happened inside the 15-day period itself. This confirms that pushing the window out to 30 days makes the analysis more favorable for the vaccine if there is an actual temporal effect.
Now let us look at the VSD data.
So we have 13 deaths within 30 days. If the deaths followed a distribution identical to VAERS, about 13 x 20/24 = 10.8 of them would fall within the first 15 days. Over the same 1,355,535 people, a 15-day window amounts to only half as many person-years, so the rate works out to roughly 19.4 deaths per 100K person-years, about 1.7 times the 11.7 rate provided in the paper.
Now if we were to multiply it by 5X to cover the entire population, we end up at roughly 97 deaths per 100K person-years, which is well in excess of the 67.6 number provided for the comparison.
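The window arithmetic, written out; the 20/24 split comes from the VAERS counts above, and the 5X factor is only the illustrative population scaling from earlier, not a measured quantity:

```python
# Rescale the paper's rate from a 30-day to a 15-day risk window
population = 1_355_535        # 4vHPV recipients aged 9-26 in VSD (from the paper)
deaths_30d = 13               # deaths within 0-30 days (from the paper)
vaers_15d_share = 20 / 24     # share of VAERS 30-day deaths that occurred within 15 days

def rate_per_100k_py(deaths, people, window_days):
    """All-cause deaths per 100,000 person-years over a given risk window."""
    return deaths / (people * window_days / 365) * 100_000

rate_30 = rate_per_100k_py(deaths_30d, population, 30)                      # ~11.7
rate_15 = rate_per_100k_py(deaths_30d * vaers_15d_share, population, 15)    # ~19.4

# The 5X multiplier is the illustrative "VSD captures 20% of deaths" assumption
print(round(rate_30, 1), round(rate_15, 1), round(rate_15 * 5, 1))
```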
To be clear, I am not trying to claim that my analysis here proves or disproves vaccine danger. I realize that I am making a lot of assumptions and oversimplifications.
Instead, I am claiming that the existing 30-day risk window is not internally consistent, and that to make the analysis better, studies should also report a (say) 15-day risk window side by side to satisfy the critics of these studies.
Interestingly, the pull forward effect is concentrated even closer to the day of vaccination for the COVID19 vaccines than for the HPV vaccines.
The VSD study is not complete
The VSD study is also quite incomplete. This becomes clear when you compare it with the kinds of studies which have been done in other countries with smaller populations.
One positive thing that came out of the COVID19 vaccine rollout fiasco is that countries that had been taking vaccine safety for granted started producing safety studies across their whole populations to prove that they actually know what they are doing. If not for the COVID19 vaccine rollout, I am not sure the Qatar and New Zealand mortality studies would have even been conducted.
At the very least, we now have some benchmarks which we can ask the VSD studies to meet[4].
Comparing the VSD study and the Qatar study
Recently I discussed the Qatar study, which seems to be one of the best studies on COVID19 vaccine safety.
Here are some aspects of the Qatar study which are missing in the VSD study:
the Qatar study is genuinely population-wide (although obviously over a much smaller population)
the Qatar study published a detailed report on 52 out of the total of 138 deaths which happened within 30 days of vaccination
the Qatar study included the age as well as the days-to-death for each of the 52 deaths provided in the detailed report
The VSD study does not provide any of this information.
Note that the Qatar study did not do an age-wise comparison of the mortality rate with that of prior years.
Comparing the VSD study and the New Zealand study
The New Zealand mortality statistics report is a very interesting one, because it actually used the all-cause mortality for different age buckets from prior years (2008 to 2019), and did a more accurate population-wide deaths-per-person-year comparison to the old numbers.
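As a rough illustration of what that kind of comparison involves, here is a toy sketch; the numbers and column names below are entirely made up for illustration and are not the New Zealand data:

```python
import pandas as pd

# Toy example only: age-bucketed all-cause mortality, prior-year baseline vs rollout period
records = pd.DataFrame({
    "age_bucket":   ["20-29", "20-29", "60-69", "60-69"],
    "period":       ["2008-2019 baseline", "rollout", "2008-2019 baseline", "rollout"],
    "deaths":       [4_200, 380, 61_000, 5_600],              # hypothetical counts
    "person_years": [7_200_000, 640_000, 5_500_000, 480_000], # hypothetical person-time
})

records["rate_per_100k_py"] = records["deaths"] / records["person_years"] * 100_000

# Compare each age bucket's rollout-period rate against its own prior-year baseline
print(records.pivot(index="age_bucket", columns="period", values="rate_per_100k_py").round(1))
```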
Here are some aspects of the New Zealand study which are missing in the VSD study:
population-wide all-cause mortality calculation
age group breakdown of all the deaths
However, the NZ study does not provide days-to-death or days-to-symptom-onset breakdowns of the deaths that happened within 21 days of vaccination.
Summary
In a previous article, I pointed out that aggregate-mortality-based vaccine safety assessments have far too many confounders, and that the pro-vaccine side will use them to argue for vaccine safety because they know these confounders make it almost impossible to come to any conclusions.
I can state quite confidently that arguing on this ground is a losing battle: there are too many confounders, and the debate opponents know that piling on confounders is the easiest way to avoid engaging with the arguments.
In this article, I have taken an example using CDC’s own VSD data and explained why these confounders make it hard to do any conclusive analysis.
Ideally, what we want is a combination of
a) VSD’s comprehensive medical and vaccination history, but across the entire population
b) Qatar’s breakdown of individual death reports to include days to death (and if possible also days to symptom onset)
c) New Zealand’s comparison based on pre-vaccine-rollout age-wise mortality rates
As a result, I don’t think the VSD study can be considered the final word on vaccine safety.
Footnotes
[1] Who were registered in the VSD system.
[2] In fact, if VSD captures only 3% of the US population, does it mean the actual numbers would be 30X as large? I doubt it, but if that is true, then the comparison would look very different.
[3] This is another reason why an open source vaccine injury database is superior to a closed one. We don’t have to just take the word of the authors; instead, we can cross-verify their assumptions.
[4] I doubt they will do it, or that they even care. But it does not matter. We can just keep pointing to the gap between the best studies coming out of the CDC and the best studies done by other countries to make our point about vaccine safety. Remember that the Qatar study actually concluded that the death rate for the COVID19 vaccine is 1-in-100K, which is unacceptable for regular vaccines.