The “it’s banned in Europe” fallacy

The title of this post is written somewhat in jest because this is not a formally recognized fallacy; nevertheless, it is a very common line of reasoning that is logically flawed and very closely aligned with multiple fallacies. The argument, in a nutshell, asserts that something is dangerous or likely dangerous simply because it is banned in some (usually European) countries. It often takes forms such as, “if X is so safe, then why is it banned in so many countries in Europe? All of those countries can’t be wrong.” Most commonly, I see this flawed reasoning applied to GMOs, certain pesticides (e.g., glyphosate), and some pharmaceuticals, but the flaws are inherent to the argument structure, regardless of the topic.

Recently (and why I’m writing this now), I have been seeing it applied to the AstraZeneca COVID-19 vaccine, with the claim that the vaccine was banned in Europe due to concerns about blood clots; therefore, according to the argument, the vaccine must be dangerous. Before I go any further and talk about the general problems with this line of reasoning, let me be 100% clear that the vaccine was not “banned.” Rather, it was only temporarily halted while scientists investigated the reports of blood clots. Those investigations found that the rates of blood clots following vaccination were below or only slightly above background rates (Østergaard et al. 2021; Pottegård et al. 2021), and the European Medicines Agency (EMA 2021) concluded that while there are potential side effects that are being monitored (as is true of all medicines), any serious side effects are extremely rare, and the benefits outweigh the risks. As a result, use of the vaccine has resumed in almost all countries that had temporarily halted it.

Moving on, let’s look at the more general problems with this line of reasoning. First, it is inherently an appeal to authority fallacy. It asserts that something must be dangerous simply because of the people (governments) who say that it is dangerous. This is problematic for multiple reasons. For one thing, government decisions are often influenced by political motivations and lobbyists, and in many cases, the people making decisions are politicians, not scientists. Indeed, this argument is very similar to the flawed tactic of citing court cases as scientific evidence, as if judges, lawyers, and jurors are somehow scientific experts who are qualified to evaluate scientific evidence and make scientific decisions. Non-scientists (and sometimes scientists) make bad calls about science all the time. Do I really need to explain that governments often get the science wrong? Isn’t that something that everyone intuitively knows? After all, Saudi Arabia bans the teaching of evolution in public schools; does that mean evolution is wrong? Obviously not. We have to look at facts and evidence, not just whether or not a government has banned something.

Nevertheless, you may be wondering about the subset of cases where the ban actually was the result of a scientific assessment by an agency such as the FDA, CDC, EMA, etc. Shouldn’t we listen to those organizations? That is a totally fair question, and in that subset of cases, the argument should be treated more seriously. However, it still ultimately has to be about the evidence. Even scientists at government agencies can make bad calls. So, ideally, we should base our views on the evidence, not the source of the evidence. Having said that, for most people who don’t have the time or training to wade through a mountain of scientific literature, it does often make sense to default to the relevant scientific agencies. I would not, as a general rule, encourage you to challenge professional scientific organizations unless you have a considerable amount of scientific training and expertise yourself.

So, what do we do when a group like the EMA recommends halting a vaccine? There are two important considerations here. First, we need to look at precisely what they said, and why they said it, because usually, the situation is far more nuanced than simply, “they banned it.” The AstraZeneca situation illustrates this wonderfully. The vaccine was not initially “banned.” Rather, it was temporarily halted while the evidence was reviewed. This was done out of an abundance of caution, and once the possible harms had been evaluated, it was resumed in nearly all countries (as a side note, this actually shows just how seriously vaccine safety is taken; if anti-vaccers were right that governments/Big Pharma push vaccines with little regard for safety, then such halts wouldn’t take place).

The second consideration is the extent of the consensus on the topic. Was the ban recommended by a handful of organizations or was it recommended by the vast majority of scientific organizations? This is where we get to another one of the underlying flaws in the argument. Namely, it has a tendency to cherry pick experts. What I see happen over and over again is someone insisting that something must be dangerous because some countries banned it while totally ignoring the fact that just as many (often more) countries have approved it! In other words, if you can appeal to authority and claim that something is dangerous because some countries ban it, then it is equally logically valid* for me to claim that it is safe because some countries approve it (*both lines of reasoning are invalid, my point is simply that they are equivalent).

See how that works? If we are going to go down this road of blindly trusting authority, why should we blindly assume that the countries that ban something are right rather than blindly assuming that the countries that approve it are right, especially when (in a great many cases) the countries that approve it are more numerous than the countries that ban it?

Indeed, the AstraZeneca situation illustrates this well. While a handful of countries continue not to use it, most have resumed using it, and the EMA now recommends it. So why should we cherry pick the handful that don’t use it and hold them up as proof while totally ignoring the ones that do use it? What makes the countries that don’t use it right and the far more numerous countries that do use it wrong?

The answer is simple: there is no logical reason to blindly assume that something is dangerous just because a country banned it. We need to look at the actual evidence and the reasoning behind why it was banned. We shouldn’t determine whether something is safe by flipping a coin between lists of countries that do and do not use it, and we certainly shouldn’t decide by cherry picking the list that fits our biases and preconceptions. Rather, we need to look at the actual evidence and the logic of why it is banned or approved.

Literature cited

  • EMA 2021. COVID-19 Vaccine AstraZeneca: benefits still outweigh the risks despite possible link to rare blood clots with low blood platelets. 18-March-21.
  • Pottegård et al. 2021. Arterial events, venous thromboembolism, thrombocytopenia, and bleeding after vaccination with Oxford-AstraZeneca ChAdOx1-S in Denmark and Norway: population based cohort study. BMJ 373:n1114
  • Østergaard et al. 2021. Thromboembolism and the Oxford–AstraZeneca COVID-19 vaccine: side-effect or coincidence? The Lancet 397: 1441–1443.

First posted on 14-June-2021

Posted in Rules of Logic, Vaccines/Alternative Medicine

The problems with anti-vaccers’ precautionary principle arguments

Invoking the precautionary principle is a favorite tactic of anti-vaccers, anti-GMO activists, and various other groups that are prone to opposing scientific advances, but there are numerous issues with this strategy. The exact definition of the precautionary principle is a bit amorphous and variable, but the general concept is that before taking an action that has potential risks (particularly with the application of something new), the burden of proof should be on the proponent to demonstrate the safety of that action before taking it. There is certainly an element of validity to this. I absolutely agree, for example, that new medications should be tested and shown to be safe before being made publicly available (indeed, that is part of why we have organizations like the FDA that regulate the approval of new drugs). Similarly, months ago when COVID vaccines were in early stages of testing, I and many other ardent pro-vaccers stated that while the vaccines were promising, we wanted to see the results of the tests before drawing any conclusions. That is a completely rational, evidence-based way to approach the situation. New drugs, vaccines, and technologies certainly have the potential to cause harm. So, it is not unreasonable to want safety testing prior to their use.

Where this becomes problematic, however, is in determining what constitutes reasonable evidence. Groups like anti-vaccers have a tendency to stretch the precautionary principle to unreasonable limits and use it as an excuse for science-denial, rather than a legitimate decision-making tool. Usually, what I see is an incorrect insistence that the precautionary principle demands absolute assurance of safety and that it should be used in a strictly binary decision tree where any possible risk means that the thing in question should not be used, regardless of the known benefits. This is very bad risk assessment and ignores important aspects of how science actually works. As I’ll explain, science is about probabilities, not absolutes, and you must consider both the risk and benefit of an action.

Note for clarity: Just to be sure that I have been clear, I am not attacking the general concept of the precautionary principle. Rather, I am taking issue with the way that many people apply it (and you could argue that they are not actually applying it at all, but are instead using something else entirely that they incorrectly call the precautionary principle; indeed, many definitions of the precautionary principle explicitly state that it only applies in the absence of scientific evidence).

Science is about probabilities, not proof

This is a really important concept that I have written about frequently. Science is an inherently probabilistic endeavor. It shows us what is most likely true given the current evidence, not what absolutely is true. It always leaves open the possibility that the current results are wrong or some other piece of evidence has been missed. Indeed, the statistics we use to determine things like drug safety and efficacy are inherently probabilities. When we say that a result is statistically significant, what we really mean is that if there is actually no effect (i.e., all results are from chance), there is a low probability that a result as great or greater than the one we observed could arise (i.e., if there is no effect of the thing being tested and we did the experiment again, we’d be unlikely to get such a strong result). Probabilities are inherent to modern science.
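
To make this concrete, here is a minimal simulation sketch of what a p-value actually is: the probability, assuming there is truly no effect, of observing a group difference at least as large as the one we saw. The numbers here are purely illustrative placeholders, not data from any real trial:

```python
import random

random.seed(42)

def simulated_p_value(observed_diff, n_per_group, n_sims=10_000):
    """Estimate the probability of a group difference at least as large
    as observed_diff arising when there is truly no effect (the null)."""
    extreme = 0
    for _ in range(n_sims):
        # Under the null hypothesis, both groups draw from the same distribution.
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        diff = abs(sum(a) / n_per_group - sum(b) / n_per_group)
        if diff >= observed_diff:
            extreme += 1
    return extreme / n_sims

# A large observed difference is unlikely under the null (small p),
# but "unlikely" is a probability, never an absolute proof.
p = simulated_p_value(observed_diff=0.5, n_per_group=50)
print(p)
```

Note that even a strongly “significant” result still has a nonzero probability of arising by chance, which is exactly why science speaks in probabilities rather than proof.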

Ultimately, this is a good thing. It forces skepticism and avoids dogmatism. Unfortunately, opponents of science seize that inherent and entirely justified skepticism and erroneously conflate it with practical doubt. The fact that we can’t be 100% certain about a result does not inherently mean that we should have any practical doubt about it. Am I 100% certain that smoking causes cancer? No, but the topic has been so well-studied and the results are so consistent that I’m 99.999% sure, and for all practical purposes, there is nothing wrong with making a statement like, “smoking causes cancer.” Similarly, it is technically possible that scientists are wrong about gravity, but, to borrow from Tim Minchin, you would be pretty foolish to act as if there is any practical doubt about gravity when deciding whether to exit your apartment through your door or a second floor window.

Even so, for many vaccines, we have an extremely high level of certainty that they are safe and effective. Take the notion that vaccines cause autism, for example. As I’ve written about at length, this hypothesis has been extremely well-studied. Multiple studies with tens of thousands of participants have been conducted (e.g., Hviid et al. 2019 [657,461 children]; Madsen et al. 2002 [440,654 children]; Anders et al. 2004 [109,863 children]; and Jain et al. 2015 [95,727 children]), including a meta-analysis with over 1.2 million children (Taylor et al. 2014), and no large, properly controlled study has found any evidence of vaccines causing autism. Nevertheless, I frequently encounter anti-vaccers who try to ignore that evidence by inappropriately invoking the precautionary principle and asserting that, “since we can’t be 100% sure that vaccines don’t cause autism, we should err on the side of caution and act as if they do cause autism.” That is extremely faulty logic and is nothing more than science denial dressed up as a cogent decision-making principle.

The other related issue is the never-ending string of possible mechanisms of harm. Anti-vaccers frequently concoct an ever-shifting litany of things that scientists need to test before they will accept that vaccines are safe, and they often do this under the guise of simply adhering to the precautionary principle. Sticking with autism for a minute, for a long time, mercury was the main anti-vax boogeyman (and it still is in some circles), and anti-vaccers insisted that the burden of proof was on pro-vaccers to show that the mercury wasn’t causing autism (fundamentally a precautionary principle argument; see note on the burden of proof). So, scientists did lots of tests and even removed thimerosal (the form of mercury in vaccines) from nearly all childhood vaccines. The results of those studies consistently showed that thimerosal doesn’t cause autism (Hviid et al. 2003; Verstraeten et al. 2003; Taylor et al. 2014), and removing it did not reduce autism rates.

So the burden of proof has been met and the precautionary principle satisfied, right? Not according to anti-vaccers. According to them, maybe it’s actually the age at vaccination, and the precautionary principle says that we need to demonstrate that it isn’t age at vaccination before we consider vaccines to be safe. Then, when studies show that it isn’t the age at vaccination (Uno et al. 2015; DeStefano et al. 2004; Smeeth et al. 2004; Madsen et al. 2002), they switch to number of antigens, and when studies discredit that (DeStefano et al. 2013), they switch to it being the number of doses, and when studies discredit that (Fombonne et al. 2006; Hviid et al. 2003), they switch to aluminum or countless other fantasies.

It’s like fighting the hydra. No matter how many arguments you test and defeat, more crop up to take their place. This is the problem with the practical application of the precautionary principle. There will always be other possibilities. No matter how many things we test, there will always be things that haven’t been tested. This is why the burden of proof is usually on the person making the claim, and by switching the burden of proof, the precautionary principle opens a can of unending worms (see note on burden of proof).

Again, to be clear, I agree with a reasonable level of testing before something like a vaccine goes to market, and even after it goes to market, if reasonable evidence arises that it is causing a problem, I agree that the evidence should be investigated and proper trials should be done. If there is a legitimate, science-based reason to suspect that a risk might exist, it should be investigated. That is a totally reasonable application of the precautionary principle. The problem is that many people try to use it unreasonably and insist that all possibilities must be tested, even if they can’t present any good evidence to show that a danger is likely. Actually testing all possibilities is, however, impossible. Thus, anti-vaccers can be immune to studies, because no matter how many we conduct, there will inevitably still be things we haven’t tested.

A similar issue arises with an insistence on “long-term” studies. As I’ve argued before, “long-term” is meaningless unless it’s carefully defined beforehand. We have studies that followed patients for multiple years (Iqbal et al. 2013; Ferris et al. 2014; Vincenzo et al. 2014), which would fit the definition of “long-term” for most scientists, but this never seems to satisfy anti-vaccers. No matter how long the study is, they will always retort that the negative effects might come at some later age. This flawed reasoning is fundamentally just an argument from ignorance fallacy. It is saying that we don’t know for sure that there aren’t problems 35 years later, therefore we should act as if there are. That is bad logic unless we have some compelling reason to think that there would be negative effects 35 years later, which we don’t. We do, however, have plenty of evidence that there are enormous benefits to vaccines and the risk from not vaccinating is much higher than the risk from vaccinating. This brings me to the next topic: risk assessment.

Bad risk assessment

Good risk assessment needs to consider both the risks and the benefits within a probabilistic framework. All actions carry some level of risk, and people often ignore the fact that inaction can be riskier than action.

So, when it comes to topics like vaccines, we need to consider both the risks and the benefits. Vaccines do have side effects, but serious ones are very rare, and we know that the benefits far outweigh those risks because numerous studies have shown that vaccines are extremely effective at saving lives (Clemens et al. 1988; Adegbola et al. 2005; Richardson et al. 2010). Indeed, the WHO estimates that from 2000 to 2018, the measles vaccine alone prevented 23.2 million deaths! That benefit absolutely has to be considered when evaluating the risk of vaccines.
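
The arithmetic behind this kind of risk assessment is simple expected-value reasoning. As a sketch, with completely hypothetical rates chosen only for illustration (these are not the actual rates for any real vaccine or disease):

```python
# Back-of-envelope expected-harm comparison with purely illustrative numbers.
population = 1_000_000

# Vaccinating: assume a hypothetical rare serious side effect.
p_serious_side_effect = 1e-6
harm_if_vaccinated = population * p_serious_side_effect

# Not vaccinating: assume an outbreak infects 10% of people and the
# disease seriously harms 1% of those infected (hypothetical rates).
p_infected = 0.10
p_serious_if_infected = 0.01
harm_if_unvaccinated = population * p_infected * p_serious_if_infected

print(harm_if_vaccinated)    # 1.0 expected serious event
print(harm_if_unvaccinated)  # 1000.0 expected serious events
```

The point of the sketch is simply that both sides of the ledger must be computed; looking only at the first number while ignoring the second is exactly the mistake described above.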

This is another place where anti-vaccers’ misuse of the precautionary principle becomes problematic. They focus on hypothetical potential risks that have somehow eluded all previous studies and argue that we should “err on the side of caution” before potentially injuring children with vaccines. That is horrible risk assessment because it totally ignores the massive, well-established risk from not vaccinating. In other words, the cautious approach is actually to use vaccines that have passed reasonable safety testing, because the known risk from not vaccinating is so high.

Is it possible that there is some unknown danger from vaccines that we have missed? Yes. As explained above, it is technically possible, but it is extremely unlikely. Meanwhile, we know that there are massive dangers from not vaccinating. Studies have repeatedly shown that vaccines save countless lives. Therefore, it is absurd to knowingly sacrifice those millions of lives out of a fear of some unknown danger which probably doesn’t even exist! That’s not erring on the side of caution, and it is not a proper application of the precautionary principle.

Similarly, when it comes to the issue of long-term studies, is it technically possible that there is some long-term effect that we haven’t found yet? Yes, but it is very unlikely. Meanwhile, we know that vaccines prevent diseases which are often fatal for children and can have long-term consequences for survivors. Here again, anti-vaccers are asking us to put more weight on an unknown and unlikely risk than on a very real and well-known risk. Further, even beyond the known risks, it is very possible that measles and other childhood diseases have additional long-term complications that we are unaware of. Indeed, based on our understanding of physiology and diseases, it is more likely that vaccine-preventable diseases have additional unknown long-term consequences than it is that vaccines do. Measles, for example, negatively affects children’s immune systems for years (Petrova et al. 2019; Mina et al. 2019; more details and sources here and here), thus opening the door for all manner of secondary infections and long-term complications. So if we are going to play this game of fearing the unknown, why should vaccines be the unknown that we fear rather than the diseases?

The answer given to that question often involves some variant of the appeal to nature fallacy, and asserts that we shouldn’t play God, or can’t improve on nature, or humanity survived for millennia with these diseases, etc. These responses are all obviously flawed for numerous reasons that I have elaborated on elsewhere, so I won’t waste any more time here (see posts here, here, here, and elsewhere).

What about COVID vaccines?

Finally, let’s apply all of this to the new COVID vaccines, because I have recently seen countless people using some variant of the precautionary principle to argue against receiving the COVID vaccine.

First, it is true that, by the very nature of being new vaccines, the COVID vaccines have not yet received as much testing as most vaccines currently on the market*, but they have received just as much or more testing as those vaccines had when they first entered the market, and the amount of testing they have received is very good. The Pfizer trials used over 43,000 participants (Polack et al. 2020), Moderna used over 30,000 (Mahase 2020), etc. Those are very large trials with good power to detect adverse effects, and they found that the vaccines were very effective (generally with effectiveness in the high 80s or 90s), with few serious complications (comparable to existing vaccines). This is very good evidence that the vaccines are safe and effective, and it meets any reasonable application of the precautionary principle. The problem is precisely that anti-vaccers’ application is unreasonable, so instead of accepting the results, they are spreading baseless fear about possible unknown dangers and future long-term consequences.

*Update 1-Sept-2021: This is no longer correct. The COVID vaccines are now extremely well-studied. See this post for details.

Again, I agree that those trials were necessary before the vaccines went to market. I don’t blindly support vaccines. Rather, I base my views on the evidence, and the evidence shows that these vaccines are safe and will save countless lives. Is it possible that scientists are wrong about these vaccines? Yes, but based on all the available evidence, it’s not likely. Nevertheless, many insist that because it is possible that we have missed something, we shouldn’t vaccinate and should instead apply “caution.” As I’ve been trying to explain throughout this post, that sets up a false dichotomy, because not vaccinating has substantially more risks than vaccinating.

Let’s just compare the two for a minute. The vaccines have passed large, well-conducted trials. Further, they are based on technologies that have been researched for many years, and many of their components are the same as other vaccines that have passed numerous, repeated tests. All of this gives us very good reason to think that they are safe, and the probability that they are actually dangerous and we just haven’t found out yet is extremely low. In contrast, we know that COVID is highly contagious and highly deadly. Total global deaths are over 2.5 million, and in the USA, COVID is currently one of the leading causes of death (it spiked all the way to the number 1 spot during the large outbreak at the beginning of 2021). So, the known risk from not taking the vaccine is extremely high (see notes at the end before responding with the inane “it *only* kills 1 in 100” or “it only kills the elderly” arguments).

Further, if we want to go down this road of fearing unknown long-term complications, unlike the vaccine, there is very good reason to think that COVID will cause long-term problems. There is growing evidence that many patients have complications long after being infected (though the disease is recent enough that the data are still being collected and a clear picture hasn’t emerged yet), and given the damage that COVID is known to cause to the heart, lungs, and other organs, and our knowledge of other diseases, it is very reasonable to think that there might be long-term problems (Mitrani et al. 2020; Fraser 2020).

So, on the one hand, we have few known risks from the vaccine and no good reason to suspect unknown long-term complications, and on the other, we have an extremely high known risk from COVID, as well as good reason to suspect that there might be long-term damage.

Thus, proper application of risk assessment absolutely does not support avoiding the vaccine. Reasonable concerns have already been tested (thus appropriate precautions were taken), the known risks of COVID are substantially higher than the known risks of the vaccine, and there is far more reason to suspect unknown future complications from COVID than from the vaccine.

If your concern is really that there might be currently unknown long-term damage, then really think about which of the following is more likely to cause such damage: a vaccine that has been well-tested and simply stimulates your immune system and prepares it to fight a single pathogen, or a deadly virus that sets off a cytokine storm and is known to cause serious damage to your heart, lungs, and other organs. Which one actually seems riskier to you?

Part of the problem here is that we often perceive a decision not to take action as the safe option or “erring on the side of caution,” but that’s not always true. Not taking action still has risks, and taking action is not automatically “erring on the side of caution.” The precautionary principle no longer applies to approved vaccines because they have already passed testing. At this point, it is a simple matter of risk assessment, and the risks from not vaccinating are far higher than the risks from vaccinating.

See this post for details on why vaccines are highly unlikely to cause long-term side effects.

Note on the burden of proof: The burden of proof always lies with the person making the claim. In other words, the person saying something exists has to provide evidence that it exists, and the other person does not have to discredit its existence (e.g., the burden of proof is on someone claiming that bigfoot is real, not someone claiming it isn’t). The precautionary principle inherently flips this by saying that we need to provide evidence that a danger doesn’t exist, rather than the burden being on the person claiming that it does exist. As explained earlier, given the very real possibility of injury from something like a new medicine, some basic safety testing is rational prior to approving the drug. However, once reasonable testing has been done, the burden of proof then falls to anyone who claims that there is a risk that those tests missed. In other words, if you want to say that those tests were wrong, then the burden of proof is on you. This is fundamentally why anti-vaccine arguments fail. They try to shift the burden of proof rather than presenting actual evidence.

Note on the “it only kills 1 in 100” argument: I frequently hear people make the argument that we don’t need the vaccine because most people survive COVID. This is a very bad argument for numerous reasons, which I will only briefly outline (see detailed explanation here). First, 1 out of every 100 infected individuals is actually a very high death rate. Second, you also have to consider how infectious the disease is. Even if a disease only kills a minority of infected individuals, that can still result in millions of deaths if lots of people become infected. This absolutely is the case with COVID. Again, it is currently one of the leading causes of death in the USA. Third, this totally ignores all the non-lethal effects. Fourth, this totally ignores the massive economic harm being caused by the virus. Fifth, this totally ignores the fact that the odds of a currently unknown future side effect from COVID are still much higher than the odds of a currently unknown future side effect from the vaccine.
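
The second point above is just multiplication: a “low” fatality rate produces an enormous death toll when a disease is highly contagious. As a quick illustration (the infection count below is a hypothetical round number, not a measured figure):

```python
# Even a "low" fatality rate yields huge death tolls at scale.
fatality_rate = 0.01          # the "it only kills 1 in 100" figure
infections = 100_000_000      # hypothetical infection count for illustration
deaths = fatality_rate * infections
print(f"{deaths:,.0f}")       # prints 1,000,000
```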

Note on the “it only kills the elderly” argument: First, no it doesn’t. Yes, mortality rates are much higher for the elderly, but they still exist for all other age groups. In contrast, the mortality rate from the vaccine is 0 for all age groups. So the vaccine reduces risk for all age groups. Additionally, if we want to protect the elderly, the rest of us need to vaccinate to reduce the spread of the disease (vaccines are often less effective for the elderly, not to mention that many vulnerable people simply cannot receive the vaccine). Finally, this argument suffers most of the same flaws as the “it *only* kills 1 in 100” argument, so see that note as well.

Literature cited

(note: if these hyperlinks break and/or you have trouble accessing articles for free, see this post for suggestions about how to access them)

  • Adegbola et al. 2005. Elimination of Haemophilus influenzae type b (Hib) disease from The Gambia after the introduction of routine immunisation with a Hib conjugate vaccine: a prospective study. The Lancet 366:144–150
  • Anders et al. 2004. Thimerosal exposure in infants and developmental disorders: a retrospective cohort study in the United Kingdom does not support a causal association. Pediatrics 114:584–591
  • Clemens et al. 1988. Measles vaccination and childhood mortality in rural Bangladesh. American Journal of Epidemiology 128:1330–1339
  • DeStefano et al. 2013. Increasing exposure to antibody-stimulating proteins and polysaccharides in vaccines is not associated with risk of autism. J Ped 163:561–567
  • Ferris et al. 2014. Long-term study of quadrivalent human papillomavirus vaccine. Pediatrics 134:e657–e665.
  • Fombonne et al. 2006. Pervasive Developmental Disorders in Montreal, Quebec, Canada: Prevalence and Links With Immunizations. Pediatrics 118
  • Fraser 2020. Long term respiratory complications of covid-19. BMJ 370
  • Hviid et al. 2003. Association between thimerosal-containing vaccine and autism. JAMA 290:1763–1766.
  • Hviid et al. 2019. Measles, mumps, rubella vaccination and autism: A nationwide cohort study. Annals of Internal Medicine.
  • Iqbal et al. 2013. Number of antigens in early childhood vaccines and neuropsychological outcomes at age 7–10 years. Pharmacoepidemiology and Drug Safety 22:1263–1270.
  • Jain et al. 2015. Autism occurrence by MMR vaccine status among US children with older siblings with and without autism. JAMA 313:1534–1540
  • Madsen et al. 2002. A population-based study of measles, mumps, and rubella vaccination and autism. New England Journal of Medicine 347:1477–1482
  • Mahase 2020. Covid-19: Moderna vaccine is nearly 95% effective, trial involving high risk and elderly people shows. BMJ 371
  • Mina et al. 2019. Measles virus infection diminishes preexisting antibodies that offer protection from other pathogens. Science 366:599–606
  • Mitrani et al. 2020. COVID-19 cardiac injury: Implications for long-term surveillance and outcomes in survivors. Heart Rhythm 17:1984–1990
  • Petrova et al. 2019. Incomplete genetic reconstitution of B cell pools contributes to prolonged immunosuppression after measles. Science Immunology 4: eaay6125
  • Polack et al. 2020. Safety and efficacy of the BNT162b2 mRNA Covid-19 Vaccine. New England Journal of Medicine 383:2603–2615.
  • Richardson et al. 2010. Effect of rotavirus vaccination on death from childhood diarrhea in Mexico. New England Journal of Medicine 362:299–305
  • Smeeth et al. 2004. MMR vaccination and pervasive developmental disorders: a case-control study. Lancet 364:963–969
  • Taylor et al. 2014. Vaccines are not associated with autism: an evidence-based meta-analysis of case-control and cohort studies. Vaccine 32:3623–3629
  • Uno et al. 2015. Early exposure to the combined measles-mumps-rubella vaccine and thimerosal-containing vaccines and risk of autism spectrum disorder. Vaccine 33:2511–2516
  • Verstraeten et al. 2003. Safety of Thimerosal-Containing Vaccines: A two-phased study of computerized health maintenance organization databases. Pediatrics 112:1039–1048
  • Vincenzo et al. 2014. Long-term efficacy and safety of human papillomavirus vaccination. International Journal of Women’s Health 6:999–1010.
Posted in Uncategorized | Tagged , , , , | 6 Comments

Facts: Why they matter and how to check them

We seem to be living in the golden age of misinformation, and while cognitive biases and motivated reasoning have always existed, they seem to have reached epidemic levels in recent years, with attacks on the very nature of facts coming from the highest offices in the land (the phrases “alternative facts” and “fake news” come to mind). Many people seem to be living in an alternative reality where everything that agrees with them is true, and anything that disagrees with them is biased, fake news or “just an opinion.” Indeed, I have frequently encountered an open disdain for facts and those who check them. For many people, fact checking seems to boil down to, “if it came from a source I like, then it is true, and if it came from a source I don’t like, then it is false,” but that is an extremely faulty and dangerous dichotomy. So, it feels like high time that we have a basic discussion about facts and how to evaluate them.


Now, at this point, it may seem like I am picking on America’s conservative right, and there is an extent to which that is true as they currently embody this problem in a truly extraordinary fashion, but let me be 100% clear that this is not a conservative problem. Liberals do it too. Everyone does it. We are all prone to cognitive traps and biases, myself included. The key is to be aware of those traps and take active steps to correct them. Scepticism is a skill that has to be practiced and often requires training. It takes work to fact check claims before accepting or rejecting them, but with practice, it becomes habitual, and it’s a habit everyone should get into.

My point is that if, while reading this, you find yourself thinking, “yes, this other group/person really needs to read this,” pause for a second and consider whether you are yourself falling into cognitive traps and biases, because this post applies to everyone, regardless of political affiliation. In other words, it applies to you too.

Note: Although I am going to talk broadly about facts (not just scientific facts) including some political facts and news sources, this is not a political post. Facts are not political. They can certainly be used to make political arguments, but the facts themselves are inherently apolitical, and if it seems like facts have a particular political bias, that is reflective of faults in the political position, not the facts themselves.

Facts vs opinions

This may seem extremely elementary and pedestrian, but recently, I have had numerous conversations with grown adults who seem truly baffled by the distinction between a fact and an opinion. So, let’s go through this. A fact is an objective statement of reality. It can be independently verified and does not change based on individual views, beliefs, and preferences. An opinion is a subjective perception of reality that does change based on individual views, beliefs, and preferences.

As an example, if I say, “a water molecule is made of one atom of oxygen and two atoms of hydrogen,” then I have stated a fact. It is an objective statement of reality that does not change based on individual beliefs or views. In contrast, if I say, “swimming is fun,” I have stated an opinion. It is a subjective perception that others may not share.

For these basic examples, everyone seems to understand the concepts just fine, yet when it comes to political topics and “controversial” science topics, suddenly people seem to completely forget this distinction and act as though their personal disagreement means that something isn’t a fact. This is clearly faulty. As the old saying goes, “you are entitled to your own opinions, but not your own facts.”

The number of times that I have seen this play out in recent months is truly astounding. Over and over again, I have had conversations where I showed someone verifiable facts that are confirmed by numerous sources, only to have them say, “that’s just your opinion.”

Let me give you an example. In a recent discussion with a family member, he made the claim that “Biden has never condemned violent rioting and looting.” I responded by showing him statements like this one, which Biden made way back in July: “I’ve said from the outset of the recent protests that there is no place for violence or the destruction of property.” My relative’s response was, “well you’re entitled to your opinion.”

Do you see the problem here? Whether or not Biden has issued statements against violent protests is a matter of fact, not opinion. It is a fact that he has done so, and that fact is not political. It is an objective, independently verifiable statement of reality. You can watch the videos and read the transcripts for yourself. Whether or not you like Biden and agree with Biden has no bearing on the fact that he has made these statements. Now, whether you think Biden’s statements are appropriate and how you feel about his position on violence is certainly a matter of opinion. What you do with the fact that Biden has condemned violence is political and is up to your subjective perceptions and beliefs, but whether or not he has made statements condemning violence is a simple matter of fact.

This sort of flawed thinking also pervades science denialism. I frequently encounter people who describe evolution, the safety of vaccines, anthropogenic climate change, etc. as “opinions,” but that is just as flawed as claiming that Biden has never condemned violence. When thousands of studies have all confirmed that something is a fact (as is the case on all of the topics I just mentioned) you don’t get to have an opinion about whether or not that thing is a fact. You can blindly deny the fact (as many do), but it’s still a fact; you are just willfully ignorant of it. If I claim, for example, that water has four hydrogens per oxygen, that doesn’t suddenly make the composition of water an opinion that is subject to debate. Rather, I would simply be living in blind denial of the fact. My personal views do not change the nature of the fact.

Follow these links for my articles (which contain citations to the relevant literature) on evolution, vaccines, and climate change.

Facts matter

Interestingly, this is actually an opinion. I personally believe that facts matter, and that belief is based on my views that human life and well-being are worth protecting, the environment is worth protecting, and knowledge is worth pursuing. If you don’t share those views, then you may not value facts (at least not the categories of facts that I’m talking about in this post), but for most people, I think you would agree, at least in concept, that facts matter.

For example, I think that facts on COVID matter, because I value human life and think it is worth protecting. So, when a politician lies and makes claims that are not supported by evidence, I think that matters, because those claims affect people’s behavior, and false or misleading claims cost lives. When a politician lies and says that COVID is going away and we’re rounding the corner when in reality cases are spiking, that matters, because people will behave differently based on whether it is going away or spiking, and that behavior affects who lives and dies. Similarly, if a politician downplays the seriousness of the disease, that matters, because it risks people’s lives. Now, if you don’t share my opinion that human life is valuable, then you likely don’t share my view that facts about COVID matter, but if you agree that human life is worth protecting, then these facts should matter for you as well, and that means that you should want to make sure that your information is correct. You should be willing to fact check and evaluate your views to make sure that you are not supporting a position that endangers lives.

The same is true for things like climate change. The evidence very clearly shows that we are the primary cause of climate change and heat waves, storm intensity, and droughts are all increasing and costing lives. I think those facts matter, because they affect lives. People will live or die based on whether we accept those facts and take appropriate action. Now, again, if you don’t value human life, then you probably don’t value those facts, but if you value human life, then surely it is worth laying aside your biases and looking closely at the evidence. Surely it is worth fact checking and verifying information before you believe or reject it.

Similarly, in politics, I think facts matter. I think the veracity of a politician’s statements and reported actions matter, because that affects people’s lives. It is critical that we actually test the veracity of claims, because things like whether or not Biden has condemned violence (he did), whether or not Trump instituted a zero-tolerance policy that caused thousands of families that would not otherwise have been separated to be separated (he did), whether or not Trump told a predominantly white crowd that they have good genes and he believes in racehorse theory (he did), etc. affect people’s lives. People are impacted by the things politicians do and say. Therefore, I think that we have a duty to carefully test information before accepting or denying it. We owe it to ourselves and the other members of our society to make sure that our political views are based on facts.

How to fact check

Details of this topic vary by discipline, but I’m going to provide some general concepts that are universal, as well as some details for a few specific subject areas.

Let me start by asking this as a question. What do you do when you encounter a claim? Let’s say you are on Facebook and you see a post making a political claim, how do you test whether it is factual? Be honest here, because being honest about your methods is the first step towards improving them. Do you use a simple dichotomy like, “it came from CNN, therefore it is false” or, on the other end of the spectrum, “Trump said it; therefore, it is false?” If you go further than that, how do you go about doing it? Do you search for sources that confirm your views, or do you look for a broad range of sources? When you find those sources, what do you do with the information inside them? Do you blindly accept it based on the source, or do you dig deeper and look at where the original information came from?

Hopefully this line of inquiry has started to reveal my point. Many people don’t fact check at all, or if they do, they don’t do it correctly. Many view fact checking as an exercise to find sources that agree with them, but this is actually the opposite of how it should work.

To fact check properly, there are several things you should look for:

  1. The quality of the source
  2. Verifiable information
  3. Agreement among sources

Let’s start with the quality of the source. For science, this usually means peer-reviewed studies. That is where scientific information is published, and it should be your primary source. Failing that, well-respected secondary sources (e.g., NASA, CDC, etc.) are a good option. A YouTube video or random blog on the internet is simply not a good source and should not be trusted without verification (see #2; and yes, that includes this blog).

When it comes to something like politics, things become somewhat more fuzzy, especially because Trump has spent the past four years insisting (without evidence) that essentially any source that says anything negative about him is “fake news.” This has resulted in large swaths of the country blindly believing that nearly all major news outlets are “biased left-wing media” and a handful of highly conservative outlets (e.g., Fox and OANN) are the only sources of truth. This view is childish and immature. There are objective ways to evaluate whether or not a source is biased (more on that in a minute), and none of them include “it said something negative about Trump” as a diagnostic criterion. You can (and should) actually verify the information in sources, look at whether they gave the whole context, look at the wording they used and whether they presented information fairly, etc. The problem is that so many people have this mindset that saying something bad about Trump automatically makes a source biased or, conversely, people on the left frequently write off any source that ever says anything good about him. Again, that’s not how facts work, and it’s certainly not how fact-checking should work.

Fortunately, several non-partisan organizations have already done the heavy lifting for you. Several groups have put together media-bias trackers to document which sources are factual or inflammatory and which sources are neutral or biased. Ad Fontes’ Media Bias Chart, Media Bias/Fact Check, and AllSides are three prominent examples. Similarly, there are many excellent fact-checking websites to assist you. Here are some useful examples: PolitiFact, the Washington Post’s Fact Checker, and, of course, Snopes.

I can already hear the outrage, derision, and mockery, but hear me out before you lambast me for thinking that those “biased, liberal” sources are useful. First, if your response was ridicule, my question is, why? Can you give me actual evidence that these sources are biased, funded by Soros, etc. or, are you simply assuming they are biased because they frequently say things you don’t like? I’m betting it is the latter. You see, many people live in what I have previously termed a “circle of ignorance,” where they have decided which sources are good and bad based on their biases, and the ones they’ve decided are good constantly re-affirm their biases (that’s why they were selected), which leads to further confirmation that those are the only good sources. Any source that contradicts those sources is then assumed to be biased. This is a very dangerous situation because it inherently gives you a biased view of the world, not a factual one. So, again, what objective reason do you have for thinking these fact checkers are biased?

The second point I want to make shifts gears into my next principle for fact checking: verification. You shouldn’t blindly believe something because it was in one of the fact checkers I listed or in one of the news outlets that is rated as trustworthy and non-partisan. Rather, you should look at the evidence they presented. Good sources will give their evidence and explain their reasoning. The sites I’ve listed, at the very least, provide information on their methodologies, funding, and steps they take to minimize biases, and, in most cases, you can actually see a detailed breakdown of why a source was rated the way it was or why a claim was rated as true or false. You don’t have to blindly take their word for it. Rather, you can and must verify.

Don’t just scoff at a Snopes article, actually read it. Actually look at the evidence it presents. Don’t just dismiss the claim that a source is biased (or conversely that it is non-partisan). Actually look at the reasons it was scored that way. Look at the evidence being presented.

This applies far more generally than just using the fact checkers. For any news article or video, look at where they got their information from. Is their source a speech that you can watch for yourself? a recorded interview? an official government document? or a single totally anonymous report? How solid is the information that they are giving? Don’t just assume that the source is “fake news,” actually look at the evidence.

In the conversation with my relative that I mentioned previously, I shared several sources with him (both media sources and fact checkers) and he immediately decried them all as fake news and refused to look at them (he also mocked me for being gullible enough to use them). Had he actually opened them, he would have found videos and transcripts of Biden’s speeches. He didn’t have to blindly trust the sources. He could (and should) have looked at their evidence, then done a bit of searching to see if it was verifiable.

The same sort of thing is true in science. If a site makes a claim about science, it needs to provide citations to peer-reviewed studies to back it up. Thus, although you should not blindly trust a blog like mine, I try very hard to provide good sources to back up my claims, and my posts on scientific topics generally include lengthy literature cited sections at the end so that you can verify what I am claiming. Sites like mine (and fact checkers) are conduits to information. You should use them to help you find good information, not as endpoints. That is the correct way to use them.

Finally, (point 3) see whether multiple reputable sources are saying the same thing. If multiple fact checkers, news outlets, etc. have all reported the same thing, you have good reason to think that it is likely true. In contrast, if many reputable sources are presenting contrary evidence or the information is only in a handful of fringe sources, you should be wary.

Here again, people often misunderstand fact checking as simply looking for something that agrees with you. I once had someone derisively respond to a link to a fact checking site with the retort, “well, who’s fact-checking the fact checkers?” and someone else sarcastically told me, “I can’t wait until we have fact checkers for fact checkers, then we’ll REALLY know what is going on.” These statements misunderstand the process. If one fact checker errs, odds are that the others will catch it and call them out, which is why you should both use multiple sources and verify their information. For the most part though, you will find that these organizations are very thorough and, as a result, generally agree with each other because, again, facts are objective, whereas opinions are subjective. Similarly, if you compare the various media bias organizations, you will find general agreement in most cases despite their different approaches, staff, funders, etc. Again, it is possible to objectively assess biases by looking at the language being used, how factual the information is, etc.

To be clear, agreement among sources is not an absolute guarantee of veracity, and it certainly can be abused by doing things like looking for agreement only among sources on one extreme of the political spectrum. So, I don’t recommend using this strategy in isolation (and to use it properly you need neutral sources). Rather, all three of my points need to be used in conjunction. You should make sure you are using good, neutral sources (not just a few biased sources), you should see whether there is agreement among sources, and very importantly, you should verify the information. Don’t just look at what a source said, look at why it said it. What facts did they base that claim on? Can you verify them?

A similar situation is true in science. It is always possible to cherry-pick a handful of outlier studies on any topic. That is why you should always look at the entire body of literature, rather than latching onto the first studies that agree with you (more details here).

At this point, it may seem that I have generated an endless chain of verification, where to verify a claim in a source, you have to track it to another source, then verify the claims in that source using another source, etc. Sometimes this happens. Often, however, fewer steps are required because you quickly get to the original source (e.g., a speech, legal document, etc.). To be clear, however, fact checking is work. It takes effort to force yourself to check all claims before believing or rejecting them. It takes time to verify information and check a diverse range of sources, but it is worth it. To use my relative as an example one last time, he proudly proclaimed to me that he got almost all of his news from OANN and didn’t care what fact checkers said. This is not a position to be proud of, and it is, in fact, very dangerous. I really like the Socratic method, so let me ask this as a question. If you have a similar mindset and get all your information from a handful of sources on either side of the political spectrum, blindly believing them without doing any verification, then how would you ever know if they deceived you? How would you ever know that they have lied to you if you don’t verify their information? This sort of behavior, on either side of the political spectrum, inherently gives you a very narrow and biased view of the world, and that is dangerous for all the reasons I’ve discussed. Facts matter and they are worth checking.

In closing, I want to briefly bring up the topic of acknowledging your own ignorance. None of us know everything, and all of us can be misled. Therefore, if we are going to have a rational, evidence-based view of the world, it is critical that we are open to new information. To be clear, being open-minded does not mean believing something without evidence. Quite the opposite. Evidence must be a requirement, but we should always accept the possibility that we might be wrong and use good evidence to find out if we are wrong. We should humbly acknowledge our own limits and try to overcome them by seeking out good information and testing ideas before believing or dismissing them. We should be receptive to facts we were previously ignorant of, rather than blindly dismissing them as “fake news.”



Case-fatality rates don’t indicate how well a country contained COVID

Recently, I have been frequently seeing the argument that the USA has actually done a very good job at dealing with COVID because its case-fatality rate (i.e., the percentage of infected individuals that die from COVID) is lower than many other countries, including many European countries like Italy, the UK, etc. This claim actually presents a good opportunity to look at some aspects of data analysis, cherry-picking, and how stats can be abused and misused, so I want to take a very brief look at the claim and simply address the following two questions:

  1. Are case-fatalities a good metric for how well a country responded to the outbreak?
  2. Does the US have a particularly good case-fatality rate?

While I am focusing on those two questions because I think they are instructive, much of what I am going to describe applies to many other arguments floating around the internet regarding COVID (e.g., a faulty comparison I keep seeing of countries that did and did not use hydroxychloroquine). So, as you read this, really try to understand the reasoning behind the answers, because that will help you analyze other claims/questions you encounter.

I want to be 100% clear at the start that this is not a political post. People often make the mistake of assuming that any discussion of any topic even remotely related to politics is inherently a political discussion. That is incorrect. Facts aren’t political. The questions I am going to address are strictly factual, scientific questions. They can (and must) be answered with evidence and facts, not politics. Now, you can certainly use those answers to make political arguments about whom to vote for, policies that should be put in place, etc. but the answers themselves are not political. They are simple facts that are not affected by political views. They are about objective reality, not politics.

Are case-fatalities a good metric for how well a country responded to the outbreak?

How relevant case-fatalities are depends on exactly what is being claimed/discussed. If we want to look at how well countries did at treating infected patients, then case-fatalities are relevant (with a lot of caveats; see question 2), because they help to describe the outcomes for people who became infected. However, many people (including the POTUS) keep using case-fatalities to make a more general argument about how well countries responded to the virus, and that’s a problem.

Case-fatalities are not a valid metric for how well a country contained the virus, because they only describe what happened to people who became infected. A country with 10 cases and a country with 10,000,000 cases could have exactly the same case-fatality rate. Indeed, if a large country allowed its entire population to be infected, and a full 2% of the entire population died from the virus, it would still have a case-fatality rate that is better than the global average (for countries of comparable size). So, case-fatalities simply don’t show how well a country prevented outbreaks.
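To make the arithmetic concrete, here is a minimal sketch in Python (with invented case and death counts) of why identical case-fatality rates say nothing about containment:

```python
# Case-fatality rate (CFR) = deaths / confirmed cases.
# The counts below are invented purely for illustration.
def case_fatality_rate(deaths: int, cases: int) -> float:
    return deaths / cases

# Country A contained the virus: 10 cases, 1 death.
cfr_contained = case_fatality_rate(1, 10)
# Country B let the virus run wild: 10,000,000 cases, 1,000,000 deaths.
cfr_uncontained = case_fatality_rate(1_000_000, 10_000_000)

print(cfr_contained, cfr_uncontained)  # 0.1 0.1 — identical CFRs
```

Both countries show a 10% case-fatality rate, yet one clearly failed at containment, which is exactly why a different metric is needed for that question.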

If we want to know how well a country did at containing the virus, we need to look at metrics like the number of cases relative to population size. This shows the proportion of the population that became infected, and thus is the relevant metric for looking at how well the spread of the virus is controlled (there are still lots of caveats here, because things like population density have a big impact on spread). When we look at that, the US is the 10th worst country in the world. In other words, there are only 9 countries with more cases per capita (Qatar, Bahrain, French Guiana, Panama, Aruba, Chile, San Marino, Kuwait, and Peru). Further, many of those countries have actually had very small outbreaks, but their populations are so small that it’s a large per-capita rate. At the time of writing this, San Marino only had 735 cases, Aruba only had 2,358, and French Guiana 9,276; so those aren’t really fair comparisons. Regardless of whether you want to include those three, however, the point stands that the US has done one of the worst jobs of any country in the world at containing the virus and has done worse than the European countries people keep comparing to. That’s not a political statement, that is a simple, empirical fact (again, there are caveats that make it hard to actually precisely rank countries, but it is very clear from the data that the US is on the bad end of the distribution).
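The per-capita calculation can be sketched in a few lines of Python. The case counts are the ones quoted above; the population figures are rough approximations I am assuming purely for illustration:

```python
# Cases per 100,000 residents — the relevant metric for containment.
def cases_per_100k(cases: int, population: int) -> float:
    return cases / population * 100_000

# Approximate populations (assumed for illustration): San Marino ~34,000,
# Aruba ~107,000, French Guiana ~300,000.
print(round(cases_per_100k(735, 34_000)))      # San Marino: ~2162
print(round(cases_per_100k(2_358, 107_000)))   # Aruba: ~2204
print(round(cases_per_100k(9_276, 300_000)))   # French Guiana: ~3092
```

Despite tiny absolute outbreaks, these micro-states end up with enormous per-capita rates, which is why raw per-capita rankings need those caveats.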

Does the US have a particularly good case-fatality rate?

Let’s now turn our attention to the question of whether or not the US has a good case-fatality rate, and this is going to bring in several important points about data analysis.

First, for all of the comparisons I’m going to talk about, I’ve limited the data to countries that have had at least 10,000 cases of COVID-19 (I decided to do that before I ran any analyses). The reason for this is that percentages can be very unreliable when dealing with small sample sizes. As a result, including countries with few infections generates a lot of what we call “noise” in the data and it makes it hard to see the patterns that are really there, because those patterns are obscured by chance variation in small sample sizes. Also, only using countries that had fairly large outbreaks allows us to compare apples to apples and reduce some of the confounding factors (more on that in a minute).
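The small-sample problem is easy to demonstrate with a quick simulation. This is a sketch assuming a fixed “true” fatality rate of 2%, not a model of any real country:

```python
import random

random.seed(42)  # fixed seed so the demonstration is reproducible
TRUE_CFR = 0.02  # assumed underlying fatality rate for the simulation

def observed_cfr(n_cases: int) -> float:
    """Simulate n_cases infections and return the observed fatality rate."""
    deaths = sum(random.random() < TRUE_CFR for _ in range(n_cases))
    return deaths / n_cases

small = [observed_cfr(100) for _ in range(10)]      # ten tiny outbreaks
large = [observed_cfr(100_000) for _ in range(10)]  # ten large outbreaks

# Small samples swing wildly around 2%; large samples cluster tightly.
print(min(small), max(small))
print(min(large), max(large))
```

Running this, the 100-case “countries” produce observed rates scattered far from 2%, while the 100,000-case ones all land very close to it — the noise that the 10,000-case cutoff is meant to exclude.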

So, when we do that, how well has the US done? There’s actually a lot of variation in these data, ranging from Singapore with a case-fatality rate <0.1% to Italy with a 13% fatality rate. The mean value is 3.1%, but because of a few extremes like Italy, you could make the case that the median is more appropriate, and it is 2.4%. So, how does the US compare? Its case-fatality rate is 3.0%, which is extremely average. It’s ever so slightly better than the mean value, and slightly worse than the median value, but either way it’s pretty close to the average. Not terrible, but also not great.
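The mean-versus-median point is easy to see with a toy data set (these numbers are invented, but include one Italy-like extreme value):

```python
import statistics

# Hypothetical case-fatality rates as fractions; 0.13 mimics an
# Italy-like outlier at the top of the distribution.
cfrs = [0.001, 0.01, 0.015, 0.02, 0.024, 0.025, 0.03, 0.035, 0.04, 0.13]

print(statistics.mean(cfrs))    # ≈ 0.033 — dragged upward by the outlier
print(statistics.median(cfrs))  # ≈ 0.0245 — barely affected by it
```

One extreme value pulls the mean well above the median, which is exactly why the median can be the fairer benchmark for a skewed distribution like this one.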

At this point, you may be thinking, “fine, the US did not do a good job of containing the virus, and it has done an average job of treating cases, but it still did better than most European countries,” but there is more going on here than is revealed by the crude percentages. We also have to consider confounding factors: things other than the variable we are interested in that vary among the groups we are comparing (countries in this case). It is obviously true that there are many differences among these countries other than simply how they treated COVID-19 patients.

To give one obvious example, as I perused these data and looked at the lists of European countries that people kept saying the US did better than, I realized that most of those countries have older populations than the US does. We know that there is a strong relationship between death from COVID and age, with the elderly being far more likely to die following an infection. So, we’d naturally expect countries with older populations to have more deaths per case load (i.e., population age is a confounding factor).


Correlation between median population age of a country and its case-fatality rate.

To actually examine this, I ran a regression between the median age of the population in each country and the case-fatality rate. Unsurprisingly, there is a statistically significant positive relationship (P = 0.003). In other words, just as we’d expect, countries with older populations have higher case-fatality rates on average, and, as I said, the European countries that have higher case-fatality rates than the US tend to have older populations. Thus, population age is at least part of the reason why the US has a lower case-fatality rate than many European countries.
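For readers who want to see the mechanics of such a fit, here is a sketch of an ordinary least-squares regression in plain Python. The (median age, case-fatality rate) pairs are invented for illustration; the regression described above used real country data:

```python
# Ordinary least squares:
#   slope = sum((x - mean_x) * (y - mean_y)) / sum((x - mean_x)**2)
#   intercept = mean_y - slope * mean_x
ages = [28, 32, 35, 38, 41, 44, 47]                       # median ages (invented)
cfrs = [0.012, 0.018, 0.022, 0.030, 0.034, 0.041, 0.047]  # CFRs (invented)

n = len(ages)
mean_x = sum(ages) / n
mean_y = sum(cfrs) / n

cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, cfrs))
var_x = sum((x - mean_x) ** 2 for x in ages)

slope = cov_xy / var_x
intercept = mean_y - slope * mean_x

print(slope > 0)  # True: older populations predict higher fatality rates
```

A positive slope is the regression-level statement of the pattern in the scatter plot: as median age rises, so does the expected case-fatality rate.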

Now you may be wondering how much of a role population age is playing, so to get at least a crude answer to that, let’s dig a bit deeper into the scatter plot. The closer points are to the line, the more they conform to the central tendency of the data (by definition). In other words, when points are right on the line, then the general relationship between the X and Y variables is doing a good job of explaining those points. When points are further from the line, then other factors are at play and are explaining some of the variation in the data (the vertical distance between a point and the line is called the “residual” and is the variation that is not explained by the relationship between X and Y). For the total data set here, the R² value (the coefficient of determination) is 0.092, which indicates that 9.2% of the variation in the entire data set is explained by the relationship between age and case-fatalities. Some countries (like the US) are very close to the line, whereas others (like Italy) are much further. In other words, we know that median population age is explaining some of the variation in the data, and the US is close to where we’d expect it to be, based on that factor. So, age does a good job of explaining why the US is where it is on this graph.

To flip what I mean by that, we can use the equation of this line to predict a country’s case fatality rate based on its population age. The equation of this line is y = 0.0009x + 0.0012 and the median age for the US is 38.5 (x). So, the predicted case-fatality rate based entirely on population age is 3.6%. Its actual case-fatality rate is 3.0%. Thus, the actual case-fatality is lower than expected (based on this single factor; there are others) but not by a huge amount, and overall, population age does a fairly good job of predicting where America falls.
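The prediction in the paragraph above is just a plug-and-chug with the fitted line:

```python
# Fitted line from the regression: y = 0.0009x + 0.0012
slope, intercept = 0.0009, 0.0012
us_median_age = 38.5

predicted_cfr = slope * us_median_age + intercept
print(predicted_cfr)  # ≈ 0.0359, i.e., about 3.6%, vs an actual rate of 3.0%
```

The small gap between the predicted 3.6% and the observed 3.0% is the residual for the US on the scatter plot.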

What about the rest of the variation? Well, there are lots of other confounding factors. Things like high population density (which results in very rapid, very localized outbreaks that overwhelm health-care systems) can have a huge impact. Further, some of the variation will inevitably be due to chance.

So, when we add all of that up, how has the US done at actually preventing deaths once people become infected? It’s hard to precisely rank it without doing a full, in-depth statistical analysis that takes all confounding factors into account, but based on the available data, it’s done OK at treating patients once they become infected, but not exceptional. It has had an average response, and we know that a lot of the variation in the data is explained by confounding factors like age (also, again, that is just for treating people once they became infected; preventing infections is another story).


In short, you cannot use case-fatality rates to argue that a country did well or poorly at containing the virus and preventing its spread, because that metric shows what happened when people became infected, rather than reflecting the proportion of the country that became infected. It is possible for 100% of a population to be infected with COVID and still have a good case-fatality rate. Further, the USA has an average case-fatality rate. It's not great, but it's not terrible either. However, these comparisons are inherently problematic because there are many confounding factors. Things like the median age of a population have an effect, and many of the countries that have worse case-fatality rates than the USA also have older populations than the USA. We need really rigorous statistical analyses that measure and account for all of the confounding factors to get a clearer picture of why case fatalities are high in some countries and low in others.

Finally, I want to stress again that none of this is political. These are simple facts. You can certainly use those facts to make political arguments, but the facts themselves are not political. They are objective statements of reality.

Note: Although case-fatalities are not a good metric of how well a virus was contained, they can be influenced by the course a virus took in the country. A very large, localized outbreak that overwhelms the healthcare system will have a higher case-fatality rate than an outbreak that is very spread out (this is why there was so much focus on flattening the curve). However, it is still not a good metric for all the reasons I've listed, and because there are many scenarios in which it is influenced by the outbreak's course, but not in a way that reflects how well the outbreak was contained. Consider, for example, one country that had a single, localized outbreak that was contained and didn't spread beyond that area, but did overwhelm the resources in that area, and compare that to another country that never had that sort of extreme local outbreak, but failed to contain the virus and let it infect most of the country. The former clearly did better at containing the virus and will have far fewer cases and deaths per capita, but will also likely have a higher case-fatality rate.
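The two-country comparison in that note can be put into numbers. Everything here is hypothetical, chosen only to show that the better-contained outbreak can carry the worse case-fatality rate.

```python
# Two hypothetical countries with identical populations (all numbers invented).

# Country A: one severe localized outbreak; contained, but it overwhelmed
# the local health system, so a larger share of the infected died.
a_cases, a_deaths = 50_000, 2_500

# Country B: never overwhelmed anywhere, but failed to contain the virus.
b_cases, b_deaths = 3_000_000, 60_000

population = 10_000_000

a_cfr = a_deaths / a_cases                    # 5.0%
b_cfr = b_deaths / b_cases                    # 2.0%
a_deaths_per_capita = a_deaths / population   # 0.025% of the population
b_deaths_per_capita = b_deaths / population   # 0.6% of the population

# Country A contained the virus far better (24x fewer deaths per capita),
# yet its case-fatality rate is 2.5x worse than Country B's.
```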

Update: As several people mentioned on social media, it is also worth pointing out that case-fatality rates are sensitive to the level of testing employed. When little testing is done, it often results in a high case-fatality rate because many people with COVID aren't counted as cases. So, for case-fatality rates to be accurate, you need broad testing. Additionally, making comparisons can be complicated by differences in testing procedures and standards.
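A quick hypothetical calculation shows how strong this testing effect can be: hold the outbreak and the deaths fixed, and vary only how many infections actually get counted as cases.

```python
# One outbreak, two testing regimes (all numbers invented).
# Deaths are detected either way; mild infections are missed when testing is sparse.
deaths = 1_000

cases_broad = 80_000    # broad testing finds most infections
cases_sparse = 10_000   # sparse testing finds mainly the sickest patients

cfr_broad = deaths / cases_broad     # 1.25%
cfr_sparse = deaths / cases_sparse   # 10.0%

# Same virus, same deaths: an 8-fold difference in the apparent
# case-fatality rate, driven entirely by how much testing was done.
```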

Data sources

  • The data on infections per capita were obtained from Worldometer on 05-Sep-2020
  • Data on population ages were obtained from on 04-Sep-2020
  • Data on case-fatalities were obtained from Johns Hopkins on 04-Sep-2020
    (both Worldometer and Johns Hopkins have case-fatality data, which are in close agreement with each other, but I considered Johns Hopkins to be more reputable and therefore used their data)


Posted in Uncategorized | Comments Off on Case-fatality rates don't indicate how well a country contained COVID

COVID comorbidities are not analogous to car crashes: Debunking the 6% mortality claim

Recently, the CDC released data on COVID comorbidities, including data showing that 6% of COVID-19 deaths listed only COVID on the death certificate, while the remaining 94% of COVID deaths also listed other conditions. Many have jumped on this as proof that COVID is far less deadly than previously claimed, arguing that most reported COVID deaths are actually just people who died of some other condition while happening to have COVID. In particular, I keep seeing an analogy of someone who has COVID getting hit by a car, with the death then being attributed to COVID. This is a very bad analogy (and a faulty argument in general) that horribly mischaracterizes these data. So, I want to briefly explain what is actually going on.

First, you need to realize that when a patient dies, doctors list all of the factors that contributed to the death. This often includes multiple conditions, at which point we call them “comorbidities.” In the case of COVID, two main things are happening. First, in some cases, people have a pre-existing condition that interacts with COVID and makes them more likely to die from COVID. Second, COVID leads to conditions that then contribute to the death.

Let’s start with the pre-existing condition situation. We know that people with some health conditions are more prone to die from COVID than people without those conditions, because those conditions make them more vulnerable to COVID. Thus, there is an interaction between COVID and the pre-existing condition, with both contributing to the death. Importantly, however, in most cases, the person would not have died at this particular point in time had it not been for COVID. In other words, something like an existing respiratory problem makes people more sensitive to COVID, resulting in a higher death rate when infected with COVID. That does not mean that COVID wasn’t a key factor in their deaths. It is simply that it was not the only factor.

By way of analogy, imagine that someone with asthma gets trapped in an environment with lots of smog, ultimately resulting in an inability to breathe and subsequent death. What killed them? Well, both the asthma and the smog played a role. The smog was a serious problem because of the asthma, but conversely, they could have kept on living with the asthma had it not been for the smog. If we could have prevented them from being exposed to the smog, they would have lived.

So yes, for many people, COVID is fatal because of interactions with other conditions, but that still means that COVID was fatal. It still means that they would have lived had it not been for COVID.

To give one final analogy, imagine a disease that is far deadlier in men than in women. Imagine that we look at the mortality statistics from that disease and see that 94% of the deaths were men. It would clearly be absurd to say, "they didn't die from the disease, it was being male that killed them." That would obviously be nuts. It would be apparent to everyone that there was an interaction between the disease and sex that makes men more sensitive to it. Likewise, there are interactions between many pre-existing conditions and COVID that make people with those conditions more sensitive to COVID and more likely to die from it.

On the flip side, many of the reported comorbidities are actually caused by COVID. Look at the data from the CDC. The single most common comorbidity category* (68,004) was influenza/pneumonia. These diseases are often secondary infections that happen as a result of viral infections. Similarly, respiratory failure was present in 54,803 cases. Again, this is something that we know COVID causes. So, many of these comorbidities are actually caused by COVID!

*Technically, the most common category was "other," which includes a very wide range of conditions that were grouped together because each was too uncommon to merit its own category. Thus, the influenza/pneumonia category was the most common category for a discrete disease, rather than a large hodgepodge of conditions.

By way of analogy, the argument being made by science deniers is no different from someone bleeding out from a gunshot wound, then someone else saying, "bullets aren't dangerous, because she died from blood loss, not the bullet." That's obviously a dumb argument. She only lost the blood because of the bullet. Likewise, many people are only dying from conditions like respiratory failure or heart failure because of COVID-19.

It is also worth noting that, as is often the case, this argument is straight out of the anti-vaccine playbook. For diseases like measles, secondary infections with diseases like pneumonia often contribute to children’s deaths. Thus, anti-vaccers incorrectly argue that measles isn’t deadly because the pneumonia is what killed them. Just like COVID and my gunshot example, however, they only developed pneumonia because of measles.

So now, with all of that in place, let's circle back to the analogy of someone getting hit by a car. I like analogies a lot. I have frequently argued that they are valuable for testing whether consistent reasoning is being applied. However, as I have explained before, for analogies to be useful, they must follow the same logical structure as the original argument. That is very clearly not the case here. Someone who happens to have COVID getting hit by a car is a very, very different thing from either someone with a pre-existing condition that predisposes them to complications from COVID dying from an interaction between the condition and COVID, or COVID itself causing a secondary condition.

Do you see the difference? The vast majority of comorbidities listed are directly related to COVID either as a factor that exacerbates the situation or as a result of COVID. In contrast, the car accident has nothing to do with COVID. They are not analogous, and anyone who would use such a clearly terrible argument obviously does not know what they are talking about.

Having said all of that, there are almost certainly some cases in this database where COVID truly wasn't the cause. There are probably some cases where someone who had COVID just happened to have a heart attack that would have happened without the COVID, or where someone who had COVID was in an accident, but when you start looking closely at the data, those are clearly a very tiny minority, and the vast majority of comorbidities relate to COVID. Indeed, beyond these data and all the data looking at how COVID attacks the body, we also know that there have been far more deaths this year in the US than there were during the same time period last year (Weinberger et al. 2020). Indeed, there are more excess deaths than the total number of reported COVID deaths. Understanding exactly what that means is very complicated because there are many contributing factors. We may be underestimating COVID deaths, but also, there may be increased deaths due to factors like people not seeking medical help for conditions for which they normally would seek help. Conversely, things like a decrease in car accidents could pull the number in the other direction. However, several pieces of evidence (such as a spike in excess deaths in places that had large outbreaks with many reported COVID deaths; e.g., New York City) indicate that COVID is a key factor in the number of excess deaths seen this year, and it is very unlikely that we are grossly overestimating the COVID mortalities.

As others have pointed out, the correct way to look at this 6% figure is not that only 6% of reported COVID deaths were actually from COVID. Rather, it means that of all the people who died from COVID, 6% did not have any other reported conditions. In other words, these data show that some people are more vulnerable to COVID than others due to existing health conditions (which we already knew) and COVID often results in secondary problems which contribute to patients’ demise (again, which we already knew). Stop trying to twist science to fit your personal agenda and look rationally at the facts. Think critically and don’t blindly believe something just because you saw it on Facebook or Twitter.
