Understanding the reported risks of medicines, foods, toxic chemicals, etc.

[Image: the board game Risk]

I mostly used this image because I was stumped for a good picture to use for this post, but the board game Risk can actually be a great exercise in probabilities.

We are constantly bombarded with news reports and claims like, “A new study found that chemical X increases your risk of disease Y by 100%” or “doing X makes you twice as likely to have Y,” but what do those numbers actually mean? People are notoriously bad at assessing risk, and we often perceive risks to be much greater than they really are (or sometimes less than they really are). Part of the problem is the way that we talk about risk. Describing it in terms of percent changes or multiples is actually a really bad idea because it can be very misleading. When you describe risk as a percent change, you can make a very large risk seem very small or you can make a very small risk seem very large. In fact, in isolation, a statement like, “X increases your risk of Y by 50%” is fairly meaningless. Therefore, I want to explain how you should assess risk, and what numbers like, “a 50% increase” actually mean.

How general are the risks?
Whenever you hear about some new risk that scientists have discovered, you should always look very carefully to check for caveats. It is often the case that news articles make very general claims like, “drug X increases the risk of cancer by 100%,” when the research actually showed something along the lines of, “in men of a certain age range, who are also taking Y and have pre-existing condition Z, the risk of one very particular form of cancer increases by 100%.” It is not valid to generalize a result like that to the entire population.

Also, you need to check the relationship between the dose and the increase in risk. Never forget that the dose makes the poison. Essentially everything is toxic in a high enough dose and safe in a low enough dose. So it may be that chemical X increases your risk of certain diseases when consumed at a high dose, but that does not automatically mean that it is dangerous at a low dose.

Finally, you should of course make sure that the study which produced the risk estimate is in fact reliable (i.e., was peer-reviewed, had a large sample size, used a robust design that was capable of establishing causation, etc.).

Percent changes are relative
When you hear claims like, “X increases Y by 50%,” you are typically being presented with a percent change. The calculation for this is fairly simple. You take the risk with X, subtract the risk without X, divide by the risk without X, then multiply by 100. This is basically a measure of the relative difference between the risk with X and the risk without X. However, percent changes are funny things because they are strongly influenced by your starting point. If your starting point is very small, then even a small actual change will yield a large percent change. Conversely, if your starting point is very large, even a sizable actual change will yield a fairly small percent change.
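
If you find formulas easier to read as code, here is a minimal Python sketch of that calculation (the function name and the example numbers are just my illustrations):

```python
def percent_change(risk_without, risk_with):
    """Relative (percent) change: the number behind claims like 'X increases Y by N%'."""
    return (risk_with - risk_without) / risk_without * 100

# A tiny absolute change from a tiny baseline yields a huge percent change...
print(percent_change(0.000001, 0.000002))  # 100.0 (a "100% increase")

# ...while a sizable absolute change from a large baseline looks modest.
print(percent_change(0.50, 0.51))  # ~2.0 (a "2% increase")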

Let me illustrate it this way. Let’s say that we have four diseases (A, B, C, and D), and each of them is associated with the following risk (i.e., these are the rates at which they occur): disease A = 1 in 10, disease B = 1 in 100, disease C = 1 in 1,000, and disease D = 1 in 10,000. To put this another way, out of a population of 100,000 people, 10,000 will have disease A, 1,000 will have disease B, 100 will have disease C, and 10 will have disease D (on average). Now, let’s say that scientists discover that chemical X increases your odds of getting each disease by 100%. So, if you use chemical X regularly, you are twice as likely to get each disease. That means that your actual risk of getting each disease has gone up by the same amount, right? Wrong! With X, each disease has the following rates (risks): disease A = 2 in 10, disease B = 2 in 100, disease C = 2 in 1,000, and disease D = 2 in 10,000. That means that in our population of 100,000 people, 20,000 will have disease A, 2,000 will have disease B, 200 will have disease C, and 20 will have disease D. Disease A increased by 10,000 people, whereas disease D only increased by 10 people, even though both rates went up by 100%! In other words, describing these risks as percent changes is extremely misleading, because the actual risk of disease A goes up by a much greater amount than the actual risk of the other diseases.
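
If you want to verify those numbers yourself, here is a quick Python sketch of the same example (the layout is my own; the rates come straight from the text):

```python
# Baseline rates for the four hypothetical diseases
baseline = {"A": 1 / 10, "B": 1 / 100, "C": 1 / 1_000, "D": 1 / 10_000}
population = 100_000

for disease, rate in baseline.items():
    doubled = 2 * rate  # the same 100% relative increase for every disease
    extra = (doubled - rate) * population
    print(f"Disease {disease}: {rate * population:,.0f} -> "
          f"{doubled * population:,.0f} cases (+{extra:,.0f})")

# Disease A gains 10,000 extra cases, while disease D gains only 10,
# even though both rates "went up by 100%."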

Making mountains out of molehills
Understanding how percent changes work is extremely important, because people often present risks as percent changes in order to frighten or manipulate people (or, in the case of the media, to grab attention). Imagine, for example, that there is a rare type of cancer (Y) that occurs in 1 in 1,000,000 people, and scientists discover that drug X increases that to 3 in 1,000,000. It’s pretty easy to imagine the headlines, “Drug X increases cancer Y by 200%!” or “Using X makes you 3 times as likely to develop cancer Y!” It’s also easy to imagine people with an ideological opposition to drug X pouncing on this result and touting it as evidence of the evil nature of “Big Pharma.”

Now that you understand percent changes, however, the problem with that should be very obvious. Without X, there is only a 0.0001% chance that you will develop cancer Y (assuming that the disease is truly random and everyone has an equal probability of getting it). With drug X, your odds only increase to 0.0003%. So even though your odds went up, it is still extraordinarily unlikely that you will develop cancer Y, and although it is technically true that your odds increased by 200%, describing the increase that way is clearly misleading. Further, as I will explain later, there may be huge benefits of X that far outweigh that minor increase in risk.

Conversely, you can use percent differences to mask really important changes when you are talking about things that are already common. Let’s say, for example, that there is a disease (W) which affects 5 out of every 10 people, and scientists discover that drug Z increases that to 5.1 out of every 10 people. That’s only a 2% increase, and it probably wouldn’t make major headlines, but in reality, it is far more significant (in terms of the number of people affected) than the changes that drug X made to cancer Y in our previous example. You see, if we have a population of ten million people, then at a rate of 1 in 1,000,000, there will only be 10 cases of cancer Y, and using drug X only increases that to 30 cases. In contrast, there will be 5,000,000 cases of disease W, and drug Z will increase that to 5,100,000 cases! Hopefully now you see why percent differences are so misleading. Drug X caused a 200% increase, but only resulted in 20 extra cases per 10,000,000 people; whereas drug Z only caused a 2% increase, but that resulted in 100,000 extra cases!
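
Here is a short Python sketch that runs both examples side by side (the variable names are mine, just for illustration):

```python
population = 10_000_000

# Drug X: rare cancer Y goes from 1 in 1,000,000 to 3 in 1,000,000 (a 200% increase)
extra_from_x = (3 / 1_000_000 - 1 / 1_000_000) * population
print(round(extra_from_x))  # 20 extra cases

# Drug Z: common disease W goes from 5 in 10 to 5.1 in 10 (only a 2% increase)
extra_from_z = (5.1 / 10 - 5 / 10) * population
print(round(extra_from_z))  # 100,000 extra cases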

Comparing risks
The situation becomes especially complicated and confusing when something has multiple effects, and it is here, more than anywhere else, that people struggle with risk assessment. Let’s say, for example, that there are two equally horrible diseases (disease 1 and disease 2), both of which are lifelong once they are contracted. Disease 1 is rare and affects 1 in 100,000, whereas disease 2 is common and affects 1 in 100. Now, scientists develop pharmaceutical X which lowers your chance of disease 2 by a mere 1%, but it increases your chance of disease 1 by 100%. Should you take drug X? An enormous number of people would say “no,” and it is extremely easy to imagine the arguments that X is horrible because it doubles your chance of disease 1. In reality though, you absolutely should take drug X.

Let’s take a quick look at the math (note: there is a far more detailed explanation of the math at the end of the post). Without drug X, there is a 0.001% chance of you developing disease 1, and there is a 1% chance of you developing disease 2. Thus, there is a 1.00099% chance that you will develop at least one of those two diseases (see the math at the end of the post). Now, if you use drug X, there is a 0.002% chance of getting disease 1 and there is a 0.99% chance of getting disease 2. This means that with drug X, you have a 0.99198% chance of getting at least one of these two diseases. So using drug X actually lowers your chance of disease! To put this in a way that makes more sense to most people, if we have a population of 10,000,000 people, then without drug X, 100,099 people will become ill from these two diseases, but with drug X, that number will be reduced to 99,198. In other words, drug X would prevent 901 cases of disease in a population of 10,000,000.
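
For anyone who wants to check those figures, here is a minimal Python version of the calculation (the full reasoning is in the appendix; the helper function is just my framing of it):

```python
def p_at_least_one(p1, p2):
    """P(at least one of two independent events) = 1 - P(neither occurs)."""
    return 1 - (1 - p1) * (1 - p2)

without_x = p_at_least_one(1 / 100_000, 1 / 100)   # 0.0100099 -> 1.00099%
with_x = p_at_least_one(2 / 100_000, 0.99 / 100)   # 0.0099198 -> 0.99198%

population = 10_000_000
print(round(without_x * population))             # 100,099 ill without drug X
print(round(with_x * population))                # 99,198 ill with drug X
print(round((without_x - with_x) * population))  # 901 cases prevented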

Part of the problem with risk assessment is obviously psychological. Even though mathematically we are better off with drug X, the fact that using drug X increases our odds of getting disease 1 creates a psychological barrier. It feels safer not to take it, even though in reality you are more likely to stay healthy if you use drug X. This once again demonstrates why gut instincts are unreliable, and why you should instead follow the science and mathematics.

Real world importance
I have been using hypothetical examples to illustrate the concepts, but risk assessment is a very important part of real life. Essentially everything has a risk associated with it, and failing to properly assess risks can have serious consequences.

Vaccines are a great example of this. Indeed, the entire anti-vaccine movement can aptly be described as a massive misunderstanding of risk. Like all real medications, vaccines do admittedly have side effects (i.e., they increase your risk for certain problems), but serious side effects are very rare. So the risk associated with them is very low. In contrast, if we stop vaccinating and allow infectious diseases to return, then the risk of injury or even death from those diseases is extremely high (i.e., the risk associated with not vaccinating is greater than the risk associated with vaccinating).

Vaccines and infectious diseases are more complicated than the simplistic examples that I have been using thus far, which makes calculating exact numbers difficult, but we can get a pretty good idea of the risk involved in not vaccinating by looking at things like the decline in childhood mortality rates, the death tolls prior to vaccines, studies that have documented the reduced mortality rates that result from vaccines (Clemens et al. 1988; Adegbola et al. 2005; Richardson et al. 2010), and studies which have estimated that vaccines prevent several million deaths globally each year (Ehreth 2003), including thousands of deaths within the US (Whitney et al. 2014). All of these lines of evidence clearly demonstrate that not vaccinating comes with an extremely high risk. Indeed, even if some of anti-vaccers’ most absurd claims about vaccines were true (which they aren’t), we would still be better off with the vaccines.

Again, I think that a key part of the problem here is psychology. It is true that vaccinating your child involves risk, and it is true that if you let a doctor vaccinate them, they could be injured by that vaccine (again, serious injuries are extraordinarily rare). To many parents, that is simply unacceptable. They refuse to do something which might hurt their children. Indeed, one of their most common arguments is, “no vaccine is 100% safe.” This desire to protect their children is clearly understandable and even admirable, but in this particular case, it is misguided, because while it is true that vaccinating has risks, it is equally true that not vaccinating has risks, and our modern society has become so removed from the risks associated with not vaccinating that we no longer consider them. A proper understanding of risk, however, shows us that we absolutely have to consider them, and that it is actually safer to vaccinate than to avoid vaccines.

Note: you could argue that if you don’t vaccinate but you live in a community with a high vaccination rate, then your personal risk of a vaccine-preventable disease is still quite low. This is technically true, but it misses several key points. First, that only works if everyone else vaccinates, and every time that someone else refuses to vaccinate, your risk goes up. So if this is your reasoning, then you really shouldn’t be promoting anti-vaccine views, because doing so increases your risk. Second, this line of reasoning is absurdly selfish, because you are expecting everyone else to take risks so that you don’t have to (again, to be clear, the risk from vaccinating is very, very low).

Conclusion/take home messages
Risk is an admittedly complicated topic, and my hypothetical examples don’t even begin to capture the full complexity of the problem. For example, in most real situations, when you are comparing two risks, you have to consider both the amount of risk and the severity of the thing being risked (e.g., a 50% risk of death is worse than a 50% risk of being ill). So I have no delusions of having thoroughly addressed the problem, but here are a few key take-home messages to help you with risk assessment.

First, you need to check the quality of the study that produced the risk estimate, as well as checking the doses at which the risks were reported. You also need to make sure that the results are actually widely applicable and don’t simply apply to a small subset of the population.

Second, you always need to look at the actual risk, not just the percent change. So, when you hear some alarming report about a drug, chemical, food, pesticide, etc. increasing your risk by a massive percent, don’t overreact. Stop and look at what the actual risk is without the substance in question, then look at what the actual risk is with it. Even tiny changes in absolute risk can appear very large when expressed as a percent.

Third, don’t get sucked into thinking that doing X has risks, but not doing X is risk free. There are generally risks associated with both choices, and you need to compare them. For example, if someone says, “we shouldn’t use pesticide X because of its risks,” you need to consider not only the risks of pesticide X, but also the risks of the alternatives to X. Similarly, when considering medications, you need to consider not only the risk associated with taking the medication, but also the risk associated with not taking it. Only after you have considered the actual risk (not the percent change) associated with both choices can you make a truly informed decision.

Note: I have tried to double check all of my math, but it is entirely possible that I lost a 0 somewhere or something stupid like that. So if you spot an arithmetic mistake, please let me know so that I can fix it.

Appendix: Understanding the math
In the main body of the post, I did not explain the math in great detail because I didn’t want to confuse people (or just make them bored), but understanding the math of probabilities is important, so let’s look more closely. First, it’s important to understand that probabilities can be expressed in several ways: as a number of cases within a defined population (e.g., 1 in 1,000), as a probability ranging from 0 to 1 (0 = no chance, 1 = a 100% chance), or as a percentage. You can also refer to any of these as the odds, chance, or probability of the event in question occurring. Converting between these formats is actually very straightforward. To go from the number of cases to a decimal, simply divide. For example, 1 in 1,000 is the same thing as 1/1,000 = 0.001. To go from a decimal to a percent, simply multiply by 100 (e.g., 0.001*100 = 0.1%). So, 1 in 1,000, 0.001, and 0.1% all mean exactly the same thing. To go in the opposite direction, simply reverse the operations (i.e., to go from a percent to a decimal, simply divide by 100, and to go from a decimal to a number in X, simply multiply X by your decimal). Also, you don’t need to use 1 in X as your starting point. For example, 3 in 37, 0.08108 (3/37), and 8.108% (0.08108*100) all mean the same thing. To actually calculate the odds of several events occurring, however, we need to use the decimal format.
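
Here is what those conversions look like in Python, purely as an illustration:

```python
# "1 in 1,000" expressed three equivalent ways
cases, group = 1, 1_000
as_decimal = cases / group     # 0.001
as_percent = as_decimal * 100  # 0.1 (i.e., 0.1%)

# Reversing the conversions
decimal_again = as_percent / 100     # 0.001
cases_again = decimal_again * group  # 1.0, i.e., 1 in 1,000

# A starting point other than "1 in X" works the same way
print(3 / 37)  # 0.08108..., i.e., about 8.108%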

Thus, for our calculations of disease rates under the “comparing risks” section, we can do the following:
Disease 1 without X = 1 in 100,000, or 0.00001, or 0.001%
Disease 1 with X = 2 in 100,000, or 0.00002, or 0.002%
Disease 2 without X = 1 in 100, or 0.01, or 1%
Disease 2 with X = 0.99 in 100, or 0.0099, or 0.99%
(note: going from 0.00001 to 0.00002 is a 100% increase, and going from 0.01 to 0.0099 is a 1% decrease, as described in the post).
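
A two-line check of those relative changes (using the same percent-change arithmetic as in the main post):

```python
print((0.00002 - 0.00001) / 0.00001 * 100)     # 100.0 -> a 100% increase
print(round((0.0099 - 0.01) / 0.01 * 100, 6))  # -1.0 -> a 1% decrease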

Many of you are probably familiar with the product rule. This states that if you have two independent events, then the probability of both occurring is the product of the probabilities of each occurring independently. For example, if we are going to flip two coins and we want to know the probability of getting heads on both flips, then we simply take 0.5 times 0.5, because the odds of getting heads from coin 1 are 50% (0.5) and the odds of getting heads from coin 2 are also 50% (0.5). Thus, the odds of getting both heads are 0.5*0.5 = 0.25 (a 25% chance). In our disease example, however, we don’t want the odds of both diseases occurring. Rather, we want the odds of at least one occurring. Calculating that is a bit less straightforward. What we need to do is calculate the odds of neither happening, then subtract that from 1.

Let me use coins again to illustrate. Suppose that we want to know the odds of getting at least one head (rather than the odds of getting both heads). The only way to get at least one head is if we don’t get both tails (i.e., you can think of getting at least one head as the opposite of getting both tails). Thus, we calculate the odds of getting both tails (0.5*0.5 = 0.25) and subtract that from 1 (1-0.25 = 0.75). Thus, there is a 75% chance of getting at least one head. If you think about what we just did there, it should make sense. There are 4 possible combinations of heads and tails (HH, HT, TH, TT), and the only one of those that does not involve getting any heads is two tails (TT). Thus, if there is a 25% chance of getting two tails, then there must be a 75% chance of a result other than two tails, and any result other than two tails will involve at least one head. Thus, there must be a 75% chance of getting at least one head.
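
The coin example is small enough that we can verify both calculations by brute force; here is a sketch (the enumeration via itertools is my addition):

```python
from itertools import product

p_heads = 0.5

# Product rule: probability that both independent flips come up heads
p_both_heads = p_heads * p_heads  # 0.25

# "At least one head" = 1 - P(both tails)
p_at_least_one_head = 1 - (1 - p_heads) * (1 - p_heads)  # 0.75

# Brute-force check: enumerate HH, HT, TH, TT and count outcomes with a head
outcomes = list(product("HT", repeat=2))
print(sum("H" in o for o in outcomes) / len(outcomes))  # 0.75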

Now, for our disease example. Without X, the odds of getting disease 1 are 0.00001, which means the odds that you won’t get disease 1 are 1-0.00001 = 0.99999. Similarly, the odds of getting disease 2 are 0.01, which means that the odds that you won’t get disease 2 are 1-0.01 = 0.99. Therefore, the odds that you won’t get disease 1 or disease 2 are 0.99999*0.99 = 0.9899901. In other words, there is a 98.99901% chance that you will not get either disease. Just like in our coin example, anything other than getting neither disease must involve getting at least one disease. Thus, the odds of getting at least one disease without X are 1-0.9899901 = 0.0100099. You can then do the exact same thing for the diseases when X is in use, and (assuming that I did my math correctly) you should find that with X, the odds of getting at least one disease drop to 0.0099198.

