Dear Americans, stop using China and India as climate change scapegoats

Recent comments on a climate change post that illustrate this faulty argument.

I spend a lot of time on this blog debunking bad arguments, and I have previously devoted a lot of effort to debunking bad arguments against man-made climate change. There is, however, one extremely common argument that I have not previously addressed. I’ve been reluctant to deal with it because it is an argument about what we should do, rather than the facts of climate change. Nevertheless, it is extremely problematic and prevalent. Therefore, I think it is worth discussing.

The argument in question is made by Americans and states that there is no point in America reducing its greenhouse gas emissions because developing countries (India, China, etc.) aren’t going to change their practices (you can see several examples of this argument in the screenshot of recent comments on my Facebook page). Underpinning this argument (and sometimes directly stated) is the assumption that America does a better job of dealing with emissions than other countries do, and that countries like China are the real guilty parties. This is one of those rare arguments where every aspect of it is wrong. The premises are incorrect, and even if they were true, the conclusion wouldn’t follow from them. So, let’s talk about this for a minute.

A comparison of absolute greenhouse gas emissions among the biggest contributors. Any country that produced at least 5% of the global emissions is shown, with all other countries grouped into either Africa, Europe (without Russia), South America, or Other. The data are 2012 data from the European Commission Joint Research Centre.

First, we need to be clear about just how much America contributes to the world’s greenhouse gas emissions, because it is substantial. The USA produces roughly 14% of the world’s total greenhouse gas emissions. By comparison, the entire continents of Africa and South America combined produce only 11%. So those developing countries that proponents of this argument are so worried about aren’t the big contributors. Even when you combine a huge number of them, you don’t match the emissions of the US. Indeed, India, which is one of the countries that nearly always gets singled out by this argument, only produces 7% of the world’s greenhouse gases. To be clear, that’s substantial, and it is a problem, but trying to shift the blame from the USA to India is insane, because the US produces way more greenhouse gases than India does. To be fair, China does produce a lot more than the US in terms of absolute numbers, but, as I’ll elaborate on below, China has a substantially larger population than the US.

If we really want to understand how much of a role each country is playing in climate change, we also need to look at the data per capita (i.e., corrected for population size) rather than just the raw numbers. Looking at these data can become messy, because there are a bunch of tiny countries that, for various reasons, produce a lot of emissions per person. As a result, they score very high on per capita emissions, but are only contributing a tiny amount in absolute terms. The sensible approach is, therefore, to look at per capita emissions among the countries that are large contributors to the absolute amount of emissions. For the sake of this post, I set that threshold as countries that produce at least 1% of the world’s total greenhouse gas emissions. As you can see from the graph below, only 19 countries meet that criterion, and most of them aren’t countries that people would label as “developing.”

Greenhouse gas emissions per capita for any country that produced at least 1% of the world’s total emissions. I have colored the US, China, and India in red since they are the countries that are usually involved with the argument I am addressing. The data are 2012 data from the European Commission Joint Research Centre.

When looking at the per capita data for those countries, Australia and Canada are the worst offenders, and they absolutely should be held to account for that and should change their practices, but they only produce about 1.3% and 1.6% (respectively) of the world’s total emissions. So they are still fairly small players. The US, on the other hand, is a close third for per capita emissions and, again, has very high total emissions. But what about China and India, the countries that people are so worried about? China is in 11th place, and India is in last place for per capita emissions (among countries that contribute at least 1% of the total emissions). Let me put the numbers this way: China has about 18.4% of the world’s population and produces 23% of the emissions. India has 17.2% of the population and produces 7% of the emissions. In contrast, the US has a mere 4.4% of the population but produces 14% of the emissions! To put that another way, on average, each American produces as many greenhouse gases as 2.2 Chinese people or 7.8 Indians. Again, to be clear, the emissions produced by China and India are a problem. I’m not suggesting otherwise, but stop pretending that they are the problem while simultaneously acting as if the US is some magical greenhouse gas utopia. It isn’t. The US is one of the worst offenders and produces far more greenhouse gases per person than either China or India.
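You can sanity-check this kind of comparison yourself from the quoted shares alone: dividing a country’s share of world emissions by its share of world population tells you how far above or below the world-average per-capita rate it sits. Here is a minimal sketch using the rounded percentages from this post (because the inputs are rounded, ratios between countries will come out slightly different from figures computed from the underlying per-capita data):

```python
# Rounded shares quoted in this post (2012 EC Joint Research Centre data).
shares = {
    #        (% of world population, % of world GHG emissions)
    "USA":   (4.4, 14.0),
    "China": (18.4, 23.0),
    "India": (17.2, 7.0),
}

# Emissions share divided by population share gives each country's
# per-capita emissions as a multiple of the world average.
for country, (pop, ghg) in shares.items():
    print(f"{country}: {ghg / pop:.2f}x the world per-capita average")
```

This puts the US at roughly 3.2 times the world per-capita average, China at about 1.25 times, and India at about 0.41 times, which matches the ranking described above.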

Now that we have cleared up the actual numbers, let’s talk for a minute about the notion that the US is the only country that is taking action. This is blatantly false. Indeed, if you’ve paid even the tiniest bit of attention to world politics over the past few years, it should be obvious that this is false because of the Paris Agreement. This agreement was signed by China, by India, by all of those developing countries people are so worried about, etc. Do you remember which country backed out of it, though? I seem to recall it being the USA. Other countries (including China and India) are, in fact, investing in renewable energy. Similarly, China is implementing an emissions trading program to try to reduce its emissions. This notion that other countries aren’t acting is simply false.

Additionally, even if America were the only country that changed its actions, a large reduction in America’s emissions would still have global benefits, because, again, it is the second biggest contributor to climate change in absolute terms and one of the largest per capita. People seem to have this notion that climate change is a problem with a binary solution: either we fix it or we don’t. But that is an insane way to view the problem. Global warming is a continuum, and every tenth of a degree of warming matters and makes things worse. So, let’s imagine for a second that America drastically cuts its emissions but no other countries do, or perhaps they do so more slowly. What happens? Well, climate change still happens, but it occurs more slowly and/or doesn’t become as extreme. So, there would still be a global benefit even if America were the only country to take serious action (again, other countries are taking action, often more seriously than the US).

Finally, I want to ask, since when has “other people were doing it too” been a valid excuse for harmful actions? Let’s be clear here: how we should deal with climate change is really an ethical dilemma, not a scientific one. Science can tell us what is causing climate change, what we need to do to limit the warming, what happens if we don’t limit the warming, etc., but it can’t tell us what we should do, because that requires a moral judgement that science can’t make. Science does tell us, however, that the consequences of not taking action will be dire. They aren’t going to end modern civilization, but millions of people will die, and, indeed, thousands have already died as a result of climate change. Further, those negative effects will disproportionately impact the poorest members of our planet, even though they are the ones who contributed the least to the problem. I’m not going to enter into a lengthy philosophical rant here, but for me personally, that scenario is a problem. I personally think that human life has value and there is a moral imperative to minimize the loss of human life. Thus, I think that people need to realize that our actions have consequences, and regardless of what other countries do, the actions of Americans contribute to a problem that costs human lives. Now, maybe that isn’t an issue for you. Maybe you don’t value human life. I’m not here to convince you otherwise, but if you do value human life, then you need to take responsibility for your country’s actions rather than trying to pass the buck to other countries. The fact that another country does something immoral does not justify you or your country doing something immoral.

In conclusion, I want to be clear that I am not attacking America, suggesting that it is entirely responsible for climate change, etc. This is a global problem, and every country needs to take responsibility for its role in it. Further, countries like China and India do play major roles, but so does America, and Americans need to stop trying to shift the blame. It is a fact that America produces a disproportionate level of greenhouse gases and is a major contributor to climate change. It produces the second highest level of greenhouse gases in absolute terms, and per capita, it produces far more emissions than either China or India. So, this notion that other countries are the real culprits is blatantly false. America bears a huge portion of the blame for climate change. Additionally, it is insane to act as if America is the only country that is taking action against climate change, because it is lagging behind many countries, and countries like China and India are, in fact, making changes. There are certainly more changes that need to be made across the board, but again, that doesn’t mean that America shouldn’t do its part. Finally, even if America were the only country taking climate change seriously (which is clearly not the case), that still would not absolve America of its responsibility, and a reduction in America’s emissions would still be beneficial.

Data: I used the data from the European Commission Joint Research Centre. I originally chose this source because it had fairly recent global data for CO2 emissions (2016), but I ultimately decided it would be more meaningful to look at total greenhouse gas emissions (which only go up to 2012 at the time I wrote this). The patterns in both data sets are very similar and would in no way change my arguments, but feel free to play with the CO2 data if you want.


Vaccines don’t “bypass the immune system”

I frequently encounter anti-vaccers who argue that vaccines are bad because they are “unnatural” and “bypass the natural immune system.” This argument is nothing more than an appeal to nature fallacy. Whether or not something is natural has no bearing on whether or not it is safe and beneficial, so I could stop right here, but let’s look at this argument further, because the core premise of this argument is not even true. Vaccines don’t “bypass” the immune system; if they did, they wouldn’t work. Rather, vaccines train your immune system to recognize deadly pathogens before you are exposed to them.

Before I can explain how vaccines train your immune system, you need to understand the basic concepts of how your immune system works. It can be broken into two broad categories: the innate immune system and the adaptive immune system. Your innate immune system is, well, innate. It’s always there acting as your first line of defense. It includes things like your skin (which acts as a barrier to pathogens), mucosal surfaces, and non-specific phagocytic cells such as macrophages. That last qualifier is particularly important: the innate immune system is non-specific. It targets anything that does not belong in your body, but it is not specialized for particular pathogens. Think of it like your basic infantry. It’s a vital part of the military, but for particularly dangerous threats, you often need more specialized troops and weapons. This is where the adaptive immune system comes into play.

The adaptive immune system (aka acquired immune system) provides a targeted response that is specific for a given pathogen. You see, your immune system uses tiny molecules on the surfaces of cells to distinguish friend (your body’s cells) from foe (foreign pathogens). These molecules, known as antigens, are specific for each type of cell, and your body can use them to engineer specialized cells that are specifically intended to fight a particular pathogen. Thus, when your body detects the presence of a novel pathogen (via its antigens), it triggers the adaptive immune system and begins producing B and T cells that are specific for that pathogen. Think of them like your special ops, snipers, stealth bombers, etc.

The catch is that because the adaptive immune system produces cells that are specific for a given threat, it can’t start producing those cells until your body has encountered the threat, and by that point, your body has already been invaded. In other words, it takes time for your body to mount an adaptive immune response, learn to recognize and target the invading pathogen, and produce sufficient numbers of the specialized cells. Meanwhile, despite the best efforts of your innate immune system, the enemy pathogens are replicating, spreading, and amassing a formidable army. Thus, for highly pathogenic diseases, by the time that your adaptive immune system is ready to go, the invaders have already claimed territories and your immune system is going to war, resulting in you being sick and potentially dying.

If you win the war and survive, your body will remember the disease and keep low levels of specialized troops circulating (via memory B and T cells). That way, a specialized force is ready to go if you are ever re-invaded by that pathogen in the future. This is what we know as “natural immunity.” It’s not perfect, and it can wear off over time if your body stops producing and maintaining those specialized cells, but, combined with your innate immune system, it does a pretty good job of protecting you.

So, now that you understand the broad strokes of how your immune system works (it’s obviously far more detailed than what I explained), let’s talk about how vaccines fit into this. Vaccines present your body with the antigens of a given pathogen (usually via a dead or weakened form of the pathogen), along with an adjuvant to stimulate your immune system. This causes your adaptive immune system to mount a response even though you aren’t actually under attack. Thus, your body produces the specialized cells for fighting a given disease even though you don’t have the disease. Think of vaccines like scouting reports from spies that inform generals about the enemy’s plans and movements before the enemy attacks, thus allowing them to plan an appropriate defense. That is fundamentally what a vaccine does. It trains your adaptive immune system to be ready to fight a disease before you actually encounter the disease.

Further, following a vaccination, your body will continue to maintain a reserve of the specialized troops, just like it does after an actual infection. This is what actually protects you from diseases. It’s not the vaccine itself that protects you. Rather, it is the immune cells that the vaccine stimulates your body to produce (along with the antibodies some of those cells produce). Like natural immunity, the immunity from vaccines can wear off over time, but a simple booster shot will remind your body that this pathogen is important and cause it to continue to maintain adaptive immune cells that are ready to rapidly divide and fight the pathogen as soon as it enters your body.

Thus, as you can hopefully now see, vaccines don’t “bypass your natural immune system.” Rather, they stimulate your immune system and train it to recognize and fight pathogens before you are exposed to them. Indeed, they work exactly like “natural immunity” with only one important difference: natural immunity requires you to get the disease, whereas vaccines train your immune system without you getting the disease.

Suggested further viewing

If you want an excellent and much more detailed overview of the immune system, Hank Green’s three-part Crash Course video series is about the best 30-minute introduction you could ever hope for (note: I didn’t watch these videos until after writing the post [I was looking for good videos to recommend to my readers], but he amusingly uses more or less the same military analogy that I used).
Part 1: Innate immunity
Part 2: Adaptive immunity (B cells)
Part 3: Adaptive immunity (T cells)


Extreme weather: The effects of climate change are already here

We are in the middle of yet another atypical hurricane season. Between Hurricane Florence in America and Typhoon Mangkhut in the Philippines and China, both hemispheres have been hit almost simultaneously by unusually strong storm systems. As a result, many people are again pointing a finger at climate change. This is not an unreasonable response, but it is one that we should be cautious about, because it is very easy to fall victim to the same logical blunder that often ensnares climate change deniers: confusing weather with climate. What we really need to know is whether there is a consistent pattern of increasing extreme weather events (spoiler alert: there is), so that is what I want to look at in this post (I’ll return to the topic of individual storm systems at the end). I’m going to briefly discuss the literature on extreme weather events to see if there is evidence that climate change is increasing their frequency and/or intensity. For the sake of brevity, I’ll just focus on three major categories of extreme weather: heatwaves, precipitation (both droughts and floods), and hurricanes (aka cyclones, aka typhoons; they are all the same type of storm, the names just differ in different parts of the world, but since most of my readers are American, I will refer to them all as hurricanes throughout).

Before I begin, I want to point out that this is a very important topic for understanding the consequences of anthropogenic climate change, because the extremes are potentially the most dangerous aspect of climate change. If you think about heatwaves for a second, this should make sense. Summers are already hot, but for the most part, they are bearable. They only really become a problem when we have consecutive days of abnormally hot weather (i.e., heatwaves). Thus, having a summer where the temperature is consistently 1°C above normal is inconvenient, but not critical (strictly in terms of the heat itself). However, having a summer with heatwaves that are more frequent and more intense is a far more serious dilemma. That type of summer is far more likely to cause heatstroke and various other problems. Similarly, having an average increase in rainfall (with low variation) is not nearly as problematic as having an increase in the number of massive, flash flood-inducing downpours. So, if we are going to talk about the potential damage and cost (economically, physically, and environmentally) of climate change, it’s important to discuss the extremes.

I’d also like to briefly state that the evidence is extremely clear that we are the primary cause of the current warming. Discussing that evidence is beyond the scope of this post, but I have previously done so here and here, as well as debunking most of the common arguments to the contrary here. Please read those posts before commenting with an argument that it’s not our fault.

Heatwaves
Let’s start with the easy one. As its name suggests, global warming is resulting in a planet that is, on average, warming. Currently, the four hottest years on record are 2014, 2015, 2016, and 2017 (not in that order), and if current trends continue, 2018 will join their ranks, meaning that all five of the top five hottest years will have occurred in the past five years. Based on these increases in mean temperatures, it is hardly surprising that in many areas heatwaves have also been increasing and that increase is linked with global climate change (Klein Tank and Konnen 2003; Della-Marta et al. 2007; Tanarhte et al. 2015; Habeeb et al. 2015). Indeed, Perkins et al. (2012) found that, globally, the intensity, frequency, and duration of heatwaves is increasing. In other words, heatwaves are becoming hotter, we are experiencing more of them, and they are lasting longer (Habeeb et al. 2015). Further, other research has found that not only are heatwaves increasing, but the areas that are affected by them are expanding (Russo et al. 2014).

All of that is really bad, because despite common perceptions to the contrary, heatwaves are actually the most dangerous natural disaster in terms of human mortality. According to the CDC, in the US, heat kills more people than tornadoes, floods, lightning, or hurricanes. Indeed, the death tolls during large heatwaves can be staggering. For example, during 2003, Europe experienced a record-breaking heatwave (far beyond expectations for natural weather patterns; Schar et al. 2004; Stott et al. 2004) that resulted in over 70,000 deaths (Robine 2008), with nearly 15,000 deaths in France alone (Argaud et al. 2007)! Let that sink in for a minute. This heatwave was so bad that it killed nearly 15,000 people in a single country.

To be clear, none of this is fearmongering, speculation, or “liberal propaganda.” These are simple facts. People are dying as a direct result of climate change, and the situation will continue to get worse if we don’t take action (Meehl et al. 2004; Luber et al. 2008; Lelieveld et al. 2016).

Extreme precipitation

One of the things that people often find confusing about climate change is that the effects are different in different areas. Indeed, some areas are expected to experience increased precipitation (to the point of flooding), while others are expected to experience increased drought. Sometimes people jump on this fact and claim that climate scientists are simply making things up and claiming that everything is climate change no matter what weather we experience, but such claims are untrue. If you actually read the literature and look at the models, they clearly predicted beforehand that the patterns of change would not be uniform (i.e., some areas will have droughts while others have floods; though the net effect should be increased precipitation globally). Further, we aren’t running around arbitrarily claiming that changes in precipitation are due to climate change. Rather, we are very carefully studying the changes in wind currents, evaporation rates, etc. so that we understand the underlying mechanisms that are driving the changes. Finally, despite common claims to the contrary, our observations are actually pretty consistent with models’ predictions (Dai 2012; more sources and details here).

Describing these mechanisms in detail is far beyond the scope of this post, but two major patterns seem to be at play. First, many areas will experience more of the extremes of their typical weather patterns (Dai and Trenberth 1998). In other words, if you live in a fairly wet area, it is probably going to become wetter, and if you live in a fairly dry area, it is probably going to become drier. Again, this situation of exaggerated extremes is problematic. Dry areas already struggle with not having enough water, while wet areas already struggle with having too much of it, and climate change is expected to make both of those situations worse. The second pattern, which is related to the first, is that, in many cases, more precipitation is expected at high latitudes, while less precipitation is expected in the arid sub-tropics (Trenberth 2011).

So, what are we actually observing? In short, there is a net increase in precipitation globally (Alexander et al. 2006), and some areas are experiencing more extreme downpours (Dai and Trenberth 1998; Groisman et al. 2005; Trenberth 2011), while dry areas are experiencing more droughts (Dai and Trenberth 1998; Dai 2010; Trenberth 2011). These downpours and droughts in turn are resulting in increased floods and wildfires (respectively), increased damage to crops, increased damage to ecosystems, increased damage to property, increased loss of human life, etc. (Rosenzweig et al. 2001; Milly et al. 2002; Flannigan et al. 2009; Carnicer et al. 2010; Schlenker and Lobell). Further, I want to make it clear again that scientists aren’t running around arbitrarily blaming climate change for these events. We have carefully studied the underlying mechanisms of these precipitation extremes and found that the current trends are unlikely to be natural and are linked to human-induced increases in temperature (Held and Soden 2006; Allan and Soden. 2008; Min et al. 2011).

Hurricanes
These storm systems are probably the ones that get the most attention in the press and general public, but they are, unfortunately, some of the hardest to study. This is because they are infrequent (resulting in small sample sizes per year) and because record keeping for them has been surprisingly inconsistent, making it difficult to look at long-term patterns. Having said that, we have sufficient data from the past few decades to draw some conclusions. Before I get to those though, I want to talk about scientists’ expectations, because most models don’t actually predict an increase in the total number of tropical storms (in some cases they actually predict a slight decrease). Rather, the prediction is that the storms will increase in intensity, and really intense storms will become more common. In other words, the total number of hurricanes per year should stay the same or go down slightly, but we expect more of those hurricanes to be very large, powerful storms (e.g., category 4 and 5 hurricanes). As with everything else that I have talked about thus far, that is problematic because the extremes are where most of the damage comes from. Having the same total number of hurricanes but more category 4s is worse than having a greater total number of hurricanes with mostly category 1s and 2s.

So, with all of this in mind, let’s once again look at what we have actually found. Walsh et al. (2016) published a fairly recent review of this topic, so I recommend reading it for more details and sources, but in short, what we’ve found is that there is a general increase in both storm intensity and the proportion of storms that are really powerful (e.g., 4s and 5s), but the total number of hurricanes has not increased (Emanuel 2005; Elsner et al. 2008; Holland and Bruyere 2014). Also, the trends are more pronounced in some areas than others, with the North Atlantic basin (i.e., the one that affects the US) showing the strongest patterns. Another interesting and alarming result is that hurricanes are moving further away from the tropics and towards the poles (Kossin et al. 2014). In other words, as the planet warms, the tropics are expanding north and south of the equator, and, as a result, powerful hurricanes can strike further north and further south than they could previously. Thus, cities that have never had serious hurricane problems may now be faced with strong storm systems.

Update 2019: There is increasing evidence that hurricanes are also “stalling” more frequently. In other words, they are staying in one area for longer, thus resulting in increased damage (Kossin 2018; Hall and Kossin 2019).

As a final note, the damage caused by these systems is also increasing not only because the storms are becoming more intense, but also because sea level rise is resulting in increased storm surge and flooding.

The influence of climate change

Before I conclude this post, I want to return briefly to the topic of blaming particular storm systems on climate change, because that situation is actually more complex than most people give it credit for. In short, we can never say with 100% certainty that climate change caused a particular extreme weather event, but, based on all of the data that I have discussed, we can confidently say that climate change is making these events more likely, and for any particular event, it is likely that climate change played a role.

Let me use smoking as an example. If a regular smoker is diagnosed with cancer, you can never say with 100% certainty that smoking caused the cancer. It is always possible that they would have developed cancer even if they never smoked. However, because we know that there is an overarching causal relationship between smoking and cancer, we can say that smoking very likely contributed to their cancer and that, in general, smoking rates contribute to cancer rates. The same thing is true with climate change and storms. Because of the known causal relationships between temperatures and extreme weather events, for many extreme events, we can state that climate change likely played a role in them and that, in general, increased climate change is resulting in increased extremes.

Additionally, in the case of climate change, we can often go even further. By examining natural trends, our influence on the climate, and the causes of particular storm systems, we can often calculate the probability that a given system would arise absent our influence (and, conversely, how likely it is that our actions played a role). Indeed, several of the studies that I have cited throughout this post have done that. For example, as I mentioned earlier, when scientists examined the 2003 heatwave in Europe, they found that it was unlikely based on natural patterns (Schar et al. 2004) and that our actions have doubled the risk of such events (Stott et al. 2004). So, while we should be cautious about blaming everything on climate change, there is often very good evidence that particular events were probably influenced by our actions.
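That “doubled the risk” result can be put in terms of the fraction of attributable risk (FAR) that event-attribution studies like Stott et al. (2004) use: FAR = 1 − P0/P1, where P0 is the probability of the event under natural conditions and P1 its probability with human influence included. A minimal sketch (the doubling figure comes from the text above; the probability values and function name here are purely illustrative):

```python
def fraction_attributable_risk(p_natural: float, p_with_humans: float) -> float:
    """Fraction of an event's risk attributable to human influence.

    FAR = 1 - P0/P1, where P0 is the event's probability under purely
    natural conditions and P1 its probability with human influence.
    """
    return 1.0 - p_natural / p_with_humans

# If human influence at least doubled the risk of a 2003-style heatwave
# (P1 = 2 * P0), then at least half of the risk is attributable to us.
p0 = 0.01        # hypothetical natural probability, for illustration only
p1 = 2 * p0      # "doubled the risk"
print(fraction_attributable_risk(p0, p1))  # -> 0.5
```

Note that the FAR depends only on the ratio of the two probabilities, which is why a statement like “doubled the risk” is enough to say that roughly half the risk of such an event is attributable to human influence, without knowing the absolute probabilities.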


In short, there is very clear evidence that extreme weather events are increasing, and that increase is linked to climate change (which we are causing). Heat waves, floods, and droughts are all on the rise, and they bring with them heavy economic, environmental, and health burdens, with thousands of people dying as a result of them. Further, the intensity of these events is increasing as well as their frequency. Similarly, for hurricanes, storms are becoming more intense, and the strongest, most dangerous categories are becoming more common. This is a very real and dangerous consequence of our actions.

Again, this is not fearmongering or “liberal propaganda,” it’s not something that will only happen in the distant future, and it certainly isn’t a Chinese hoax. This is real, and it is happening right now. People are already dying as a direct result of what we are doing to the atmosphere, and those death tolls will only become worse if we don’t immediately take action to stop the climate from changing any further. I rarely include calls to action in my posts (other than encouraging people to fact check and think critically), but this topic is far too important for me to end the post without one. We need to start taking climate change seriously and stop relying on fossil fuels, even if it costs some jobs, increases taxes, etc. The cost of not taking action will be far, far higher than the cost of taking action, both economically and in terms of human lives. Thousands have already died because of climate change, and thousands more, probably millions more, will die if we don’t change our actions. We have the technology right now to make a huge difference; we just need to invest in it. That means that you need to take personal responsibility in your daily choices and, perhaps most importantly of all, you need to contact your governmental representatives and tell them that this needs to be a priority. Then, you need to vote accordingly.

Note: In all likelihood, switching energy sources would actually result in a net increase in jobs and a net boost to the economy, but even if that weren’t true, the jobs of a few coal miners and the bank accounts of rich oil CEOs aren’t worth the lives of the thousands of people who will die because of climate change.

Literature cited

  • Alexander et al. 2006. Global observed changes in daily climate extremes of temperature and precipitation. Atmospheres 111:D05109.
  • Allan and Soden. 2008. Atmospheric warming and the amplification of precipitation extremes. Science 321:1481–1484.
  • Argaud et al. 2007. Short- and Long-term Outcomes of Heatstroke Following the 2003 Heat Wave in Lyon, France. Archives of Internal Medicine 167:2177–2183.
  • Carnicer et al. 2010. Widespread crown condition decline, food web disruption, and amplified tree mortality with increased climate change-type drought. PNAS
  • Dai 2010. Drought under global warming: a review. Climate Change 2:45–65.
  • Dai 2012. Increasing drought under global warming in observations and models. Nature Climate Change 3:52–58.
  • Dai and Trenberth 1998. Global variations in droughts and wet spells: 1900–1995. Geophysical Research Letters 25:3367–3370.
  • Della-Marta et al. 2007. Doubled length of Western European summer heat waves since 1880. Atmospheres 112:D15103.
  • Elsner et al. 2008. The increasing intensity of the strongest tropical cyclones. Nature 455:92–95.
  • Emanuel 2005. Increasing destructiveness of tropical cyclones over the past 30 years. Nature 436:686–688.
  • Flannigan et al. 2009. Implications of changing climate for global wildland fire. International Journal of Wildland Fire 18:483–507.
  • Groisman et al. 2005. Trends in intense precipitation in the climate record. Journal of Climate 18:1326–1350.
  • Habeeb et al. 2015. Rising heat wave trends in large US cities. Natural Hazards 46:1651–1655.
  • Hall and Kossin 2019. Hurricane stalling along the North American coast and implications for rainfall. Climate and Atmospheric Science 2
  • Held and Soden 2006. Robust response of the hydrological cycle to global warming. Journal of Climate 19:5686–5699.
  • Holland and Bruyere 2014. Recent intense hurricane response to global climate change. Climate Dynamics 42:617–627.
  • Kossin et al. 2014. The poleward migration of the location of tropical cyclone maximum intensity. Nature 509:349–352.
  • Kossin 2018. A global slowdown of tropical-cyclone translation speed. Nature 558:104–108.
  • Klein Tank and Konnen 2003. Trends in indices of daily temperature and precipitation extremes in Europe, 1946–99. Journal of Climate 16:3665–3680.
  • Lelieveld et al. 2016. Strongly increasing heat extremes in the Middle East and North Africa (MENA) in the 21st century. Climate Change 137:245–260.
  • Luber et al. 2008. Climate change and extreme heat events. American Journal of Preventive Medicine 35:429–435.
  • Meehl et al. 2004. More intense, more frequent, and longer lasting heat waves in the 21st century. Science 305:994–997.
  • Milly et al. 2002. Increasing risk of great floods in climate change. Nature 415:514–517.
  • Min et al. 2011. Human contribution to more-intense precipitation extremes. Nature 470:378–381.
  • Perkins et al. 2012. Increasing frequency, intensity and duration of observed global heatwaves and warm spells. Geophysical Research Letters 39:L20714.
  • Robine et al. 2008. Death toll exceeded 70,000 in Europe during the summer of 2003. Epidemiology 331:171–181.
  • Rosenzweig et al. 2001. Climate change and extreme weather events – Implications for food production, plant diseases, and pests. Global Change and Human Health 2:90–104.
  • Russo et al. 2014. Magnitude of extreme heat waves in present climate and their projection in a warming world. Atmospheres 119:12500–12512.
  • Schar et al. 2004. The role of increasing temperature variability in European summer heatwaves. Nature 427:332–336.
  • Schlenker and Lobell 2010. Robust negative impacts of climate change on African agriculture. Environmental Research Letters 5:1–8.
  • Stott et al. 2004. Human contribution to the European heatwave of 2003. Nature 432:610–614.
  • Tanarhte et al. 2015. Heat wave characteristics in the eastern Mediterranean and middle East using extreme value theory. Climate Research 63:99–113.
  • Trenberth 2011. Changes in precipitation with climate change. Climate Research 47: 123–138.
  • Walsh et al. 2015. Tropical cyclones and climate change. Climate Change 7:65–89.
Posted in Global Warming

Is it likely that alternative medicine works? The importance of prior probability

Prior probability is basically just the plausibility of a result, given everything else we know about the universe. In other words, when evaluating a hypothesis or a study, you should base your confidence in the result not only on the characteristics of the study in question, but also on how likely it is that a given result is true given our prior knowledge. To put that another way, science always builds on previous research, and the results of previous studies can often give us a good idea of whether or not a given result is plausible, before we actually conduct the study. People often act as if ideas and hypotheses exist in a vacuum and should be considered in isolation from everything else, but that isn’t how science works. It always builds on existing knowledge, and new ideas and results have to be evaluated in light of what we already know. Thus, it is often the case that we know enough about the system in which a hypothesis exists to have a really good idea of whether or not that hypothesis is correct before directly testing it, and, in some cases, our knowledge of a system is comprehensive enough that it is not even necessary to directly test the hypothesis.
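To make this concrete, here is a toy Bayesian sketch. All of the probabilities are my own illustrative numbers (not from any study), but they show the core idea: the prior probability of a hypothesis determines how much a single positive study should shift our confidence.

```python
# Toy illustration of prior probability via Bayes' theorem.
# All numbers are hypothetical, chosen only to show the mechanism.

def posterior(prior, sensitivity=0.8, false_positive=0.05):
    """P(hypothesis true | one positive study).

    sensitivity:    P(positive study | hypothesis is true)
    false_positive: P(positive study | hypothesis is false)
    """
    evidence = sensitivity * prior + false_positive * (1 - prior)
    return sensitivity * prior / evidence

# A plausible hypothesis (prior 0.5) vs. an implausible one (prior 0.001)
print(round(posterior(0.5), 3))    # 0.941: one good study is strong grounds
print(round(posterior(0.001), 3))  # 0.016: still almost certainly false
```

The same positive result that nearly settles the question for a plausible hypothesis barely moves an implausible one, which is exactly why the two need very different amounts of evidence.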

To give a silly example, imagine someone tells me that pouring Mountain Dew into my car’s engine will improve its performance. Even without actually testing that hypothesis, I can reject it with a very high degree of confidence. I know enough about how cars work to know that it is almost certainly wrong. In other words, this idea conflicts with what we already know about how cars work, therefore it has a very low prior probability, and there’s really no good reason to bother testing it. Further, ignoring this suggestion that Mountain Dew is good for cars isn’t being “closed-minded,” “ideological,” “dismissive,” etc. There’s simply no good reason to think that it is right, and plenty of good reasons to think that it is wrong. Science inherently has to use what we already know as a starting point, and throwing everything that we know out the window to chase absurd suggestions is a substantial waste of time and money.

At this point, I can hear people screaming at their computers that science has to be based on evidence and experimental results, and you’re absolutely correct, but nothing that I have said disagrees with that. Prior probabilities have to be based on previous research, or else they are going to be very inaccurate probabilities. I’m not suggesting that we arbitrarily decide what is and is not true. Rather, I am simply pointing out that, thanks to decades of scientific research, we know enough about how the universe works to know that some ideas are patently absurd without directly testing those ideas. Further, keep in mind that science itself is an inherently probabilistic endeavor. Even after we directly test a hypothesis, we can’t say with 100% confidence whether the hypothesis is true or false. Rather, we can simply say that it is likely true or false based on the existing evidence.

Let me give another trivial example to illustrate this further. As a herpetologist, people often send me blurry photos of reptiles and amphibians that they want me to identify. Now, imagine someone sends me some blurry pictures that were taken in northern Michigan, and they tell me that they think that these photos show a population of broad-headed skinks (a lizard species). Without even looking at those pictures, I could say that they almost certainly are not broad-heads, and I could say that, because northern Michigan is well outside the known range of broad-heads. In other words, everything that we know about broad-heads says that they aren’t in Michigan. Therefore, the prior probability that these photos are of broad-heads is essentially zero. We’d have to be very wrong about our understanding of that species for those photos to be broad-heads. Nevertheless, it is, of course, always technically possible that our knowledge is in fact wrong, but we’d need far more evidence than some blurry photos before we could reach that conclusion. I’d want to see the lizards themselves and, ideally, test their DNA.

With all of that in mind, let’s talk about “alternative medicine” (aka complementary and alternative medicine, or CAM). There are countless alternative treatments out there, and while many of them haven’t been studied (or only have a few studies), others have been extremely well-studied and have hundreds of publications (e.g., homeopathy and acupuncture). Looking at the evidence for these well-studied treatments can, however, be confusing, because while there are tons of studies saying they don’t work, there are also some studies saying that they do, sometimes even including systematic reviews. There are really good ways to evaluate the studies themselves (details here, here, here, here, and here), but prior probability is also quite useful. Further, evaluating the studies themselves obviously doesn’t help for the treatments that lack studies, but prior probability is still helpful.

Let’s take homeopathy as an example. As I’ve written about before (here and here) homeopathy relies on some pretty strange assumptions. First, it is based on the concept that “like cures like.” In other words, it treats a condition with something else that should cause the symptoms of that condition. For example, according to homeopathy, since coffee can cause people to have difficulty sleeping, coffee should also be useful for treating people with insomnia, because coffee causes the same symptoms as insomnia (I’m not making this up; homeopaths literally make supposed sleep aids from coffee beans). If that sounds crazy, good. It is crazy. We know a lot about biochemistry and how the human body works, and we know that it doesn’t work that way.

Second, homeopathic treatments are made by doing numerous serial dilutions, with each step making the active ingredient increasingly dilute. According to homeopathy, this works because diluting something actually makes it stronger. Again, that’s not how things work. One of, if not the, most fundamental concepts in toxicology and pharmacology is that the dose makes the poison. Everything is safe at a low enough dose (i.e., if it is diluted enough) and everything is toxic at a high enough dose. We know this. So, this concept that diluting something makes it stronger flies in the face of basic chemistry. As I’ve argued before, if you think that diluting something makes it stronger, try diluting some beer and let me know if it takes more or less of it for you to become drunk.

Third, because homeopathic solutions are often so dilute that they literally no longer contain a single molecule of the active ingredient (i.e., they are nothing but water), homeopathy also claims that water has memory and somehow retains the properties of the active ingredient even though the active ingredient is no longer there. Again, that’s not how chemistry works. Water doesn’t retain the properties of things it previously came in contact with.

So, when you add it all up, homeopathy is extremely implausible because it requires three different assumptions, each of which disregards basic facts about the universe. The odds that we are so fundamentally wrong on all three of those topics are very low. Therefore, homeopathy has an extremely, extremely low prior plausibility. A huge chunk of modern science would have to be wrong for it to be right. This means that we have really good grounds for dismissing it without further investigation. It also means that we would need some truly extraordinary evidence before we could conclude that it actually works. A few small studies simply won’t cut it. We would need many, massive studies with exquisite experimental designs and a very consistent pattern of positive results among them before we could say that homeopathy works.

Homeopathy has actually been well studied, but when you look at the results of those studies, you don’t find anything even approaching extraordinary evidence. There are lots of studies that found negative results (i.e., that it doesn’t work) and the studies that found positive results usually had small sample sizes and only found moderate effects. That is simply not sufficient evidence for a topic with such a low prior probability. Remember, false positives do occur even when a study was conducted correctly. That’s why we need to look for consistent patterns of evidence and consider the prior probability of a given outcome. The less plausible the conclusion, the more consistent and powerful the evidence needs to be.
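One way to see why a few positive studies can’t rescue a very low prior probability is a toy calculation in odds form. All of the numbers here are hypothetical illustrations (including the study error rates), but the pattern they show is general:

```python
# Toy sketch: updating a prior over a mixed literature using likelihood
# ratios in odds form. All numbers are hypothetical.

def update(prior, positives, negatives,
           sensitivity=0.8, false_positive=0.05):
    """Posterior probability after independent positive/negative studies."""
    lr_pos = sensitivity / false_positive              # one positive study
    lr_neg = (1 - sensitivity) / (1 - false_positive)  # one negative study
    odds = prior / (1 - prior)
    odds *= lr_pos ** positives * lr_neg ** negatives
    return odds / (1 + odds)

# An implausible treatment (prior 1 in 10,000) with 3 positive studies
# buried in 10 negative ones: the posterior stays essentially zero.
print(update(1e-4, 3, 10))  # ~7e-08
```

In other words, when the prior probability is very low and the bulk of the literature is negative, a handful of positive results (which are expected from false positives alone) leaves the treatment almost certainly ineffective.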

Moving beyond homeopathy, we find the same type of implausible conclusions throughout alternative medicine. Acupuncture is based on mythical meridians and the pseudoscientific concept that there is good and bad energy. Similarly, “treatments” like reiki and healing crystals rely on unsubstantiated nonsense about energy and frequencies. Detoxes and cleanses ignore how your liver works, and the acid alkaline diet ignores how homeostasis works. I can keep going here. Ear candling, magnet therapy, earthing, reflexology, cupping, using chiropractic to treat disease, etc. All of these ignore basic facts about how the human body functions, while making absurd assumptions for which there is no evidence. Thus, they all have a very, very low prior probability, and we would need some extraordinary evidence before we could conclude that they work, and in the absence of that evidence, we can confidently move them into the rubbish-bin of failed ideas, because they conflict with everything that we already know about physiology, physics, chemistry, medicine, etc.

To be fair, some alternative treatments do have a higher prior plausibility than the treatments I’ve talked about thus far. Herbs, for example, are kind of a wash when it comes to prior probability. We know that many plants produce chemicals that have effects on our bodies, and we know that sometimes those effects are beneficial. So, the basic concept behind herbs makes sense, and there is a good prior probability that some of them work. However, that does not mean that there is a high prior probability for any one particular herb. Basically, what this means is that we need studies before we can conclude that an herb works, but the studies don’t need to be anything extraordinary. A few reasonably large, well-constructed studies are sufficient (assuming that there are consistent results among those studies), and in the absence of those studies, the rational position is to say that we don’t know if a given treatment works, rather than saying that a given treatment probably doesn’t work. Thus, this is a very different situation from something like homeopathy where we would have to be fundamentally wrong about the universe in order for it to work.

Now, at this point, you may be wondering why I am picking on alternative medicine instead of talking about conventional, or so-called “western” medicine (aka medicine). The answer is simply that our system for developing new drugs and treatments is specifically designed to maximize prior probability before we ever get to human trials. First, drugs are designed based on a detailed knowledge of biochemistry, which is far better than alternative medicine’s strategy of making a treatment based on anecdotes, tradition, and, often, superstition. Second, before being tested on humans, new drugs go through in vitro and/or animal trials. Drugs that fail those tests are deemed to have a low prior probability of working and are discarded, whereas drugs that work safely and effectively in those trials have a higher prior probability of being safe and effective in humans. To be clear, we still need good studies before we can conclude that they do actually work, but, as I tried to explain earlier, we don’t need the type of monumental, paradigm-shattering evidence that we need for many alternative treatments.

Finally, I can foresee those who believe in alternative medicines responding to this with the comment, “but [insert anecdotes].” So let me forestall that. Anecdotes simply are not good evidence of causation and, as such, don’t help you much with prior probability. As I’ve explained at length before, saying, “I took X, then got better, therefore X made me better” is a logical fallacy known as post hoc ergo propter hoc. It is an invalid line of reasoning. You could have improved because of regression to the mean, something else you took, chance, another type of placebo effect, etc. Further, you can find anecdotes for literally any treatment. There are, for example, many people who use anecdotes to argue that you should be drinking bleach and/or turpentine, but, somehow, I doubt that you place much weight on those anecdotes.

Nevertheless, you might try to argue that anecdotes aren’t good evidence of causation, but they do shift the prior probability. In the case of something like herbs, where no fundamental scientific concepts are being violated, I will grant you that they slightly improve the prior probability, but only to the extent that, if a scientist is going to test some herbs, it makes sense to start with the ones with lots of anecdotes behind them. I would not agree that anecdotes shift the prior probability enough to be useful for interpreting the results of the subsequent studies. Further, for things like homeopathy, acupuncture, etc. the prior probability is already so insanely low that anecdotes don’t make any difference.


In short, prior probability is simply the concept of using previous scientific knowledge to assess how likely it is that a given hypothesis or result is actually true. From that, we can determine the strength of evidence that is necessary before we reach the conclusion that a given result is correct. If a result is consistent with what we know about the universe and makes sense based on previous research, then it has a high prior probability, and only needs moderately strong evidence before we can conclude that it is likely correct. In contrast, a result that flies in the face of basic scientific concepts would have a very low prior probability and would require extraordinary evidence before we could accept it as likely being true. Much of alternative medicine falls into that latter category. It often ignores basic facts about science, makes absurd assumptions, and invokes fictitious concepts about energy, frequencies, etc. Indeed, many alternative treatments fly in the face of what we already know about the universe and have such a low prior probability that we can be reasonably confident that they don’t work without actually testing them.

Additionally, although I focused on alternative medicine for this post, the concept of prior probability is widely applicable. Countless topics like astrology, psychics, tarot cards, etc. have a very low prior probability based on everything we know about the universe. Therefore, they can be confidently dismissed as nonsense until such time as extraordinary evidence arises in their favor. Again, to be clear, if that evidence arises, you have to consider it, but you are not required to take the positions seriously or treat them as plausible until extraordinary evidence in their favor is found. To put that another way, we know enough about the universe to know that something like astrology is almost certainly wrong without actually testing astrology.

Posted in Vaccines/Alternative Medicine

Courts don’t determine scientific facts

Most people have probably seen the recent news that Monsanto has been ordered to pay $289 million following the ruling by a California jury that Monsanto’s glyphosate (a.k.a. Roundup) is dangerous and likely contributed to Dewayne Johnson’s cancer. I could write many lengthy posts about why that ruling is wrong. I could talk about the numerous scientific studies that failed to find evidence that glyphosate causes cancer (e.g., this large, long-term cohort study with over 50,000 participants that wasn’t funded by Monsanto and failed to find an association between glyphosate use and cancer among farmers [Andreotti et al. 2017]). I could talk about the well-established fact that the toxicity of glyphosate is quite low. I could talk about the fact that multiple well-respected scientific bodies have examined the evidence and concluded that it does not suggest that glyphosate causes cancer. I could also talk about how the one dissenting scientific report (i.e., WHO’s IARC report) cherry-picked their evidence and reached a conclusion that has been widely criticized by the scientific community. Plenty of other pages have, however, already done all of those things, so I won’t spend more time on them here. Rather, I want to discuss why trials like this one are inherently problematic. Citing court rulings is an extremely common tactic among science deniers (anti-vaccers do it all the time), but it is not a logically valid tactic because courts don’t determine what is and is not a scientific fact.

The first major problem is simply that juries don’t consist of experts in the relevant scientific field. As I’ve talked about before, science is complicated. It takes years of careful training, study, and hands-on experience to learn everything that you need to know to be able to properly evaluate scientific evidence. The notion that an untrained jury is going to master that over the course of a trial is absurd. Further, it is especially ridiculous when you consider that courtroom conditions inherently involve two opposing sides arguing as if they have equivalent merit. To put that another way, it is extremely easy to cherry-pick evidence to make it look like the science isn’t settled on an issue or, worse yet, like the scientific consensus is the opposite of what it actually is, and in a courtroom, a lawyer will do precisely that. They are obligated to argue in favor of a given position, regardless of whether that position is actually supported by the evidence.

Let me try an example. Imagine that there is some issue with your heart that you want diagnosed, and someone has suggested that it might be caused by a particular aspect of your diet (i.e., you eat X, and they think X is bad for your heart). So, you take two approaches to figuring out whether your diet is the cause. In the first approach, you get multiple respected scientific organizations to examine the scientific evidence that X can lead to heart problems. These bodies of highly trained and experienced experts spend months or even years systematically examining the studies on this topic. They look at all the evidence that they can get and, ultimately, they conclude that there is no compelling evidence that X contributes to heart problems.

For the second approach, you construct a jury using the same criteria as in a court, then you get two lawyers to debate the issue as in a courtroom. One of them tries to convince the jury that X does cause heart problems, and the other tries to convince the jury that X does not cause heart problems. Rather than systematically examining all of the evidence, both lawyers cherry-pick evidence that supports their position, attempt to play on the jury’s emotions, bring in cherry-picked “expert” witnesses, etc. At the end of the trial, the jury concludes that X does cause heart problems (which is the opposite of what the scientific committees found).

Which conclusion seems more reliable to you? The one that was arrived at by experts spending months carefully and systematically examining all of the available evidence, or the one that was arrived at by non-experts basing a decision on a comparison of two extremely biased representations of the evidence? I think that the answer to that is pretty obvious.

To be clear here, I’m not saying that scientists are infallible or that the conclusions of scientific organizations are definitive statements of reality. That would be an appeal to authority fallacy. Rather, my point is that the courtroom system is fundamentally flawed and unreliable for determining scientific facts. The fact that a jury decided that X causes Y is completely and 100% irrelevant in any scientific debate. It has no bearing on reality, and you would be crazy to trust it instead of relying on numerous high-quality studies and reviews and meta-analyses of those studies that were systematically assembled by teams of experts. Whether or not something is a scientific fact has to be determined by actual research, and a jury’s opinion about that research is irrelevant.

Posted in Nature of Science, Rules of Logic

Scientism: Is it a straw man or a legitimate critique?

If you browse through the comments on this blog/Facebook page, or the pages of just about any other pro-science page, you will quickly find accusations of “scientism.” Indeed, among those who like to disagree with scientific results, this seems to have become a get-out-of-jail-free response that they use to dismiss any evidence or arguments that conflict with their preconceptions. People seem to think that accusing their opponent of scientism is a valid substitute for presenting actual evidence to back up their position. Further, at least in instances that I have personally observed, this accusation is often a straw man fallacy that either misrepresents scientism or misrepresents the science-advocates’ claims. Nevertheless, it is very easy to get sloppy with how we phrase things and inadvertently make a statement that has the appearance of scientism, even if that was not the intent. Therefore, I want to briefly talk about what is and is not scientism.

Scientism is a philosophical position that emphasizes science above all else. Unfortunately, like many philosophical views, it is a bit amorphous: there is no one universally accepted definition, and it is probably more of a spectrum than a single discrete view. Nevertheless, there are a few common themes that you generally see in definitions of scientism. First, scientism often overstates our confidence in the results of science. Second, it often tries to apply science to topics that are outside of the scope of science. Third, it often states that science is the only source of knowledge. I’m going to talk about each of these and give some examples.

Let’s start by talking about our confidence in scientific results. I frequently get angry comments on my blog/Facebook page about how other skeptics and I are clueless idiots who worship scientists like gods and think that science is infallible. If we actually worshiped scientists or thought that science gave absolute and infallible answers, then we would, in fact, be guilty of scientism. However, I have yet to see anyone actually do either of those things, and this argument is usually a straw man. Science doesn’t give absolute answers. Rather, it is an inherently probabilistic process that simply tells us what is most likely true given the current evidence. That probability can change, however, when new evidence arises. In other words, all that we are saying is that we have to accept the results science gives us until such time as scientific evidence arises showing that those results are wrong.

The problem here is that people often jump from, “science doesn’t give definitive answers” to “science is unreliable, and I don’t have to accept its answers.” That’s illogical (in fact, it is the very definition of science denial). The fact that science doesn’t give 100% proofs doesn’t mean that we can’t be very certain of the results that it gives, and it certainly doesn’t mean that you can reject it whenever you want. When dozens, hundreds, or even thousands of studies have all converged on a result, then it is very unlikely that the result is false, and it would be foolish to reject that result. That’s not a statement of scientism; rather, it is simply a rational, evidence-based view of reality.

To put it simply, saying, “numerous studies have found that X is true, therefore X is absolutely true and there is no chance that it is wrong” would be scientism. However, saying, “numerous studies have found that X is true, therefore it is most likely true and we should act as if it is true until we have evidence to the contrary” is not scientism (at least not by any reasonable definition I’ve read).

Note: please read this post before bringing up the paper arguing that most scientific studies are wrong.

This comment was left on my page a while ago, and it is very typical of the type of straw man fallacy I am describing. Challenging the accepted wisdom because you have new studies that show that it might be wrong is fine. Challenging it because you don’t like it or because it conflicts with your preconceptions is not. (note: I forget the exact topic that this comment was left on, but I recall it being something for which a very strong consensus of studies existed)

To state that another way, any scientific result can be overturned, and scientists should consider new evidence as it arises, but, importantly, there is no reason to doubt a well-established scientific result until solid new evidence arises. In other words, many people want scientists to question well-established results based on anecdotes, speculation, and other forms of shoddy evidence, and when scientists refuse to do that, they accuse them of scientism (see the comment to the right that someone left on my Facebook page, for example). Being open-minded means being willing to accept new evidence, not being willing to accept something despite a lack of evidence (that’s being gullible). Further, it is worth clarifying that asking questions is good, even encouraged, but you have to be willing to accept the answers to your questions. It is fine to ask a question like, “is this treatment safe?” but if the answer is that there are multiple high-quality studies saying that it is and no compelling evidence that it isn’t, then refusing to accept the results of those studies is, by definition, science denial.

Moving on, scientism can occur when you try to use science to argue about a topic that is outside of the realm of science. Science, by its very definition, is limited to the physical universe. If we can’t observe and quantify it (or at least observe and quantify its results), then we can’t study it using science. Thus, philosophy and theology are outside of the scope of science, and science cannot answer questions like, “is there a god?” or “does life have meaning?” or “is this morally right?” To put that another way, science can show us how to clone a human being, but it can’t show us whether or not it is morally right to clone a human being.

In my observations, at least, people get into trouble with this more often with religion than with philosophy. Anytime you hear someone make a statement like, “science has disproved the existence of god,” you are hearing scientism. The concept of god is inherently one of a metaphysical being who exists outside of the laws of science. Therefore, science cannot address his or her existence.

The flip-side of that is that religious people will often use accusations of scientism to attack scientific results that conflict with claims that their religion makes about the physical universe. Creationism is the most obvious example of this. Science can’t tell us if god exists, but it can tell us (with an extremely high degree of certainty) that life on earth has evolved for billions of years, Noah’s flood didn’t happen, etc., and none of that is scientism. You see, anytime that religion makes a claim about the physical universe, it has entered into the realm of science, and we can use physical evidence to evaluate the claim.

This leads to the final category I want to talk about: claims that science is the only source of knowledge. This is tricky to talk about, because the concept of knowledge has been debated by philosophers for millennia. So rather than getting bogged down in the definition of knowledge, I’m just going to explain why I don’t agree with the notion that science is the only source of knowledge, as well as discussing how confusion arises with accusations of scientism (note: I am assuming that I am real and in a real physical universe, but if you want to get philosophical, I agree that I cannot “know” that in the strongest sense of the word; again, I’m trying to avoid getting derailed by debates like that).

There are plenty of things that we “know” without science. First, relating back to the previous point, I would argue that for many philosophical/moral topics, we can arrive at pretty good conclusions by logic and reasoning. So, I don’t agree that philosophy is worthless; rather, it simply answers different questions than science does.

Even in the physical world, we can know plenty of things without science. I know, for example, that I am sitting at a computer right now. Did I acquire that knowledge by doing a systematic experiment and running some statistics? Obviously not, and I don’t think anyone would argue that we need to do that to know that I am sitting at a computer. Indeed, our lives are full of this type of knowledge that is acquired by simple observation, rather than systematic research. The problem is that at times our observations are very unreliable and conflict with scientific results.

Let me give a trivial example. On countless occasions, I have had people in the US insist that rattlesnakes hybridize with non-venomous snakes like garter snakes and rat snakes. They claim to know this because they’ve seen hybrids. As a herpetologist, however, I know that the notion of those species hybridizing is patently absurd. Those snakes are in totally different families. Their reproductive structures are different, their genetics are different, their mode of reproduction is different, etc. I hesitate to use the word “impossible” after the above discussion of probabilities, but something like this is so unlikely that for all intents and purposes, it might as well be called “impossible.” We would have to be fundamentally wrong about so many things for those snakes to be able to hybridize that it is extraordinarily unlikely that it is possible. Thus, I can state with a very, very high degree of confidence that the aforementioned people’s knowledge on this topic is wrong and the scientific results are correct. Again, that’s not scientism, that’s just accepting evidence, but you will notice that the evidence disagrees with people’s casual observations. In other words, casual observation is a way of knowing, and often a useful way of knowing, but it does not confer the same degree of confidence as systematic research (i.e., science).

Perhaps the most common way that this plays out is with anecdotes about medicine (or the various “treatments” that masquerade as medicine). People love anecdotes, and they frequently claim to know things based on anecdotes. The problem is that, as I have previously explained, anecdotes cannot establish causation. Forget science for a minute and let’s just talk about logic. Saying, “X happened before Y, therefore X caused Y” is a logical fallacy known as post hoc ergo propter hoc. It is an invalid line of reasoning. Nevertheless, people frequently insist that a given treatment works or a given medicine is dangerous because they’ve “seen it themselves.” This is where false accusations of scientism tend to start flying.

What I usually see happen is the following.

  • Person 1: Here are multiple studies showing that X does not cause Y.
  • Person 2: Those studies must be wrong because I know that X causes Y. I’ve seen it happen myself, so I know that it is true.
  • Person 1: Personal anecdotes aren’t good evidence of causation. There are lots of things that could make it appear that X causes Y, even if it doesn’t. You need carefully controlled studies for your position to be valid, and in lieu of those studies, ignoring the evidence against a causal relationship is science denial.
  • Person 2: That’s scientism! Science isn’t infallible, and science isn’t the only form of knowledge! How dare you say that my personal experiences are less valid than your science? Questioning the accepted wisdom isn’t science denial. Scientists are supposed to be open-minded.

Do you see what is going on there? Person 2 is committing a straw man fallacy and is using the accusation of scientism as an excuse for science denial. In other words, they don’t want to admit to denying scientific evidence, so instead they try to shift the blame by saying that they aren’t denying the evidence, the other person is just exhibiting scientism.

That line of reasoning is specious. The fact that science isn’t infallible doesn’t mean you can ignore its results anytime that you want, and the fact that science isn’t the only source of knowledge doesn’t automatically mean that other sources of knowledge are equal in all contexts. When it comes to establishing causation in the physical universe, science is the best and most reliable method, and you can’t reject it anytime that you want. Further, I’ll reiterate my previous point that asking questions in the absence of evidence is fine, but refusing to accept the results of numerous studies is not.

This all comes back to a concept that I discuss frequently on this blog: the burden of proof. The person making the claim bears the burden to back up that claim, and extraordinary claims require extraordinary evidence. To put that another way, science is not infallible, but it is really good, and if you want to say that numerous studies are wrong, then you are going to need some extraordinary evidence, and logically invalid personal anecdotes won’t cut it.

Indeed, all of this can be summed up with the simple statement that scientific topics require scientific evidence. That’s not scientism, that’s just how science works.

In short, scientism is a philosophical position that over-values science and argues that it is the only source of knowledge and/or that it applies to all topics. Although people do sometimes make arguments along those lines, accusations of scientism are often straw men that are used simply to deflect from the weakness of one’s own position. In other words, rather than admit that their view is incompatible with scientific evidence, many people simply accuse their opponent of scientism in an invalid attempt to delegitimize their opponent’s position. Science isn’t infallible, but you must have good evidence before you challenge the results it produces.

Note 1: Although observation is an important part of science, it is not in and of itself science. Science requires a systematic collection of observations.

Note 2: To be clear, I’m not suggesting that no skeptics are ever guilty of scientism. It does happen. My point is simply that in many cases, accusations of scientism are straw men.



Replacing science-based cancer treatments with “alternative treatments” increases your risk of dying from cancer (new study)

Cancer is a truly terrible disease, and although our ability to treat and even cure many types of cancer has greatly improved, our current methods for dealing with it are admittedly imperfect and often have serious side-effects. It is, therefore, hardly surprising that many people who are diagnosed with cancer would forgo science-based treatments for “alternative” or “complementary” treatments (aka CAM). After all, who wouldn’t be tempted by the promise of a miracle cure? The problem is that these “natural” treatments are unproven and unregulated, and choosing them over science-based medicine actually increases your risk of dying from cancer. This was demonstrated by a paper published earlier this year (Johnson et al. 2018a) that compared people who took only alternative cancer treatments to those who took conventional treatments. It found that those who relied on alternative treatments had a significantly lower survivorship. A second study (by some of the same authors as the first study) was just published and further supports that conclusion, so I want to talk about it for a minute.

This study (Johnson et al. 2018b) differed from the previous study in that all of the patients used at least one round of conventional treatment, but some of them also used alternative treatments. To compare these groups, the authors used a design known as a retrospective cohort study. I discussed this design at length here, but in brief, this means that the researchers took the medical records for a large group of people who had cancer, then went through the records and split them into a group that only reported using conventional treatments and a group that used at least some alternative treatments. Because this study design is entirely observational and these groups weren’t assigned beforehand, there were potential confounding factors. Therefore, the authors matched patients such that for every one person who used alternative treatments, there were four people who were similar in age, ethnicity, cancer type, cancer stage, etc. but used only conventional treatments. This made the groups as similar as possible for all of the known confounding factors, so that differences in outcomes could reasonably be attributed to the treatment choice rather than to those factors.
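To illustrate the 4-to-1 matching idea, here is a simplified sketch in Python. The field names (“alt”, “age_band”, “stage”) and the exact-matching approach are assumptions made purely for this example; the study’s actual matching procedure was considerably more sophisticated:

```python
# Simplified 1:4 exact matching on a couple of covariates.
# Field names ("alt", "age_band", "stage") are invented for this
# example; the real study matched on more variables, more carefully.
from collections import defaultdict

def match_controls(records, ratio=4):
    """For each alternative-medicine patient, select up to `ratio`
    conventional-only patients with identical covariate values."""
    pools = defaultdict(list)
    for r in records:
        if not r["alt"]:  # conventional-only patients form the pools
            pools[(r["age_band"], r["stage"])].append(r)
    matched = {}
    for r in records:
        if r["alt"]:
            pool = pools[(r["age_band"], r["stage"])]
            matched[r["id"]] = [c["id"] for c in pool[:ratio]]
            del pool[:ratio]  # each control is used at most once
    return matched
```

The point of the matching step is simply that, when you can’t randomize, you make the two groups look alike on everything you can measure before comparing their outcomes.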

They used a couple of different methods to analyze the data, but the core analyses were hazard ratios. The simplest way to think about this is that it is the hazard associated with one group divided by the hazard associated with the other group. Thus, a hazard ratio of 2 means that one group has twice the hazard of the other. In other words, if the outcome of interest is death, then, all else being equal, you’d expect one group to have twice the mortality rate of the other at any given time point during the study. The actual math gets a bit complicated because you need to use models that take confounding factors into account, but that’s the idea in a nutshell (note that this is a measure of relative, not absolute, risk).
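To make that intuition concrete, here is a minimal sketch in Python. The numbers are invented purely for illustration; the study itself used survival models (e.g., Cox regression) that adjust for confounders and handle censoring, not a raw rate ratio like this:

```python
# Toy illustration of a hazard ratio as a ratio of simple event rates.
# The numbers below are made up for the example; real hazard ratios
# come from survival models that adjust for confounding factors.

def hazard_ratio(events_a, person_time_a, events_b, person_time_b):
    """Ratio of two event rates (events per unit of follow-up time)."""
    rate_a = events_a / person_time_a
    rate_b = events_b / person_time_b
    return rate_a / rate_b

# 40 deaths over 1,000 person-years vs. 20 deaths over 1,000 person-years:
print(hazard_ratio(40, 1000, 20, 1000))  # 2.0, i.e., twice the hazard
```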

So, what did this study find? There are several interesting results. First, people who used alternative treatments were less likely to use the full rounds of conventional treatments. In other words, many people were choosing alternatives instead of using the recommended schedule of conventional treatments.

Figure 1 from Johnson et al. 2018b showing the difference in survivorship between those who did and did not use alternative cancer treatments.

The second key result was that the mortality rates were higher for people who used alternative treatments. The hazard ratio was 2.08 with a 95% confidence interval of 1.50–2.90. That result was based on a test that did not account for differences in the levels of conventional treatments that were used, and the authors followed up on it by doing a test that incorporated the history of conventional treatments. That second model showed no statistically significant difference between the two groups. This suggests that the increased mortality for people who used alternative treatments was driven by skipping conventional treatments rather than by the alternative treatments themselves. Thus, using the full course of conventional treatments was the key factor for maximizing survival.

So, does this mean that it is fine to take alternative treatments as long as you also take the full line of conventional treatments? Not necessarily. For one thing, there is no proven benefit from those alternatives. Further, out of necessity, this study lumped all alternative treatments together. As a result, the dangers of particular treatments may have been masked by a large number of benign treatments. Some alternatives are clearly dangerous (like drinking bleach and paint thinner), and others may be harmful on their own or have negative interactions with science-based treatments. We simply don’t know.

Risk assessment always has to weigh the benefits against the risks, and given that there are no known benefits of these alternative treatments, in my opinion, the risks and wasted money are not worth it. Having said that, as long as you are following the conventional treatment schedule, it is entirely possible that taking alternatives won’t do any harm (other than to your wallet), but at the very least, tell your doctor exactly what you are taking and make sure that they don’t have any reason to think that your chosen complementary “medicine” is dangerous.

Finally, I want to talk about an important weakness of this study: namely, the fact that, as I just stated, they lumped all alternative treatments together. This was out of necessity, because there are a mind-boggling number of alternative treatments out there, and they only had 258 people in the alternative treatment group. Thus, there simply wasn’t enough statistical power to test any particular alternative remedy. Further, beyond the dizzying array of alternatives available, for such a study, you’d also need information about doses, schedules of administration, etc. In other words, the data simply aren’t there to do a comparison based on specific treatments. As a result, you could try to argue that your specific preferred alternative works, and its effect just couldn’t be detected because of all the statistical noise from the treatments that don’t work. It is technically possible that there were a few effective treatments in there, but that’s a dice-roll. It’s exactly the same as what I described earlier for risks. There may be a few good treatments in the mix, there may be some very harmful ones, there are probably lots of neutral ones, and without properly controlled studies, you have no way of knowing which treatment is which. They all have anecdotes “supporting” them, and for any one of them you can find countless blogs, forums, etc. singing their praises, but you have no objective way of knowing which ones (if any) are actually safe and beneficial. That’s why we have to rely on science, and why I don’t recommend that anyone take these untested treatments.

In short, replacing science-based cancer treatments with alternative or complementary treatments reduces your chance of survival, and you should not rely on them.


Literature cited

  • Johnson et al. 2018a. Use of Alternative Medicine for Cancer and Its Impact on Survival. Journal of the National Cancer Institute 110:121–124.
  • Johnson et al. 2018b. Complementary Medicine, Refusal of Conventional Cancer Therapy, and Survival Among Patients With Curable Cancers. JAMA.