Climate change and a flat earth (I can’t believe we’re still dealing with this argument)

I'm not sure where this meme originated. If anyone knows, please tell me so that I can give credit.

Image via Terry Pratchett’s “Discworld.”

When I started this blog, I wanted it to be entirely about science and the fundamental concepts of logic. As such, I decided that topics like religion and politics were off limits. However, anytime that a politician makes a statement about science, they enter my territory and I feel obliged to deal with their comments (by the same token, I only talk about religion when people use it to make claims about science). One of the most common and irritating ways that this occurs is when politicians try to compare modern science (usually climate research) to the idea that the earth is flat and/or we are the center of the universe. This argument is used ad nauseam and I have dealt with it before. Nevertheless, it is once again featuring prominently in our political dialogue, so I feel the need to explain yet again why it is utter nonsense.

I am, of course, referring to the recent comments by Trump advisor Anthony Scaramucci on CNN. When asked about the scientific consensus on climate change, he made several hopelessly ignorant comments like, “There was an overwhelming science that the Earth was flat, and there was an overwhelming science that we were the center of the world.” Statements like that are, however, absurd for numerous reasons.

First, and most obviously, they are blatantly false. Science is an extremely careful, empirical, and systematic process of making observations, forming hypotheses, and empirically testing those hypotheses by using repeatable, controlled* experiments, and that system is an extremely recent one. It has only existed in its modern form for the last few centuries. In contrast, we have known that the earth was round since the ancient Greeks! So there can’t possibly have been a scientific consensus that the earth was flat because science as we know it did not even exist then! Similarly, the idea that we are the center of the universe was discredited long before modern science, and the idea was largely based on religion. Indeed, although Galileo slightly predates modern science, when he demonstrated that the earth was not the center of the universe, he did so by applying the same basic type of systematic observations and reasoning that scientists use today. Thus, although modern science never said that the earth was the center of the universe, scientific reasoning was the thing that discredited that idea.

I want to elaborate on that last point for a second, because I want to be clear that I am not suggesting that there were no scientists before the modern era. There certainly were, but science was not the formalized discipline that it is now. In other words, although there were people who were making observations, testing hypotheses, etc., there was not a standard for the type of reasoning that qualified as science, nor was there the vast body of philosophical thought about the nature of science that we have today. My point is that the “science” that said that the earth is flat and the earth is the center of the universe is not actually the same thing that we know as science today, thus the comparison is invalid (i.e., it is an equivocation fallacy because it uses the word “science” to mean two different things).

Further, so far I have only been talking about the general concept of what science is, but we also have to take into account the tools that are available to scientists. For example, the statistics and mathematical formulas that we use today have only been around for about 100 years (sometimes much less than that). Those mathematics are extremely important because they are what allow us to rigorously examine our data and assign confidence values to our results. Similarly, today we can use computers to analyze millions of data points, run massive simulations, and do analyses that were unimaginable to scientists just a few decades ago. Further, our ability to accurately make the measurements that we use in those analyses has also increased astronomically. For example, we now have satellites that can continuously collect very precise data from all over the world. Tools like that give us incredible power for understanding our planet and the universe in general, and they give modern science a major advantage over past scientific endeavors.

There is also another problem here, and it is a more fundamental one. If this argument actually worked, then you could use it any time that you wanted, rather than simply for climate change. In other words, if the fact that people used to think that the earth was flat actually gave you carte blanche to reject science, then you would never have to accept any scientific result that you didn’t like. For example, I could say, “scientists say that washing your hands helps prevent the spread of disease, but they were wrong about the earth being flat, so I don’t have to listen to them when they say that we should wash our hands.” That is obviously absurd, but that is exactly the same thing that Scaramucci did. His argument was logically identical, which means that if either of those arguments worked, then both should work, and the fact that the hand washing example is clearly absurd means that the climate change argument must also be absurd. That’s how logic works.

Nevertheless, you might be tempted to think, “but scientists aren’t completely, 100% sure that we are causing climate change, so it is always possible that they are wrong.” Technically, that is true, but here is the important qualifier: science never gives us 100% certainty. It is inherently a skeptical process, and it only tells us what is most likely true given the current evidence, not what is absolutely true (details here). So sure, it is true that I can’t be 100% sure that we are causing climate change, but I also can’t be 100% sure that the earth is round. Indeed, I can’t be 100% sure that you even exist, nor can I be 100% sure that Donald Trump isn’t an alien squid from another universe. However, the fact that I can’t be 100% sure about those things does not mean that I get to have any form of practical doubt about them. It would clearly be ridiculous for me to actually think that Trump is an alien squid, and politicians should not make laws based on that notion. Even so, the fact that we can’t be 100% sure about climate change does not mean that you get to have any practical doubt about it, nor should we be basing laws on that doubt.

In technical terms, what I have been describing is known as an argument from ignorance fallacy. It can occur in many ways, but one of its forms happens when you say, “we can’t be sure that X is true, therefore X is false” or “we can’t be sure that X is true, therefore it is reasonable to believe that it isn’t true.” This is not logically valid. The fact that we can’t be completely certain of a conclusion does not mean that you can assume that all of the evidence supporting that conclusion is false. This comes back to one of my favorite topics: the burden of proof. The person making a claim is always responsible for providing the evidence to support that claim. In other words, if you want to claim that we aren’t causing climate change, then you must provide actual evidence that we aren’t. You can’t simply assert that we can’t be 100% certain, the models might be wrong, etc. If you want to claim that all of the scientific evidence is wrong, then you have to provide evidence to support that claim, not speculations. Further, although it is true that science never proves anything with 100% certainty, there are some topics that are so well supported by so many lines of evidence that it is almost unthinkable that they could actually be totally wrong, and anthropogenic climate change is one of those topics (others include atomic theory, the germ theory of diseases, etc.).

This brings me to the final topic that I want to discuss here: the actual evidence for climate change. You see, so far I have been talking in pretty general terms about science, but Scaramucci’s statements are particularly absurd and misguided when we focus in on climate change, because there is so much evidence supporting it. As I have explained in more detail elsewhere, we have tested the sun, volcanoes, and other natural drivers of climate change, and they can’t explain the current warming by themselves (Meehl et al. 2004; Hansen et al. 2005; Wild et al. 2007; Lockwood and Frohlich 2007, 2008; Lean and Rind 2008; Foster and Rahmstorf 2011; Imbers et al. 2014). However, including our greenhouse gases in the calculations does explain the warming trend (Stott et al. 2001; Meehl et al. 2004; Allen et al. 2006; Lean and Rind 2008; Imbers et al. 2014). In other words, we know that our emissions are driving the warming, because we have tested the natural drivers of climate change, and none of them (even when combined) can account for the current increase in temperatures. Further, we know that CO2 traps heat and is largely responsible for both our climate and past climate changes (Lorius et al. 1990; Tripati et al. 2009; Shakun et al. 2012), we know that we have greatly increased the CO2 in the atmosphere (Bohm et al. 2002; Ghosh and Brand 2003; Wei et al. 2009), and we have used satellites to directly measure the amount of heat energy that CO2 in our atmosphere is trapping, and (just as expected) it is trapping an increasing amount of heat (Harries et al. 2001; Griggs and Harries 2007). In other words, we have direct evidence that our CO2 is causing the earth to trap more heat (i.e., warm). The evidence is overwhelming, and when you look at the scientific literature, you will find thousands of papers saying that we are causing climate change, and only a tiny handful that disagree.
The scientific consensus on this topic is incredible (details here), and the consensus among scientists exists because of the evidence. Scientists do not “believe” in climate change (as Scaramucci erroneously asserts), rather we accept it as fact because of the overwhelming evidence. There is simply no serious debate on this topic among actual scientists.

To illustrate this a different way, a few weeks ago, I attended the annual meeting of the Ecological Society of Australia, and there were lots of talks about the effects that climate change is having and will have on plants and animals, but not one of those talks hedged their comments with statements like, “if climate change is real.” Rather, all of them treated anthropogenic climate change as a scientific fact because that is what it is, and it is absolutely crazy to suggest that we should act as if it isn’t settled just because some non-scientists over 2000 years ago thought that the earth was flat! Further, the fact that a handful of scientists today disagree doesn’t help that argument either because there will always be a few who disagree on any topic, no matter what crackpot notion you believe (this is an appeal to authority fallacy). Having an advanced degree does not guarantee that you know what you are talking about, and you can sadly find plenty of crackpots even among scientists (note: before you accuse me of committing an appeal to authority fallacy by citing the consensus, please read this post on the difference between appealing to authority and deferring to experts).

To be clear, there is disagreement about the extent of climate change that we are going to cause. There are lots of variables that have to be considered, and there are admittedly still topics that we are actively studying in order to refine our models. However, although the models may be imprecise about the extent of the warming, it is almost unthinkable that we are wrong about the warming itself. Indeed, the idea that we are causing the climate to change is not based on the predictive models, rather it is based on research like the papers I presented earlier. Further, our models have actually done a pretty good job of predicting the current warming, so although they aren’t perfect, they probably aren’t that far off (Hansen et al. 2006; Frame and Stone 2012; Rahmstorf et al. 2012; Cowtan et al. 2015; Marotzke and Forster 2015).

In short, the science that said that the earth is flat or that we are the center of the universe was not the same thing as what we know as science today. Further, the past few decades have furnished us with a fantastic array of amazing tools for collecting and analyzing massive amounts of data, and when we apply those tools to the topic of climate change, we get an extremely consistent result: it is real and we are the main cause of it. Because of that evidence, there is no serious debate about this among scientists. Yes, there are a few dissenting voices, but those will always be there. Further, yes, it is always technically possible that we are wrong, but it is also technically possible that Donald Trump is a space squid, and the fact that something is technically possible does not mean that it is rational to think that it is actually true.

*Note: When I said that science uses controlled experiments, I was not referring specifically to randomized controlled trials (RCTs). Although they are a particularly powerful design, they are far from the only one, and they often aren’t applicable in fields outside of medicine. In climate research, for example, we generally can’t do RCTs, but we can still control our experiments and analyses by measuring all of the confounding factors and including those factors in our analyses.

Literature cited

  • Allen et al. 2006. Quantifying anthropogenic influence on recent near-surface temperature change. Surveys in Geophysics 27:491–544.
  • Bohm et al. 2002. Evidence for preindustrial variations in the marine surface water carbonate system from coralline sponges. Geochemistry, Geophysics, Geosystems 3:1–13.
  • Cowtan et al. 2015. Robust comparison of climate models with observations using blended land air and ocean sea surface temperatures. Geophysical Research Letters 42:6526–6534.
  • Foster and Rahmstorf 2011. Global temperature evolution 1979–2010. Environmental Research Letters 7:011002.
  • Frame and Stone 2012. Assessment of the first consensus prediction on climate change. Nature Climate Change 3:357–359.
  • Ghosh and Brand. 2003. Stable isotope ratio mass spectrometry in global climate change research. International Journal of Mass Spectrometry 228:1–33.
  • Griggs and Harries. 2007. Comparison of spectrally resolved outgoing longwave radiation over the tropical Pacific between 1970 and 2003 Using IRIS, IMG, and AIRS. Journal of Climate 20:3982-4001.
  • Hansen et al. 2005. Earth’s energy imbalance: confirmation and implications. Science 308:1431.
  • Hansen et al. 2006. Global temperature change. Proceedings of the National Academy of Sciences 103:14288–14293.
  • Harries et al. 2001. Increases in greenhouse forcing inferred from the outgoing longwave radiation spectra of the Earth in 1970 and 1997. Nature 410:355–357.
  • Imbers et al. 2014. Sensitivity of climate change detection and attribution to the characterization of internal climate variability. Journal of Climate 27:3477–3491.
  • Lean and Rind. 2008. How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006. Geophysical Research Letters 35:L18701.
  • Lockwood and Frohlich. 2007. Recently oppositely directed trends in solar climate forcings and the global mean surface air temperature. Proceedings of the Royal Society A 463:2447–2460.
  • Lockwood and Frohlich. 2008. Recently oppositely directed trends in solar climate forcings and the global mean surface air temperature. II. Different reconstructions of the total solar irradiance variation and dependence on response time scale. Proceedings of the Royal Society A 464:1367–1385.
  • Lorius et al. 1990. The ice-core record: climate sensitivity and future greenhouse warming. Nature 347:139–145.
  • Marotzke and Forster. 2015. Forcing, feedback and internal variability in global temperature trends. Nature 517:565–570.
  • Meehl et al. 2004. Combinations of natural and anthropogenic forcings in the twentieth-century climate. Journal of Climate 17:3721–3727.
  • Rahmstorf et al. 2012. Comparing climate projections to observations up to 2011. Environmental Research Letters 7:044035.
  • Shakun et al. 2012. Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation. Nature 484:49–54.
  • Stott et al. 2001. Attribution of twentieth century temperature change to natural and anthropogenic causes. Climate Dynamics 17:1–21.
  • Tripati et al. 2009. Coupling CO2 and ice sheet stability over major climate transitions of the last 20 million years. Science 326:1394–1397.
  • Wei et al. 2009. Evidence for ocean acidification in the Great Barrier Reef of Australia. Geochimica et Cosmochimica Acta 73:2332–2346.
  • Wild et al. 2007. Impact of global dimming and brightening on global warming. Geophysical Research Letters 34:L04702.



Evolutionary mechanisms part 7: Gene flow

So far in this series, we have talked about natural selection, sexual selection, genetic drift, and mutations, but that still leaves us with one final and very important evolutionary mechanism: gene flow. This simply refers to the movement of genetic information among populations (migration, for example, causes gene flow). On the surface, that may sound quite simple and boring, but it is actually fascinating, complex, and immensely important. For example, gene flow often plays a critical role in determining whether or not an organism will speciate (i.e., split into two species). Similarly, gene flow interacts with genetic drift and selection, and low levels of gene flow may predispose a population to extinction, whereas high levels may be equally harmful by preventing a population from adapting. Therefore, in the final post of this series, I want to talk about gene flow and explain how it interacts with other evolutionary mechanisms and determines the fate of many populations.

Maintaining connectivity

When we talk about gene flow, we are generally talking about the movement of genetic information from one population to another. This can occur either by individuals moving among populations (and mating) or by mechanisms that just transfer genetic material (e.g., wind and animals moving the seeds and pollen of plants).

One of the most powerful traits of gene flow is its ability to prevent populations from splitting apart. If you recall from previous posts, selection and genetic drift are constantly shifting the allele frequencies of populations, but which allele becomes common can vary from one population to the next. Genetic drift is, of course, random. Thus, for a given genetic locus, one allele might become common in one population, while becoming quite rare in another. Similarly, selection (which is non-random) simply adapts populations for their current environment. As such, we expect that two populations that are living in different habitats would be under different selection pressures and thus evolve in different directions. Finally, mutations randomly create new genetic information, so if a mutation occurs in population A but not in population B, that will result in yet another genetic difference between the two populations.

The result of all of this is that when two populations are isolated, they both slowly accumulate genetic differences. Different mutations will arise in each one, different alleles will be selected for, and different alleles will be affected by genetic drift. Over many generations, these populations will eventually accumulate so many differences that we would consider them to be different species. This is called allopatric speciation (i.e., the formation of two new species from isolated populations).

Gene flow can, however, counteract that process by moving genetic information from one population into another. Imagine, for example, that a new mutation arises in population A but not population B. However, in each generation, a large number of individuals migrate between the two populations. As a result, some of the individuals who have the mutation carry it with them into population B where they mate with individuals of that population and pass the mutation on. Similar patterns also occur with selection and drift (more on that in a minute). Gene flow constantly shuffles the genetic information around and causes alleles to be exchanged among populations. Thus, it mixes the populations together and can prevent them from splitting apart.
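To put a number on that homogenizing effect, here is a minimal deterministic sketch (my own illustration, not from the post or any cited paper) of a two-population island model, in which a fraction m of each population is replaced by migrants from the other every generation. The function name, starting frequencies, and migration rate are all arbitrary choices for the example:

```python
# Two populations start with very different frequencies of the same allele;
# symmetric migration at rate m pulls those frequencies together each generation.

def migrate(p_a: float, p_b: float, m: float) -> tuple[float, float]:
    """One generation of symmetric migration between populations A and B."""
    new_a = (1 - m) * p_a + m * p_b
    new_b = (1 - m) * p_b + m * p_a
    return new_a, new_b

p_a, p_b = 0.9, 0.1          # initially divergent populations
for generation in range(50):
    p_a, p_b = migrate(p_a, p_b, m=0.05)

print(round(p_a, 3), round(p_b, 3))  # prints 0.502 0.498 -- nearly identical
```

Even this modest migration rate erases most of the difference within 50 generations, which is why isolated populations diverge while connected ones tend not to.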

Note: Although low levels of gene flow can be a very important driver of speciation, they are not required. There is also sympatric speciation which occurs when a connected population evolves in two separate directions and becomes two separate species (differences in mating patterns are a good way for this to happen).

Interactions with genetic drift

Gene flow is often extremely important for the long-term survival of a population, and maintaining gene flow is often a conservation priority. This is because gene flow is a powerful way to counteract genetic drift. If you remember from previous posts, genetic drift is constantly removing variation from a population, but that variation is important because it is what allows populations to cope with future changes to the environment, new diseases or predators, etc. Indeed, it is well established that genetically diverse populations are superior to low-diversity populations (Hedrick and Kalinowski 2000, Keller and Waller 2002, Reed and Frankham 2003, Spielman et al. 2004, Whiteman et al. 2006, Hughes et al. 2008). Additionally, in very small populations, genetic drift can actually overpower selection and cause harmful alleles to rise to prominence (Lacy, 1987; Whitlock, 2000; see this post).

This shows gene flow's ability to counteract genetic drift. In the top panel (A) there is no gene flow, and alleles became fixed in each population by 200 generations. In contrast, in the lower panel (B) there is gene flow, and alleles never stay fixed. Thus, gene flow allowed the populations to maintain a higher diversity.

Gene flow can, however, counteract this by constantly restocking the genetic diversity of a population. Consider the examples illustrated on the left. In the top example, I simulated the frequency of a given allele in 10 populations of 25 individuals. At the start of each simulation, the allele had a frequency of 50% and there was no selection. In the first set of 10 populations, there was no gene flow, and you can see that the frequency of that allele changed randomly (sometimes increasing, sometimes decreasing), and by 200 generations, in each population it had either become fixed (i.e., everyone had it and the alternative allele was lost) or it was totally removed from the population. In other words, in every case, diversity was lost because of genetic drift. Now, look at the second set of 10 populations (bottom). That simulation was set up identically, but at the start of each generation, I had one individual immigrate from a large neighboring population that had an allele frequency of 50% for the allele in question. As you can see, neither allele ever becomes fixed for more than a few generations, because even if one is lost, gene flow reintroduces it. Thus, the populations maintain a higher diversity than they would without gene flow.
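A simulation along these lines is easy to sketch. The code below is my own reconstruction of the setup described above, not the author’s actual code: a basic Wright–Fisher model in which each generation is a random sample of the previous generation’s gene copies, with an optional one-immigrant-per-generation swap from a large source population whose allele frequency stays at 50%:

```python
import random

def wright_fisher(n_ind=25, p0=0.5, generations=200, migrant=False, seed=1):
    """Track one allele's frequency in a small population under genetic drift.

    With migrant=True, one individual per generation is swapped for an
    immigrant from a large source population where the allele frequency
    is held at 0.5 (the one-migrant-per-generation scenario above).
    """
    rng = random.Random(seed)
    n_copies = 2 * n_ind              # diploid: 25 individuals = 50 gene copies
    p = p0
    for _ in range(generations):
        # drift: the next generation is a random sample of gene copies
        count = sum(rng.random() < p for _ in range(n_copies))
        if migrant:
            # one immigrant replaces one resident (2 gene copies swapped)
            count -= sum(rng.random() < count / n_copies for _ in range(2))
            count += sum(rng.random() < 0.5 for _ in range(2))
            count = max(0, min(n_copies, count))
        p = count / n_copies
    return p

# isolated populations tend to fix one allele (p = 0 or 1) within 200 generations
isolated = [wright_fisher(seed=s) for s in range(10)]
# connected populations keep having lost alleles reintroduced by gene flow
connected = [wright_fisher(migrant=True, seed=s) for s in range(10)]
```

Running the two lists for several seeds reproduces the qualitative pattern in the figure: the isolated populations mostly end up at the boundaries, while the connected ones keep bouncing back from them.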

Gene flow can also cause something called the “rescue effect.” Many things such as disease outbreaks, over-harvesting, natural disasters, etc. can cause populations to go through large genetic bottlenecks where they lose a large portion of their genetic diversity. That is very bad for them because, as mentioned earlier, diversity is important for the long-term survival of a population. Gene flow can, however, rescue those populations by resupplying them with some of the diversity that they had lost (Wright 1931, Slatkin 1985, 1987, Whiteley et al. 2015). In other words, if you have two neighboring populations, and one goes through a bottleneck while the other does not, the population that did not go through a bottleneck can rescue the bottlenecked population via gene flow.

Interactions with selection

When gene flow interacts with genetic drift, it is usually beneficial, but when it interacts with selection, it can be very harmful, because it can introduce alleles that aren’t locally adaptive, and those alleles can swamp selection (García-Ramos and Kirkpatrick 1997, Lenormand 2002, Kawecki and Ebert 2004, Foster et al. 2007, Funk et al. 2012). Remember, selection just adapts populations for their current environment. So selection for a given trait may be strong in one population, but weak, non-existent, or even reversed in a neighboring population. As a result, selection may be acting to remove an allele from one population, while gene flow is constantly reintroducing that allele from a neighboring population where the allele is actually beneficial (or at least not harmful).

This shows gene flow's ability to counteract natural selection. In the top panel (A), there is no gene flow, and selection removes one allele from the population as expected. In contrast, in the lower panel (B) there is a high level of gene flow, and selection never manages to remove the harmful recessive allele from the populations.

I simulated this and have presented the results on the right. The populations (top) were set up in exactly the same way as for my gene flow example, but this time, I added selection. Individuals that had the dominant phenotype had a 100% chance of surviving and reproducing, whereas individuals with the recessive phenotype only had a 90% chance (i.e., there was selection against the recessive allele). As you can see, selection quickly removed the recessive allele (thus the dominant allele had a 100% frequency after only a few generations). If I showed you the survivorship for these populations, you would also see that their survivorship has increased (i.e., they adapted).

Now, let’s do the exact same thing, but let’s have one immigrant per generation, just like in the gene flow example (bottom). As with the gene flow, neither allele ever became fixed. The frequency of the beneficial dominant allele did shift up a bit from the starting point on average, but it never stayed at 100% like it did without gene flow because the gene flow was constantly reintroducing the harmful recessive allele. As a result, these populations could not adapt as well as the populations without gene flow, and they had a lower survivorship. This can be very problematic for populations, especially if they are being faced with something like an introduced disease. It is also a big concern for conservation efforts that move individuals among populations (thus creating artificial gene flow). The flip side of that, however, is that you can, in concept, do targeted gene flow where you take individuals from a population that has a beneficial allele and move those into a population that would benefit from that allele (e.g., take individuals from a population that has adapted to a disease and move them into a population that is still declining from the disease; Kelly and Phillips 2015).
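The selection scenario can be sketched the same way (again, my own hedged reconstruction, not the post’s actual code). Genotypes are drawn at Hardy–Weinberg proportions, and only the recessive (aa) phenotype pays the 10% survival cost described above:

```python
import random

def select_generation(freq_a, n_ind, rng, s=0.1):
    """One generation of random mating followed by selection against aa.

    Returns the post-selection frequency of the recessive allele a.
    """
    survivors = []
    for _ in range(n_ind):
        copies_of_a = sum(rng.random() < freq_a for _ in range(2))  # 0, 1, or 2
        survive_p = 1.0 - s if copies_of_a == 2 else 1.0            # only aa penalized
        if rng.random() < survive_p:
            survivors.append(copies_of_a)
    total_copies = 2 * len(survivors)
    return sum(survivors) / total_copies if total_copies else 0.0

rng = random.Random(0)
p = 0.5
for _ in range(200):
    p = select_generation(p, n_ind=25, rng=rng)
# in most runs, selection plus drift drives the recessive allele out entirely;
# adding one immigrant per generation (as in the bottom panel) keeps
# reintroducing it and holds the population away from fixation
```

This is the no-gene-flow panel; swapping in one immigrant per generation from a 50%-frequency source population, as described above, is what prevents the allele from ever being fully purged.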

Conclusion

In short, gene flow simply refers to the movement of genetic information among populations. It is an incredibly important evolutionary mechanism because it can prevent populations from splitting apart and it can counteract genetic drift and rescue low diversity populations. However, in some situations it can also counteract natural selection and flood a population with maladaptive alleles. Thus, although it is often beneficial, gene flow can be harmful in some situations.

Literature cited (I ran out of time to look up the hyperlinks for all of the citations, sorry. I’ll try to add them in later, but in the meantime you should have no trouble finding them if you just paste the article title into Google Scholar)

  • Foster, J. T., B. L. Woodworth, L. E. Eggert, P. J. Hart, D. Palmer, D. C. Duffy, and R. C. Fleischer. 2007. Genetic structure and evolved malaria resistance in Hawaiian honeycreepers. Molecular Ecology 16:4738–4746.
  • Funk, W. C., J. K. McKay, P. A. Hohenlohe, and F. W. Allendorf. 2012. Harnessing genomics for delineating conservation units. Trends in Ecology and Evolution 27:489–496.
  • García-Ramos, G., and M. Kirkpatrick. 1997. Genetic models of adaptation and gene flow in peripheral populations. Evolution 51:21–28.
  • Hedrick, P. W., and S. T. Kalinowski. 2000. Inbreeding depression in conservation biology. Annual Review of Ecology and Systematics 31:139–162.
  • Hughes, A. R., B. D. Inouye, M. T. J. Johnson, N. Underwood, and M. Vellend. 2008. Ecological consequences of genetic diversity. Ecology Letters 11:609–623.
  • Kawecki, T. J., and D. Ebert. 2004. Conceptual issues in local adaptation. Ecology Letters 7:1225–1241.
  • Keller, L. F., and D. M. Waller. 2002. Inbreeding effects in wild populations. Trends in Ecology and Evolution 17:230–241.
  • Kelly, E., and B. L. Phillips. 2015. Targeted gene flow for conservation. Conservation Biology 30:259–267.
  • Lacy, R. C. 1987. Loss of genetic diversity from managed populations: interacting effects of drift, mutation, immigration, selection, and population subdivision. Conservation Biology 1:143–158.
  • Lenormand, T. 2002. Gene flow and the limits to natural selection. Trends in Ecology and Evolution 17:183–189.
  • Reed, D. H., and R. Frankham. 2003. Correlation between fitness and genetic diversity. Conservation Biology 17:230–237.
  • Slatkin, M. 1985. Gene flow in natural populations. Annual Review of Ecology and Systematics 16:393–430.
  • Slatkin, M. 1987. Gene flow and the geographic structure of natural populations. Science 236:787–792.
  • Spielman, D., B. W. Brook, D. A. Briscoe, and R. Frankham. 2004. Does inbreeding and loss of genetic diversity decrease disease resistance? Conservation Genetics 5:439–448.
  • Whiteley, A. R., S. W. Fitzpatrick, W. C. Funk, and D. A. Tallmon. 2015. Genetic rescue to the rescue. Trends in Ecology and Evolution 30:42–49.
  • Whiteman, N. K., K. D. Matson, J. L. Bollmer, and P. G. Parker. 2006. Disease ecology in the Galapagos hawk (Buteo galapagoensis): host genetic diversity, parasite load and natural antibodies. Proceedings of the Royal Society of London B: Biological Sciences 273:797–804.
  • Whitlock, M. C. 2000. Fixation of new alleles and the extinction of small populations: drift load, beneficial alleles, and sexual selection. Evolution 54:1855–1861.
  • Wright, S. 1931. Evolution in Mendelian populations. Genetics 16:97–159.

No, homeopathic remedies can’t “detox” you from exposure to Roundup: Examining Séralini’s latest rat study

Image by Mark Philpott, shared via the Creative Commons license.

One of my main goals for this blog is to help people learn how to evaluate scientific studies. To that end, I have written several posts that dissect papers and explain either why they are robust or why they are untrustworthy (for example, see my posts on Splenda, GMOs, and vaccines). These posts have the dual goals of debunking bad science and helping people think critically, and the time has come for me to write another one of these posts. Earlier this week, someone showed me a recent study which they claimed proved that detoxing is a real thing and there are natural remedies that help your body rid itself of toxins. The study in question is, “Dig1 protects against locomotor and biochemical dysfunctions provoked by Roundup.” As you might imagine, it is less than an exceptional paper. Indeed, it was such a blatantly horrible paper that I thought it would make a good teaching tool to illustrate some of the things that you should watch out for in scientific studies. I’ll summarize the main points below, but I encourage you to read the paper for yourself and see if you can spot the problems with it before you read any further.

I have organized this post as a progression, starting with problems that are concerning but not fatal, then moving to issues that limit the paper’s conclusions, and ending with problems that completely nullify the paper. I have chosen this order because it is also the progression of knowledge required to spot the problems. Most people should be able to see the red flags that I will start with, so even if you don’t have the statistical knowledge to spot the more technical problems, you can still use those early warning signs as clues that the paper should be scrutinized closely before accepting it.

Authors and conflicts of interest

It is always a good idea to look at both the authors who wrote the paper and the funding sources. Some scientists have reputations for publishing crappy or even fraudulent research, and you should be wary of them. Similarly, financial conflicts of interest should make you more skeptical of a study. Having said that, I want to be absolutely, 100% clear that you cannot claim that a study is wrong simply because of the people who wrote it or their funding sources. Those things are red flags that should make you cautious and should make you look at a paper more closely, but they are not in and of themselves enough to sink a paper (i.e., using them as the basis for outright rejection is an ad hominem/genetic fallacy). Let me put it this way, if I have a study that has some sections that are unclear, but it was written by reputable scientists and did not have any conflicts of interest, then I will probably give the authors the benefit of the doubt. In contrast, if that same paper had been written by notoriously second-rate scientists and/or had serious conflicts of interest, I would be far less willing to give the authors a pass. Another consideration is the general body of literature surrounding the paper. Extraordinary claims require extraordinary evidence, and it is always suspicious when a paper that conflicts with a large body of literature was also written by a fringe scientist and funded by people who stand to benefit from the paper.

Now that all of that has been said, let’s look at the paper itself. The first thing that jumps out is the fact that the final author on this paper is Gilles-Éric Séralini (the last author position is usually reserved for the most senior scientist who was in charge of the overall project). Séralini, for anyone who doesn’t know, is infamous for publishing low-quality, fringe studies in opposition to biotechnology (specifically GMOs). Indeed, he was the author on the infamous rat study which purported to show that GMOs caused cancer in rats, but actually only showed that Séralini doesn’t understand the concept of a proper control. Indeed, the study was so horrible that it was retracted, at which point, Séralini re-published it through a minor and questionable journal that didn’t even bother to send the paper out for review (hardly the actions of a proper scientist).

We aren’t off to a good start, but things get even worse when we look at the funding. The paper is about the supposed benefits of a homeopathic product known as Digeodren (aka Dig1), but it was funded by the company that produces Digeodren (Sevene Pharma). The authors try to get around this by saying, “The authors declare that they have no competing interests. The development of Dig1 by Sevene Pharma was performed completely independently of its assessment,” but that is just a red herring. The fact that the development and testing of Digeodren were separate is completely irrelevant. The point is that the study was funded by the same company that both produces Digeodren and stands to benefit from it. That is, by any reasonable definition, a serious conflict of interest.

Again, to be 100% clear, I am not saying that the study is invalid because it was funded by Sevene Pharma, nor am I saying that it is invalid because it was conducted by Séralini, but both of those things are serious red flags, and the rest of the study will need to be impeccable if we are going to overlook them.

The journal that published the paper

Another quick and easy thing to look at is the quality of the journal that published the paper. You need to be careful when using this tool, however, because there is plenty of good research that is published in minor journals simply because it is not of a “broad enough scope” or “wide enough impact” to interest major journals. So you need to judge journal quality against the claims being made in a paper. In other words, when a paper is making extraordinary claims but was published in a minor journal, you should be skeptical. As with the authors and conflicts of interest, however, this is not enough to sink a paper, but it is a red flag to watch out for.

So how does our paper do? Well, it is claiming not only that a homeopathic remedy works (more on that in a minute), but also that it can help to remove toxins from your body. Both of those are extraordinary claims that fly in the face of a large body of literature. In other words, if those claims were well supported, then this paper would be of extremely wide interest and should be published in a top journal. Therefore, the fact that it showed up in a fringe journal (BMC Complementary and Alternative Medicine) is yet another warning sign that something is seriously wrong with it.

Extraordinary claims require extraordinary evidence

As alluded to earlier, you should always consider the a priori plausibility of the claims being made in a paper (i.e., how likely they are to be true given the results of other studies). In other words, if a paper is simply reporting something that dozens of other papers have reported, then you don’t need to be too critical (you should still evaluate it, but it requires less scrutiny). In contrast, when a paper is reporting something extraordinary that conflicts with numerous other papers, then the paper needs to present extraordinary evidence to support its claims.

In this case, the claims of the paper are in fact quite extraordinary. First, it is testing a homeopathic remedy. I explained the folly of homeopathy in more detail here, but in short, it relies on the utterly absurd notions that diluting something makes it stronger, that like cures like, and that water has memory. In other words, homeopathy violates several of our most fundamental scientific concepts. Again, that does not automatically mean that it is wrong, because it is always technically possible (albeit very unlikely) that those concepts are in fact flawed. However, if you want to claim that they are flawed, you need to provide some extraordinary evidence, and in the case of homeopathy, that evidence is nowhere to be found. Indeed, systematic reviews of the literature show that homeopathy is nothing more than a placebo. Similarly, detox supplements, shakes, foot baths, etc. are scams. Your body already does a very good job of keeping potentially harmful chemicals at safe levels, and no natural remedies have been shown to actually remove toxins.

Given the weight of evidence against the claims being made by this paper, it would need to be an outstanding study to be convincing. It would need enormous sample sizes, extremely rigorous controls, careful statistics, etc. In other words, it would need to meet an extremely high bar, but as I will demonstrate, it fails to do that.
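One way to make "extraordinary claims require extraordinary evidence" concrete is a back-of-the-envelope calculation of how likely a "significant" result is to be a true positive, given the prior plausibility of the hypothesis being tested. The prior, power, and alpha values below are illustrative assumptions of mine, not numbers from this paper:

```python
# Probability that a "significant" result reflects a real effect,
# given how plausible the hypothesis was before the experiment:
# PPV = (power * prior) / (power * prior + alpha * (1 - prior))

def positive_predictive_value(prior, power=0.8, alpha=0.05):
    """Chance that a statistically significant finding is a true positive."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# A mainstream hypothesis (50% prior) vs. a homeopathy-style claim
# that conflicts with well-established science (1% prior, generously).
print(round(positive_predictive_value(0.50), 3))  # 0.941
print(round(positive_predictive_value(0.01), 3))  # 0.139
```

In other words, under these illustrative assumptions, a "significant" result for a highly implausible hypothesis is still far more likely than not to be a false positive, which is exactly why such a paper needs to clear a much higher evidentiary bar.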

The importance of the introduction

You can often tell a lot about a paper by its introduction (called the “Background” in this journal). This is where authors are supposed to review the current state of knowledge on the topic of the paper and make the case for why their study is interesting and important. When authors fail to do that convincingly, it is often a sign of underlying problems with the study.

In this case, the introduction is quite short and has several irregularities. First, many of the cited papers were other Séralini studies. That is not a good sign. There is a lot of other relevant literature out there that should have been included (much of which disagrees with Séralini’s studies). Similarly, several of the cited studies are questionable at best. Indeed, one of their central arguments hangs on a citation to the aforementioned GMO rat study that was so flawed that it was retracted.

Further, the authors cited several relevant papers about the properties of the active ingredients in Digeodren, but they totally failed to mention that Digeodren is a homeopathic remedy and those active ingredients are only present at extremely low concentrations (in this case about 1–10 parts in 100,000). They did mention this briefly in the methods, but its omission from the introduction is extremely troubling. If you are going to do a study on homeopathy, you had dang well better discuss the existing literature on that topic.

Methods: animal trials

Now we finally get to the core of the paper itself, and the first thing that jumps out is the fact that this was an animal trial. As I explained in more detail here, humans have a different physiology than other animals. As such, animal studies have a fairly limited applicability to humans. Therefore, they should be used to identify treatments that are good candidates for human trials, but you cannot jump from saying that something works in rats to saying that it works in humans. To be clear, I am not saying that the results of animal studies are wrong. Indeed, in many cases, the drug in question does in fact work in the species that was being tested, but the fact that it worked in that species does not automatically mean that it will work in humans. As a result, you need to be careful about applying the results of animal studies to humans.

Methods: experimental design

Their experimental design was pretty simple. They took a group of 160 rats and randomly divided them into four groups of 40. One group was kept as a control and did not receive any form of treatment, one group received Roundup in its water, one group received Digeodren in its water, and one group received both Digeodren and Roundup in its water. That’s not a terrible design, but it is also not a great design. A much better approach would have been to include a blocking element.

Imagine, for example, that there was a slight thermal gradient in the lab where the rats were housed, and the cage rack containing the control rats ended up on the warm end, while the cage rack with the Roundup rats ended up on the cool end. That introduces a new variable and can have dramatic effects on the study. You’d be surprised how much a little thing like that can skew a result. Thus, a much better approach is to do what is known as “blocking.” To do this, instead of having four sets of cages, with each set containing a different group, you have members of each treatment group in each set of cages. In other words, for each set of cages, you randomly select 10 cages from each of your four treatment groups. That way, each set (what we would call a “block”) has 10 individuals from each treatment group (the position of the cages within each block should also be randomized). Now, if there is a thermal gradient (or any other confounding factor), it balances out because it affects all of your treatment groups equally. Further, you can (and should) include that blocking variable in your analyses to actually test for confounding factors across your sets of cages. Failing to block like that is not always fatal (depending on the type of experiment), but it does make me far less confident in the results, and remember, to accept this particular paper, it needs to be an extraordinarily good paper.
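To make the blocking scheme concrete, here is a minimal sketch in Python (my own illustration, not the authors’ setup, and assuming one rat per cage) of distributing the four treatment groups across four blocks with cage positions randomized within each block:

```python
import random

random.seed(1)  # fixed seed so the layout is reproducible

treatments = ["control", "roundup", "dig1", "dig1+roundup"]
n_blocks = 4    # four sets of cages (blocks)
per_block = 10  # 10 rats from each treatment group per block

blocks = []
for _ in range(n_blocks):
    # each block gets 10 cages from every treatment group...
    cages = [t for t in treatments for _ in range(per_block)]
    # ...and cage positions are shuffled within the block
    random.shuffle(cages)
    blocks.append(cages)

# Every block now contains all four treatments in random positions,
# so a confounder that varies across blocks (e.g., a thermal gradient)
# hits all four treatment groups equally.
for cages in blocks:
    assert all(cages.count(t) == per_block for t in treatments)
```

The block identity would then be included as a variable in the analysis, which is what lets you actually test for confounding across the sets of cages.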

This shows three different setups for the same experiment comparing four groups of 40 rats (I am assuming one rat per cage). On the far left, you have what seems to be being described by this study (each experimental group is separate). This is a weak design. A better design is what you see in the middle where you have representatives from each experimental group within each "block." The best design is then to randomize the location of the cages within those blocks (as seen on the far right).


A second issue is that this experiment wasn’t blinded. In other words, the researchers knew which rats were in each treatment group. That makes it very easy for their biases to inadvertently influence the experiment, especially given that one of the researchers has a reputation for publishing agenda-driven papers (again, even a slight difference in how the rats were treated could have affected things).

Note: the authors were a bit vague about how their cages were set up, so it is not clear how many rats were in each cage or how many sets of cages there were. However, it is clear that they did not use a proper blocking design.

Methods: The doses

Anytime that you are looking at a toxicology study, you have to look at the doses to see if they are reasonable. Remember, everything (even water) is toxic at a high enough dose. So when a study is looking at an environmental chemical like Roundup, it is important that they use a dose that you would realistically be exposed to in the environment. Otherwise, the study has no real applicability.

In this study, the rats in the Roundup group were given 135 mg/kg of Roundup daily. After converting that to a human dose, we find that it is the equivalent of a human consuming 21.9 mg/kg daily. That is an insanely high dose. The exact acceptable daily intake (ADI) for glyphosate (the active ingredient in Roundup) varies by country, but it is much lower than that. In Australia, for example, it is 0.3 mg/kg, whereas the WHO sets it at 1 mg/kg. The dose in the experiment is also well above the levels that people are normally exposed to. Even if I want to be generous and assume that the questionable estimates put forth by the “detox project” are correct and people in the US are eating up to 3 mg/kg of glyphosate daily, the dose that the rats received is still seven times that!

To put it simply, this study is worthless because the dose is so unrealistic. Even if the authors had successfully demonstrated that Digeodren did something useful at those levels, that would not in any way, shape, or form indicate that it does anything useful at normal levels of Roundup exposure.
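For anyone who wants to check the conversion, the standard approach is the FDA’s body-surface-area scaling, which multiplies the animal dose by the ratio of the species’ Km factors (6 for rats, 37 for adult humans); it reproduces the 21.9 mg/kg figure above:

```python
# Human equivalent dose (HED) via body-surface-area scaling:
#   HED = animal_dose * (animal_Km / human_Km)
KM_RAT, KM_HUMAN = 6, 37  # standard FDA Km conversion factors

def human_equivalent_dose(rat_dose_mg_per_kg):
    return rat_dose_mg_per_kg * KM_RAT / KM_HUMAN

hed = human_equivalent_dose(135)  # the study's daily Roundup dose
print(round(hed, 1))              # 21.9 mg/kg/day

# Compare against regulatory acceptable daily intakes (mg/kg/day)
adi = {"Australia": 0.3, "WHO": 1.0}
for agency, limit in adi.items():
    print(agency, round(hed / limit), "times the ADI")
```

Even against the generous 3 mg/kg/day "detox project" estimate, 21.9 / 3 ≈ 7, which is where the "seven times" figure comes from.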

Methods: Statistics

Finally, we get to the biggest problem with this study (IMO), and this one would sink it even if it were the only thing wrong with the paper. It is a problem that I write about a lot on this blog, so you may already know where I am going with this. The problem is multiple comparisons. In technical terms, the authors failed to control the family-wise type 1 error rate. In layman’s terms, this was a statistical fishing trip. They simply did enough comparisons that they eventually got a few that were “significant” just by chance.

I’ve previously written lengthy posts about this, but to be brief, standard statistical tests like the ones the authors used rely on probabilities for determining statistical significance. In other words, they report “P values,” which give the probability of getting an effect as large as or larger than the one you observed if there is not actually an effect. It’s not technically correct, but you can think of this as the probability that you could get your result just by chance. To apply this to our study, they were looking for differences among their groups, so the P values were the probabilities of getting differences as large as or larger than the differences that they observed if the treatments don’t actually cause a difference. To actually determine if something is “statistically significant,” we compare it to a pre-defined threshold known as “alpha.” In biology, the alpha value is usually 0.05, so any P value less than that is considered significant. What a P value of 0.05 really means, however, is that there is a 5% chance that you could get a difference that large or larger just by chance. This is really important, because it means that you will occasionally get “significant” results that arose just by chance, and we call those statistical flukes type 1 errors.

Following all of that, it should make intuitive sense that as you make more comparisons, the odds of getting at least one false positive increase. In other words, if you do enough comparisons, you will eventually find some results that are statistically significant just by chance. So your error rate across all of your tests is actually much higher than 0.05. This is what we call the family-wise type 1 error rate, and it is extremely important. To compensate for it, you should do two things. First, at the outset of your study, you should have a clear prediction of what you expect to be true if your hypothesis is correct, and you should only make the comparisons that are necessary for testing that prediction. You should not make a whole bunch of haphazard comparisons and hope that something significant comes out. Second, if you end up using multiple tests to answer the same question (e.g., does drug X work?) then you need to control the family-wise error rate by adjusting your alpha value (this is usually done through a Bonferroni correction). In its simplest terms, this makes the alpha more stringent as you increase the number of comparisons that you do.
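As a rough illustration (assuming the tests are independent, which is a simplification), the family-wise error rate for 29 comparisons, and what a Bonferroni correction does to it, work out as follows:

```python
alpha, m = 0.05, 29  # significance threshold and number of comparisons

# Family-wise error rate with no correction: the chance of at least
# one false positive across m independent tests is 1 - (1 - alpha)^m.
fwer = 1 - (1 - alpha) ** m
print(round(fwer, 2))  # 0.77 -> a 77% chance of >= 1 false positive

# Bonferroni correction: each individual test must clear alpha / m.
bonferroni_alpha = alpha / m
print(round(bonferroni_alpha, 5))  # 0.00172

# The corrected family-wise rate drops back to roughly alpha.
corrected_fwer = 1 - (1 - bonferroni_alpha) ** m
print(round(corrected_fwer, 3))  # 0.049
```

So with 29 uncorrected comparisons, finding "something significant" is not evidence of anything; it is close to a statistical certainty.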

So, how did our intrepid scientists do? In short, not well. They made a whopping 29 comparisons, only 8 of which showed any form of significance, and only 6 of which showed significance in a direction that would suggest that Digeodren does anything useful. Further, they did not control the error rate among these tests. In other words, they did exactly the opposite of what you are supposed to do. They went on a fishing trip looking for significance rather than only testing a small set of pre-defined expectations. They made so many comparisons that they got some statistically significant results just by chance. To put this another way, if I set up the exact same experiment with four groups of rats, but I did not give any of them Digeodren or Roundup, and I made the same 29 comparisons among those four groups, I would expect to get several significant results, even though I treated all four groups exactly the same. Their results are statistical flukes, nothing more.
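The thought experiment in that last sentence (four identically treated groups, the same 29 comparisons) is easy to simulate. This is a simplified sketch of mine that assumes independent tests and uses the fact that, under the null hypothesis, p-values are uniformly distributed:

```python
import random

random.seed(42)  # fixed seed for a reproducible illustration

def fishing_trip(n_tests=29, alpha=0.05):
    """Count 'significant' results when no real effect exists anywhere.

    Under the null hypothesis every p-value is a uniform(0, 1) draw,
    so each test independently has an alpha chance of a false positive.
    """
    return sum(random.random() < alpha for _ in range(n_tests))

trials = [fishing_trip() for _ in range(10_000)]
mean_false_positives = sum(trials) / len(trials)
runs_with_a_hit = sum(t >= 1 for t in trials) / len(trials)

print(mean_false_positives)  # theoretical expectation: 29 * 0.05 = 1.45
print(runs_with_a_hit)       # theoretical expectation: 1 - 0.95**29, about 0.77
```

In other words, an experiment with no real effects anywhere still produces one or two "significant" results on average, and comes up with at least one in roughly three out of every four runs.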

Finally, they did not report their P values for each comparison, which means that we can’t even properly assess their results (see the note below). If they had reported a table of P values like they should have, we could do the Bonferroni correction ourselves, but since they failed to do that, we have nothing to go on.

To be clear, in most cases, the fact that an author did not control their error rate would not automatically mean that their results were statistical flukes, but it would mean that we should consider their paper untrustworthy and reject it. However, in this particular case, there is another important factor to consider. Namely, all of the existing evidence that homeopathy doesn’t work. When you consider that evidence, and the low quality of the experimental design of this particular study, the most rational conclusion is that the results are wrong rather than simply untrustworthy.

Note: If you read the paper, you will see a reference to a Bonferroni test as well as P values, but they only used those within a test rather than across tests. In other words, the tests that they were using (ANOVA and Kruskal-Wallis) make comparisons among several groups (in this case the four treatment groups) and report a single P value that tells you whether or not at least one significant difference exists among those groups. Then, if you get a significant result, you make pairwise comparisons among all of your groups and get individual P values for each comparison. So they reported the P values and controlled the error rates for those individual comparisons within each ANOVA, but I am talking about the P values across ANOVAs, because you should never even do the individual comparisons unless the ANOVA itself is significant, and if you don’t control the error rate across ANOVAs (as they didn’t), a lot of your ANOVAs will be false positives. In other words, they did 29 ANOVAs/Kruskal-Wallis tests, each of which compared four groups, and they controlled the error rates for the post-hoc comparisons of the four groups, but not for the ANOVAs themselves.

Conclusion

In summary, this paper is riddled with problems and is little more than a steaming pile of crap. It had major conflicts of interest, was written by an author with a reputation for publishing shoddy, agenda-driven studies, was published in a fringe journal, made inadequate reference to the relevant literature, used a sub-par experimental design that failed to incorporate blinding, used a wildly unrealistic dose, and (most importantly) made an astounding 29 comparisons without bothering to control the error rate. This paper is a statistical fishing trip. The authors simply made so many comparisons that they eventually got a few that were significant just by chance. This is a common tactic among pseudoscientists (and sometimes legitimate researchers as well), and you should learn to spot it.

Posted in Nature of Science | Tagged , , , , , | 13 Comments

Don’t mistake an assumption for a fact

I want you to imagine for a minute that you have been selected for jury duty, and you are sitting in the courtroom listening to the evidence. As you watch, the prosecution calls an expert forensic scientist to the stand, and they carefully explain the facts concerning the forensic evidence, all of which point towards the defendant’s guilt. Then, the defendant’s lawyer stands up and shouts, “Objection, this witness has been paid off to lie about my client!” After a moment of shocked silence, the judge says, “That is quite a claim. What evidence do you have to support it?” The lawyer then responds simply by saying, “only someone who had been bought off would say things like that against my client.” Now, what do you think is going to happen next? Is that a reasonable defense that the judge will accept? Obviously not! The defense is making an extraordinary assumption, and it is clearly invalid to do so.

In an example like that, the problem is obvious. You can’t just make things up to dismiss facts that are inconvenient for you. Indeed, when a person’s fate hangs in the balance, we all want the arguments and evidence to be based on facts, not assumptions. Nevertheless, when it comes to science and many aspects of our daily lives (such as politics), people are often more than happy to accept assumptions, and people frequently state them as if they were facts. Therefore, I am going to provide several examples of this flawed line of reasoning, and explain why it not only doesn’t work, but often commits a logical fallacy.

The first example is probably the most common one that I encounter, and it is highly analogous to my courtroom example. I recently engaged in a Facebook debate with someone who made the bold claim that there is no empirical evidence for climate change. I responded to that assertion by providing him with multiple papers that did in fact provide empirical evidence for climate change. How do you think he responded? You probably guessed it. He simply claimed that all of the scientists involved in those studies had been paid off (just like my fictional lawyer did). That argument was clearly fallacious in my courtroom example, and it is equally flawed here. You can’t just assume that scientists have been paid off any time that they publish a result that you don’t like. You have to provide actual evidence of corruption; otherwise you are making a baseless assumption. In fact, journals actually require scientists to declare any conflicts of interest, and failure to do so is a serious offense (it can result in loss of funding, papers being retracted, etc.). So there is no need to make assumptions about conflicts of interest, because you can easily check and see if they exist. Nevertheless, this is one of the most common arguments among those who reject the results of modern science, and I constantly encounter it among anti-vaccers, those who oppose GMOs, climate change deniers, people who use alternative medicine and/or reject modern medicine, etc.

A very closely related line of attack is simply to accuse the person that you are debating of being a shill, astroturfer, etc.  Indeed, I constantly have to deal with people accusing me of having been bought off by big pharma, Monsanto, the government, lizard people, etc. In every case, however, it is a complete assumption (and a faulty one, as I actually pay money out of my own pocket to maintain this blog). You can’t just go around assuming that everyone who disagrees with you has been bought off. That’s not rational.

I could continue giving copious examples of this type of reasoning (e.g., assuming that “big pharma” has bought off the FDA and CDC), but I think that you get the basic picture, so I want to move on to explaining a bit more about why this type of argument is invalid. There are actually two logical fallacies being committed here. First, all of the specific examples that I have given so far commit a genetic fallacy (sometimes specifically an ad hominem fallacy). These fallacies occur when you make an irrelevant or unjustified attack against the source of the information, rather than addressing the information itself (even a flawed source can sometimes be correct). I have previously talked about these fallacies at length (see previous hyperlinks), so I want to focus instead on the second fallacy. This one is more general and can occur even when a source is not being attacked. It is what is known as an ad hoc fallacy. It generally occurs as a response to an argument, rather than as an argument itself, and it has the defining characteristic of proposing a solution that you would never accept unless you were already convinced by the claim being defended. In other words, an ad hoc fallacy is a response that is not based on evidence and whose sole purpose is to “solve” a problem in an argument that you are fond of.

That may have seemed a bit complicated, so let me explain by applying it to one of the examples that I used previously. In the example of the climate change denier, his claim was that there is no empirical evidence of climate change (that is the position that he was defending). I presented him with actual evidence, thus clearly contradicting his claim. He then responded to that evidence by asserting that all of those scientists were really paid shills. However, there is no evidence to support that assertion, and there is no reason to think that the assertion is true other than a desire to believe the initial claim. So he was simply assuming that the scientists had been paid off, even though he had no evidence to support that claim. Do you see how this works? It is an assumption that is stated as if it was a fact, but there is no actual reason to think that the assumption is true. It serves no function other than patching the hole in the argument, and you wouldn’t accept it unless you were already convinced that the initial claim was true.

Nevertheless, not all assumptions are ad hoc fallacies, but that doesn’t make them any less flawed when they are used in debates. You simply cannot state assumptions as if they are facts. The hidden cancer cure conspiracies provide a great example of this, because they involve assumptions stacked on other assumptions. In order to argue that pharmaceutical companies are hiding a cure for cancer, you first have to assume that a cure actually exists, but that is a meritless assumption. Can you provide me with actual evidence that pharmaceutical companies have a cure hidden on their shelves? No, you can’t, which means that you don’t get to state the claim that they have a cure as if that claim is an evidence-based fact. Similarly, you also have to assume a rather large conspiracy in which everyone involved is willing to sit back and watch their friends and family members die. This conspiracy would have to involve not just the CEOs of companies, but also the people on the ethics committees who approved the research, the people at the FDA who approve the trials, the accountants who handled the budgets, lab techs, the scientists themselves, etc. That’s an awful lot of people to be involved, and you cannot just go around inventing conspiracies like that without providing actual hard evidence of their existence. This conspiracy theory (and the vast majority of conspiracy theories) relies heavily on unsubstantiated assumptions, which is a big part of why it is irrational.

Indeed, if it was permissible to substitute assumptions for facts, then we could all do this anytime that we wanted.  I could, for example, claim that smoking actually has health benefits, and the tobacco industry scientists were actually correct about the safety of cigarettes, but big pharma paid off a ton of scientists to falsify data showing that cigarettes were dangerous because big pharma wanted to profit off of sick people. Now, you probably think, “that’s ridiculous!” and indeed it is, but it is no more ridiculous than claiming that pharmaceutical companies paid off scientists to falsify information on vaccines, claiming that marijuana is only illegal because legalizing it would hurt pharmaceutical companies, claiming that climate scientists are falsifying data, claiming that Monsanto controls the world’s agriculture scientists, etc. All of these are baseless assumptions, and you cannot present them as if they are facts.

As I approach the end of this post, I want to remind everyone of one of the most important concepts in debates and rational thinking: the burden of proof. This states that the person making a claim is always required to provide legitimate evidence to support it. In other words, if you want to claim that scientists have been paid off, then it is your duty to provide actual evidence to support that claim, and if you cannot do that, then you are stating an assumption, not a fact, and your argument is illegitimate. Similarly, if you want to claim that companies are hiding cures, a conspiracy is afoot, etc., you must provide evidence to substantiate those claims. You simply cannot assume things that haven’t been verified, because if you could, then we could all dismiss every single argument that we don’t like simply by assuming the existence of some contrary evidence. Also, it is worth explicitly stating that the burden never shifts: the person making the claim has to show the evidence, not the other way around. In other words, if you are claiming that a conspiracy exists, you have to provide evidence that it exists, whereas I do not have to provide evidence that it doesn’t exist. That’s the way that the burden of proof works. In fact, saying “you can’t prove that it doesn’t exist, therefore it is valid to think that it does exist” is a logical fallacy known as an argument from ignorance.

Finally, although I have been speaking specifically in the context of debates, you really should apply this to your own views (regardless of which side of a topic you stand on). Think through your arguments and make sure that you can provide proper evidence from high-quality sources to back up each component of your reasoning. Seek out your assumptions, test them against the evidence, and if they cannot be verified or strongly supported, reject them. Admitting that, “I don’t know” is far, far better than insisting that you do know, when in fact you are simply making an assumption.

Note: I fully expect someone to read this and say, “but we use assumptions all of the time in our daily lives and they are useful.” That is true, but it is irrelevant for several reasons. First, the assumptions that we make in our daily lives are very often flawed and frequently get us into trouble. They are, in fact, unreliable, which is why they aren’t valid substitutes for evidence and can’t be used in scientific debates. Second, many of the things in daily life that we often call “assumptions” are actually evidence-based conclusions. For example, if my PhD adviser emails me and asks me to meet him in his office at 3:00, I am going to “assume” that he means 3:00PM, but that is not really an assumption (at least not in the way that I have been using the word in this post). Rather, it is a probabilistic conclusion based on evidence. I say that because every single time that my adviser has ever asked to meet with me, it has been a meeting during daylight hours. Thus, there is no rational reason to think that he actually means 3:00AM. Situations like that are clearly extremely different from creating conspiracy theories or assuming corruption just because you don’t like what a scientist published.

Posted in Uncategorized | Tagged , , , , , , , , , | 7 Comments

Debunking the creationist myth that mutations don’t produce new and useful information

Genetic variation is a fundamental requirement for evolution, but many of the evolutionary mechanisms (such as selection and genetic drift) actually remove variation from populations. Therefore, evolution is entirely reliant on the formation of new genetic information, and without it, evolution would grind to a halt. Creationists often seize on this fact and erroneously claim that evolution is impossible because we have never found a mechanism that is capable of creating new genetic information. This claim is, however, completely false, because mutations do, in fact, create new genetic information. Nevertheless, many creationists respond to that fact by insisting that mutations simply “rearrange” existing genetic information, rather than creating “new” information. Therefore, I want to briefly explain why this argument is fundamentally flawed.

Hopefully everyone recognizes this, but if not, it’s Mr. DNA from Jurassic Park

Before I can explain the problems with this argument, we need to be clear about the basics, and you need to have at least a rudimentary understanding of how DNA works. DNA (or deoxyribonucleic acid if you prefer) consists of four bases: adenine (A), guanine (G), thymine (T), and cytosine (C). These four bases get arranged into groups of three, and each group of three codes for an amino acid. Those amino acids then get strung together to form a protein, and those proteins combine to form tissues. Thus, your DNA is your body’s blueprint: it tells your body which amino acids to use and how to combine them to make proteins, tissues, organs, etc.

Please note that although what I have presented here is the most fundamental concept for you to understand, actual DNA also includes stop codons, start codons, and many other complexities that are irrelevant for this post.

Now that you understand the basics of DNA, we can talk about mutations. I explained them in far greater detail here, but in short, they are simply random changes to the genetic code (the ones that are important for evolution usually occur during the formation of egg and sperm cells). There are many different types of mutations, such as inversions (which flip a segment of DNA), deletions (which remove base pairs), insertions (which add extra base pairs), substitutions (which swap in the wrong base [e.g., a T instead of a C]), duplications (which duplicate a segment of DNA), and several others. In every case, however, they change the genetic code, and by changing the code, they can change the amino acids and ultimately the proteins that are produced (note: there is redundancy in the way that amino acids are coded, so not all mutations result in changes downstream).
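To make these mutation types concrete, here is a minimal Python sketch (purely illustrative, not a bioinformatics library) that applies each one to a short DNA sequence represented as a plain string:

```python
# Each mutation type from the text, applied to a DNA sequence (a string).
# This is a toy illustration only; real genomes are vastly more complex.

def inversion(seq, start, end):
    """Flip (reverse) the segment seq[start:end]."""
    return seq[:start] + seq[start:end][::-1] + seq[end:]

def deletion(seq, start, end):
    """Remove the bases in seq[start:end]."""
    return seq[:start] + seq[end:]

def insertion(seq, pos, bases):
    """Add extra bases at position pos."""
    return seq[:pos] + bases + seq[pos:]

def substitution(seq, pos, base):
    """Replace the base at pos with a different base (e.g., C -> T)."""
    return seq[:pos] + base + seq[pos + 1:]

def duplication(seq, start, end):
    """Duplicate the segment seq[start:end] in place."""
    return seq[:end] + seq[start:end] + seq[end:]

original = "ATGCTT"
print(inversion(original, 3, 6))      # ATGTTC
print(deletion(original, 3, 6))       # ATG
print(insertion(original, 3, "GGG"))  # ATGGGGCTT
print(substitution(original, 4, "C")) # ATGCCT
print(duplication(original, 3, 6))    # ATGCTTCTT
```

Every one of these operations yields a code that differs from the original, which is the only thing that matters for the argument below.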

At this point, we can examine creationists’ claim that mutations simply rearrange information rather than producing new information. The most obvious problem with this is simply that some mutations (like insertions) do actually insert entirely new base pairs. In other words, they don’t rearrange the code that is already there. Rather, they add new bases (i.e., new information) to that code.

The second problem is really the more important and fundamental one. Namely, this argument seems to be using the word “new” in a rather peculiar sense, because rearranging the existing bases does in fact produce a new code that often results in the production of different amino acids and new proteins. Let me illustrate. A coding strand of DNA that contains the sequence CTT would code for the amino acid Leucine. However, if a mutation rearranged those three bases so that they were TCT, that strand of DNA would code for the amino acid Serine instead of Leucine. Thus, by simply rearranging the existing bases, we created new information which produced a different amino acid. In other words, we added genetic variation to the population, because the individuals who receive that mutation will produce Serine, while the rest of the population is producing Leucine. Objecting to that by claiming that “no new information has been created” is really quite silly because the fact remains that individuals with that mutation are producing a different amino acid than everyone else. Something that codes for the production of an entirely different amino acid is, by any reasonable definition, “new information” (i.e., it is information that was not there before).
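The CTT/TCT example can be replayed with a few entries from the standard genetic code; the tiny codon table below is deliberately incomplete (just enough for the illustration):

```python
# A few entries from the standard genetic code, keyed by DNA coding-strand
# codons. The full table has 64 entries; this fragment covers the example.
CODON_TABLE = {
    "CTT": "Leucine",
    "CTC": "Leucine",
    "TCT": "Serine",
    "TTC": "Phenylalanine",
}

def translate(codon):
    """Look up the amino acid coded for by a single codon."""
    return CODON_TABLE[codon]

print(translate("CTT"))  # Leucine
print(translate("TCT"))  # Serine: same three bases, rearranged -> new amino acid
```

Nothing was added to the alphabet of bases, yet the rearranged codon specifies a different amino acid, i.e., information that was not there before.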

A useful analogy to help you conceptualize this is to think about letters in the alphabet. The English language has 26 letters (bases), and we combine those letters to form words (amino acids). We then arrange those words into sentences (proteins), and we arrange those sentences into paragraphs (tissues). Ultimately, we can use those paragraphs to make books, essays, etc. (organisms). Now, according to creationists’ reasoning, it should be impossible to make any new information by simply rearranging those 26 letters, but that is clearly absurd. We can arrange them one way and produce the works of Shakespeare. We can arrange them another way and produce “On the Origin of Species.” We can rearrange them yet again and produce the script to a Stargate episode. Indeed, there is a virtually infinite set of possibilities, each of which contains different information, and the situation is no different for DNA. We can arrange the bases one way and get a dinosaur, and we can rearrange that code (via mutations) and get a chicken. We can rearrange it yet again and get a whale, human, tree, bacterium, mushroom, etc. This notion that making new arrangements of the four existing bases doesn’t produce new information is absurd because everyone agrees that different arrangements of those bases produce very different organisms.

A big part of the problem here once again comes back to the definition of the word “new.” Creationists seem to think that evolution requires something that is completely and totally novel, such as a new base pair or, at the very least, an entirely new amino acid that has never existed anywhere before, but that is a straw man fallacy. Evolution does not require something that has never previously existed anywhere. Rather, it simply needs to have variation. Thus, any change to the genetic code is “new information” in an evolutionary sense, because it provides variation. A useful way to think about this is that evolution doesn’t need “new” information. Rather, it needs “different” information. In other words, all that it needs is a code that is different than the one that was there before.

To further illustrate what I mean by this, it is worth mentioning that even mutations that remove bases can actually produce new information in an evolutionary sense. Going back to the alphabet example, imagine that the document in question is a recipe, and imagine that the instructions get “mutated” by the random deletion of one of the steps. That actually provides new information because the end product will be something different than the intended product (i.e., there will be variation for the trait). The same thing happens with DNA. If you remove the bases for a particular amino acid, then the final protein product will often be different from the one that was originally coded for, and that variation is all that evolution needs, regardless of whether or not you want to describe it as “new.” To give an actual example of this, HIV typically uses the CKR5 protein to enter macrophages and complete its lifecycle. However, people who have deletions (a type of mutation that removes bases) on both copies of their genes for CKR5 are resistant to HIV because those mutations alter the protein, and HIV cannot bind to this new, modified protein (Dean et al. 1996; Sullivan et al. 2001). Thus, a loss of genetic material results in a new set of instructions, which causes cells to produce a protein with a new modification, and that new modification provides a beneficial function. Creationists may try to say that this example doesn’t illustrate the formation of “new” information because an existing protein was simply modified, but that is, once again, a straw man fallacy, because evolution just needs variation, and this mutation provides that (thus, from an evolutionary perspective, it is new information).
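A toy sketch shows why even a deletion yields a genuinely different set of instructions (this uses a made-up sequence, not the actual CKR5 gene): removing a single base shifts the reading frame, so every downstream codon changes.

```python
# Illustrative frameshift: deleting one base from a made-up coding sequence
# changes every codon that follows the deletion point.

def codons(seq):
    """Split a coding sequence into complete groups of three bases."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

original = "CTTTCTCTT"                 # reads as CTT | TCT | CTT
mutant = original[:3] + original[4:]   # delete one base (the 4th, a T)

print(codons(original))  # ['CTT', 'TCT', 'CTT']
print(codons(mutant))    # ['CTT', 'CTC'] -- every downstream codon changed
```

The mutant codes for a different string of amino acids than the original, which is exactly the kind of variation evolution works with.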

Finally, you might be tempted to protest to all of this on the grounds that, “mutations are nearly always harmful,” but that claim is a myth. Most mutations are actually neutral at the time that they occur (i.e., they are neither beneficial nor harmful; Nachman and Crowell 2000; Eyre-Walker et al. 2007). Further, although harmful mutations certainly do exist, they are selected against, so they are really irrelevant. In other words, even if there were 10,000 times as many harmful mutations as beneficial mutations (which there aren’t), that wouldn’t preclude evolution, because nature would select against the harmful ones and for the beneficial ones. On that note, I want to be absolutely, 100% clear that beneficial mutations do exist and have been well documented (Newcomb et al. 1997; Dean et al. 1996; Sullivan et al. 2001; Shaw et al. 2002, 2003; Joseph and Hall 2004; Perfeito et al. 2007; see Halligan and Keightley 2009 for a good review). Indeed, we have done experiments with bacteria where we monitored populations for many generations and observed the formation of novel, beneficial mutations that provided the bacteria with new information that allowed them to perform a novel function that they were previously incapable of (Blount et al. 2008; more detailed explanation in the citrate section of this post).

From the Star Trek TOS episode “I, Mudd”

In short, this notion that mutations can’t produce new genetic information is laughably absurd. DNA is simply the code that tells organisms what proteins and structures to make, and modifications to that code result in novel proteins and structures. Indeed, if you accept that dogs and cats are different because of differences in their DNA, then you have already accepted that rearranging genetic codes can in fact produce new information and result in vastly different organisms. Indeed, saying that rearranging DNA can’t produce new information is no different from saying that rearranging the letters of the alphabet can’t produce new information. Obviously it can, as is evidenced by this new blog post that I am writing. Additionally, when we say that evolution needs “new information,” we don’t mean that it needs something completely novel and totally different from anything else that has ever existed. Rather, we simply mean that there needs to be variation for traits. In other words, any slight modification to an existing trait qualifies as “new information” when talking about evolution. Finally, we have experimentally documented that mutations can produce that variation, and we have demonstrated that beneficial mutations do in fact occur and result in novel proteins that perform novel functions (i.e., the mutations created new information).

Literature Cited

Posted in Science of Evolution | Tagged , , , , | 12 Comments

Don’t attack the straw men: Straw man fallacies and reductio ad absurdum fallacies

People love to argue. We all have views and opinions, and we tend to promote them prominently and viciously attack opposing ideas. There is nothing inherently wrong with that as long as your views are evidence-based and you use proper logic when attacking your opponent’s position; however, many people fail at this and succumb to logical fallacies. One of the most common blunders is something known as a straw man fallacy. This occurs anytime that you misrepresent your opponent’s argument, then attack that misrepresentation instead of the view that they actually hold. It is a fairly simple concept, but it is often misunderstood, and it is rampant in debates (this year’s presidential election has been full of a sickening number of these fallacies). Therefore, I want to talk a bit about this fallacy and when it does and does not occur, as well as explaining a particular subset of straw man fallacies known as reductio ad absurdum fallacies.

 

Straw man fallacies

Let’s begin with the basics, what are straw man fallacies? To put it simply, they are distortions of an argument that usually present a weak and easily defeated version of the actual argument. In other words, one debater will claim that their opponent believes view X (which is a distorted and weakened version of what their opponent actually believes), then they will explain why X is wrong. The problem with this should be obvious. If the opponent does not actually believe X, then showing that X is wrong does nothing to address the opponent’s actual beliefs. In other words, it doesn’t matter if X is wrong if X isn’t actually what your opponent is claiming. Nevertheless, this fallacy can be an extremely persuasive (albeit invalid) debate tactic that many people are duped by.

On that note, it is worth mentioning that although straw man fallacies can be deliberate, and many people use them with the intention of deceiving their audience, they can also occur unintentionally. This usually happens when someone is ignorant about the topic that they are debating, and I frequently encounter these arguments when talking to people who reject scientific results. For example, one of the most common creationist arguments is, “if we evolved from monkeys, then why are there still monkeys?” This is a straw man fallacy because evolution does not state that we evolved from monkeys (or even great apes). Rather, it states that we share a common ancestor with them. Thus, by making this argument, creationists are not in any way, shape, or form presenting a legitimate criticism of the theory of evolution, because they are attacking a claim that evolution does not actually make. Similarly, I often encounter religious people who say that climate change can’t be true because their religion says that the earth won’t be destroyed, and climate change says that it will be destroyed. If you actually understand climate change, however, then the problem with that line of reasoning is obvious. Namely, climate change does not claim that we are going to destroy the earth. Climate change is a serious problem, but it won’t cause our extinction.

I wanted to use those two examples not to attack creationists and climate change deniers, but rather to illustrate an important point: you need to understand a given topic before you decide whether or not to accept it. Otherwise, your arguments will often be straw man fallacies, and they will make your opponents think that you are ignorant, rather than making them actually consider your position. Further, this is important for far more than just winning debates. I personally care more about knowing what is true than I care about winning a debate, but if I have not even bothered to learn the fundamental concepts of the opposing position, then I can’t have any confidence in my conclusions. You need to actually study a topic thoroughly before you reach a conclusion, and definitely before you try to debate someone on it.

What isn’t a straw man fallacy

When it comes to internet articles, public debates, and other venues where someone is not specifically debating you, do not assume that someone is committing a straw man fallacy just because they did not address a specific argument that you personally think works. In other words, if they attacked an argument that essentially no one actually uses, then they committed a straw man fallacy. However, if they attacked an argument that many people use, then they did not commit a fallacy even if you do not personally use that argument.

Let me give you an example of what I mean. Last week, I wrote a post debunking 25 common arguments against climate change (mostly arguments that climate change isn’t happening or we aren’t causing it). All 25 of them are arguments that I personally encounter frequently when debating people. Nevertheless, some people were quick to accuse me of committing a straw man fallacy, and they did so on the grounds that they personally accept that we are causing climate change, but simply debate the amount of change that will happen (which is not a line of reasoning that I addressed). So, did I commit a straw man fallacy? No! Every argument that I addressed is an argument that many people actually use. The fact that some people have arguments that I did not address does not make the arguments that I did address fallacious. I cannot predict the argument that every single reader of my blog will use.

To be clear, if I had made grand, generalizing claims like, “everyone who debates climate change believes these arguments” or “these are the only arguments against climate change,” then I would have committed a straw man fallacy. Similarly, if I were actually directly debating one of these people, and they said, “I accept that climate change is true, but I disagree about its extent,” and I responded by providing them evidence that it was true, then I would have committed a straw man fallacy, because I would not actually be addressing the argument that they had made to me. I cannot, however, be held responsible for failing to predict every single argument that anyone anywhere would ever make.

A similar example frequently occurs with anti-vaccers. I often write and share posts about vaccine effectiveness, and almost every time that I do, I get some angry anti-vaccer yelling at me with statements like, “This is such nonsense. The issue is about whether or not vaccines are safe, not whether or not they work!” As with the climate change arguments, however, there certainly are people who accept that vaccines work but erroneously think that the costs outweigh the benefits; however, there are also many people who do, in fact, deny that vaccines even work. So unless I am specifically addressing a group of people who are arguing about safety (rather than effectiveness), there is nothing fallacious about discussing vaccine effectiveness, because many people do actually argue that vaccines aren’t effective.

 

Reductio ad absurdum fallacies

At this point, I want to shift gears slightly and talk about another type of logical fallacy that is really just a special case of the straw man fallacy: reductio ad absurdum. That may sound like a Harry Potter spell, but it is actually a logical fallacy that occurs when you take a position, stretch it to an absurd conclusion that would not actually be supported by the original statement, then claim that the original statement must be wrong because the conclusion is clearly absurd. That may have sounded complicated, so let me give you a few examples.

On several occasions I have shared posts which explain that most people don’t need to take extra vitamins and dietary supplements because they already get a sufficient amount from their diet and their body can’t really utilize excess amounts. Whenever I share these posts, however, I almost invariably get responses like, “You’re such an idiot! You claim to be a scientist and you don’t even know that vitamins are important!? You would die without them!” Let’s think about this for a second. Did I claim that vitamins aren’t important or that your body doesn’t need them? No, I didn’t even imply it. There is a huge difference between saying that you don’t need to take excess vitamins and saying that you don’t need any vitamins. In other words, the argument that I presented states that most people in industrialized countries already get the vitamins that their bodies need from their diets, and they don’t need to take extras. Internet trolls then took that argument and stretched it to the absurd conclusion that vitamins weren’t necessary at all, then accused me of being an idiot based on that clearly absurd conclusion. Do you see the problem? The conclusion that they presented was based on a distortion of my argument, rather than the argument itself.

To give one more example, on several occasions I have shared posts that explain why juice cleanses and “super foods” can’t actually boost an already healthy immune system, and the wonderful people of the internet usually respond by asserting that it is obvious that a healthy diet is important and your immune system won’t function well if you’re malnourished. As with the vitamin argument, however, I never asserted that a healthy diet isn’t necessary. I was talking about boosting an immune system above its normal functioning levels, not basic nutrition. In other words, saying that you can’t boost a healthy immune system is not the same thing as saying that you can eat nothing but junk and expect to be healthy.

 

Reductio ad absurdum logic

This illustrates the correct use of reductio ad absurdum logic. The second stick figure is sarcastically illustrating that if the argument that science has been wrong in the past actually invalidated a current scientific result, then we could use that argument anytime that we wanted, but that would obviously lead to absurd conclusions. Note: sarcasm is not a requirement for reductio ad absurdum logic, but it is often included.

Finally, it is important to note that reductio ad absurdum logic can actually be applied without committing a fallacy if you can show that the actual argument that your opponent is using would lead to an absurd conclusion if it was applied consistently. As long as you do not distort the original argument, then this technique is not only valid, but it is extremely powerful (it is one of my favourite tools).

Let me give you an example. I often encounter people who say things like, “all that I need to know about climate change is that Al Gore thinks it is happening. If he thinks that it is true, then it must be wrong!” This argument is technically a guilt by association fallacy, but we can easily demonstrate the flaw in it by using reductio ad absurdum logic. In this case, I usually counter this claim by pointing out that Al Gore also thinks that we are breathing oxygen, so if we use this argument consistently, then we must conclude that we are not in fact breathing oxygen. Do you see why that response works? I did not distort the argument, rather I showed that it actually would lead to an absurd conclusion if it was a good argument. I can prove this by setting up two identical syllogisms.

Original argument:

  1. If Al Gore thinks that something is true, then it must be wrong
  2. Al Gore thinks that climate change is true
  3. Therefore, climate change is wrong

Analogous argument using reductio ad absurdum logic:

  1. If Al Gore thinks that something is true, then it must be wrong
  2. Al Gore thinks that we are breathing oxygen
  3. Therefore, we aren’t breathing oxygen
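For readers who like to see the shared structure explicitly, the two syllogisms differ only in their minor premise; encoding the (faulty) major premise as a function (a purely illustrative sketch) shows that applying it consistently forces both conclusions:

```python
# Purely illustrative: both syllogisms above share one major premise.
# Treating that premise as a rule shows that consistent application
# brands every claim Al Gore accepts as "wrong" -- including oxygen.

def gore_rule(al_gore_thinks_it_is_true):
    """Premise 1: if Al Gore thinks something is true, it 'must be wrong'."""
    return "wrong" if al_gore_thinks_it_is_true else "undetermined by this rule"

for claim in ["climate change is happening", "we are breathing oxygen"]:
    # Al Gore accepts both claims, so the rule condemns both equally.
    print(claim, "->", gore_rule(True))
```

The absurd second output is the reductio: a rule that "disproves" oxygen cannot be trusted to disprove climate change.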

See how this works? I simply took the original argument, applied it to a different topic, and showed that if we apply that argument consistently, we arrive at an absurd conclusion. I provided many more examples of this debate tactic in this post on consistent reasoning, so please see it if you are confused. You should also watch John Oliver, because he wields this logical tool brilliantly (sometimes he does also slip into reductio ad absurdum fallacies, but that is generally to set up a joke rather than make a serious argument).

 

Conclusion

In short, straw man fallacies are simply distortions and misrepresentations of your opponent’s argument. They can be intentional or unintentional, but they are easy to avoid by simply being well-informed on the topic that you are debating. Nevertheless, many people continue to use them and incorrectly accuse other people of using them. Additionally, these fallacies include a special subset known as reductio ad absurdum fallacies. These occur when an argument is stretched to an absurd conclusion that is not supported by the original argument. Although that strategy is fallacious when the argument is distorted in the process, it can also be a very powerful debate tool if you can demonstrate that the original argument itself actually leads to an absurd conclusion when it is applied consistently.

Note: I want to be clear that on topics like vaccines, climate change, evolution, etc., there really aren’t “two sides.” So when I say that you need to thoroughly study the topic before reaching a conclusion, I am not suggesting that you need to read a bunch of conspiracy blogs, creationist websites, etc. Rather, you need to study the peer-reviewed literature (including the handful of studies that disagree with the consensus). You don’t need to read unreliable sources in order to be well-informed. However, if you want to actually debate people about these topics, then you really should spend time studying those unreliable sources, because if you don’t, you will often end up committing straw man fallacies. Indeed, I have seen my fellow skeptics do that on several occasions (and I’ve probably unknowingly done it myself at some point).

Related posts

Posted in Rules of Logic | Tagged , , , | 11 Comments

Debunking 25 arguments against climate change in 5 sentences or less (each)

Climate change is arguably one of the most misunderstood and controversial topics among the general public. Misinformation abounds, and many people are left debating whether or not we are causing it, and even whether or not it is happening at all. Among scientists, however, there is no serious debate, and there hasn’t been for many years. The evidence for climate change is extremely solid, despite what many blogs and politicians will tell you. Therefore, I want to try to correct some of that misinformation. Yesterday, I posted an extremely lengthy article debunking 25 myths and bad arguments about climate change. Today, I am posting the same information, but in a much more condensed form. I have attempted to address each argument in under 5 sentences. Obviously I had to leave out a lot of information, so if you want the more detailed explanations, please see the original post (each short response is accompanied by a link for the full-length explanation).

 

Bad Argument/Myth #1: It snowed, so global warming must not be true

Reality: Climate and weather are not the same thing. Climate change predicts that, on average, the earth’s temperature will increase, but that does not mean that it will always be hot everywhere or that it will never snow. This is a straw man fallacy/reductio ad absurdum fallacy.

click here for the full version of #1

Bad Argument/Myth #2: The ice in Antarctica is actually increasing

Reality: This argument is a Texas sharpshooter fallacy because it focuses on one ice shelf and ignores the fact that Arctic sea ice has declined substantially (Stroeve et al. 2015), glaciers are rapidly retreating all over the world (WGMS 2013), sea levels are rising (Yi et al. 2015; NOAA), temperatures are increasing, etc. Also, it is important to note that climate change does not predict that every part of the earth will be warmer all of the time. The average temperature is increasing, but that does not mean that every single spot will be warmer.

click here for the full version of #2

This shows the temperature data once the effects of El Niños, solar fluctuations, and volcanoes have been removed. Image via Open Mind.

Bad Argument/Myth #3: Global warming has paused

Reality: No, it hasn’t. To make this claim you have to cherry-pick your data set, cherry-pick your years, and ignore the confounding factors, none of which is logically or scientifically valid. Indeed, the actual scientific analyses of the data show that warming has not paused at all (Easterling and Wehner 2009; Santer et al. 2011; Karl et al. 2015; Lewandowsky et al. 2015a,b), especially when you look at the oceans (Balmaseda et al. 2013; Rhein et al. 2013; Glecker et al. 2016) and account for confounding variables (Foster and Rahmstorf 2011).

click here for the full version of #3

Bad Argument/Myth #4: Global warming wasn’t happening, so they changed the name to climate change

Reality: Scientists have been talking about climate change since day one. They changed the name because the term “global warming” is misleading and was leading to faulty arguments (like #1 and 2). The planet is warming on average, but climate change is also about shifts in rainfall patterns, sea level changes, ocean current changes, etc., not just warming.

click here for the full version of #4

Bad Argument/Myth #5: The models have all been wrong

Reality: No they haven’t. They have actually been quite accurate (Hansen et al. 2006; Frame and Stone 2012; Rahmstorf et al. 2012; Cowtan et al. 2015; Marotzke and Forster 2015). Claims to the contrary are based on a misleading and deceptive distortion of statistics.

click here for the full version of #5

Bad Argument/Myth #6: Polar bear numbers are actually increasing!

Reality: No they aren’t. They are decreasing (Schliebe et al. 2006; Stirling and Derocher 2012). Also, it’s not just that the populations are declining. Rather, cub mass is going down, juvenile recruitment is going down, body condition is going down, etc. (Rode et al. 2010, 2012). All of these declines are occurring because the habitat that the bears need is vanishing, which means that they can’t find enough food, can’t raise proper-sized young, and will ultimately disappear from much of their current range if climate change isn’t halted (Hunter et al. 2010; Molnar et al. 2011).

click here for the full version of #6

Bad Argument/Myth #7: The climate has changed in the past, so the current warming is natural. It’s the sun, volcanoes, Milankovitch cycles, etc.

Reality: The fact that climate changed naturally in the past only tells us that it is possible for the climate to change naturally. It does not indicate or even suggest that the current warming is natural. Scientists have carefully examined the sun, volcanoes, Milankovitch cycles, etc., and none of them can explain the current warming trend (Meehl, et al. 2004; Wild et al. 2007; Lockwood and Frohlich 2007, 2008; Lean and Rind 2008; Foster and Rahmstorf 2011; Imbers et al. 2014). When you add anthropogenic greenhouse gasses into the statistical models, however, you get a tight match between the observed and expected values (Stott et al. 2001; Meehl et al. 2004; Allen et al. 2006; Lean and Rind 2008; Imbers et al. 2014; more details here).

This figure from Hansen et al. 2005 shows the effect of both the natural and anthropogenic drivers of climate change. Notice how only anthropogenic sources show a large warming trend. Also, see figure 2 of Meehl et al. 2004.

click here for the full version of #7

Bad Argument/Myth #8: During past climate changes, CO2 increases followed the temperature increases

Reality: This is only true at first. What happened in the past was that a small amount of warming (usually regional) from factors other than CO2 (such as Milankovitch cycles) caused the oceans to warm up and release the CO2 stored in them (Martin et al. 2005; Toggweiler et al. 2006; Schmittner and Galbraith 2008; Skinner et al. 2010). Then, that increase in CO2 caused the majority of the warming (Shakun et al. 2012). So CO2 was actually the major driver of past climate changes (Lorius et al. 1990; Tripati et al. 2009; Shakun et al. 2012).

click here for the full version of #8

Bad Argument/Myth #9: CO2 only makes up a small portion of the atmosphere

Reality: The fact that something is not abundant does not mean that it is not important. Indeed, that tiny percentage of CO2 is the difference between our nice warm world and an inhospitably cold world, and we know that past climate changes have been largely driven by CO2 levels (Lorius et al. 1990; Tripati et al. 2009; Shakun et al. 2012). Additionally, satellites have provided direct empirical evidence that the earth is currently trapping more heat than it used to, specifically at the frequencies that are absorbed by CO2 (Harries et al. 2001; Griggs and Harries 2007).

click here for the full version of #9

Bad Argument/Myth #10: We only emit a tiny portion of the earth’s CO2

Reality: Before us, the system was in balance, with roughly the same amount being removed and produced (plants, the ocean, etc. all remove some CO2). Thanks to humans, however, that balance has shifted and now more CO2 is being produced than is being removed. As a result, CO2 levels have increased rapidly since the start of the industrial revolution, and they are currently at their highest point in the past 14–16 million years (Tripati et al. 2009). Also, we have verified that the increased CO2 is from us via changes in carbon isotope ratios (Bohm et al. 2002; Ghosh and Brand 2003; Wei et al. 2009; details here).
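The arithmetic behind this point can be sketched with a deliberately simple toy model (all of the flux numbers below are hypothetical, chosen purely for illustration; they are not real carbon-budget values):

```python
# Toy mass-balance sketch: all numbers are hypothetical, for illustration only.
# Before industrialization, sources and sinks were roughly balanced, so the
# CO2 stock held steady. Add even a small unmatched human flux, and the
# stock climbs year after year.

natural_source = 100.0   # hypothetical units/year emitted by natural processes
natural_sink = 100.0     # hypothetical units/year removed by plants, oceans, etc.
human_source = 4.0       # a small extra, unmatched flux (hypothetical)

stock = 1000.0           # hypothetical starting CO2 stock
for _ in range(50):      # simulate 50 years
    stock += natural_source + human_source - natural_sink

print(stock)  # 1200.0: the small imbalance accumulated 4 * 50 = 200 units
```

The point is not the numbers but the structure of the argument: “we only emit a tiny portion” compares our flux to the gross natural flux, when what actually matters is the net imbalance.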

click here for the full version of #10

Bad Argument/Myth #11: Water vapor is a far more potent greenhouse gas than CO2

Reality: Water vapor increases in response to an increase in temperature. So, water vapor is a feedback mechanism, wherein CO2 from us causes some warming, that warming increases the amount of water vapor in the atmosphere, and that water vapor causes even more warming (Held and Soden 2000; Philipona et al. 2005). So ultimately, the warming is still from us producing CO2.

click here for the full version of #11

Bad Argument/Myth #12: In the ’70s, scientists predicted an ice age

Reality: No they didn’t. There were a total of seven papers on global cooling, and during that exact same time, 42 papers were published on global warming (Peterson et al. 2008). The media may have said that we were entering an ice age, but that was never what the majority of scientists were saying.

click here for the full version of #12

Bad Argument/Myth #13: It’s just a theory, not a fact

Reality: In science, the difference between a theory and a fact has nothing to do with our certainty. Rather, a fact is a single observation, result, etc., whereas a theory is a broad and rigorously-tested explanatory framework that both explains the facts and allows us to make predictions about what future experiments should show. For example, if I drop a pen, then I have just demonstrated the fact of gravity (i.e., it is a fact that gravity caused my pen to drop), but the theory of universal gravity explains that fact by stating that all bodies that have mass produce gravity and are acted upon by the gravity of other bodies.

click here for the full version of #13

Bad Argument/Myth #14: But scientists have been wrong in the past, and we can’t be totally certain that climate change is true

Reality: The fact that scientists have been wrong before and might be wrong now does not mean or even suggest that they actually are wrong now. You have to present actual evidence that they are wrong now, otherwise this is an argument from ignorance fallacy. Indeed, if this argument worked, you could use it anytime that you wanted. For example, you could say, “scientists say that gravity is true, but scientists have been wrong before and we can’t be totally sure that gravity is true, so I don’t have to accept them now.”

click here for the full version of #14

Image via James L. Powell. More details on the scientific consensus here.

Bad Argument/Myth #15: There are thousands of scientists who disagree (e.g., the Oregon Petition)

Reality: The overwhelming majority of climatologists (somewhere in the high 90’s) agree that we are causing climate change. The Oregon Petition is a fraud. Most of its signatories weren’t real scientists, and only 39 of them were climatologists. Also, this argument is a blatant appeal to authority fallacy.

click here for the full version of #15

Bad Argument/Myth #16: “Climategate” showed that scientists are falsifying data

Reality: It did no such thing. The stolen emails were full of conversations about real data, but a handful were taken out of context and twisted to make them appear corrupt. In context, however, nothing unethical was occurring. Indeed, the situation has been examined by multiple different independent investigations (including the National Science Foundation, US Environmental Protection Agency, UK House of Commons Science and Technology Committee, Pennsylvania State University, and University of East Anglia), and all of them concluded that there was no evidence that the scientists were manipulating data, involved in a conspiracy, etc.

click here for the full version of #16

Bad Argument/Myth #17: Scientists are manipulating the data to make it look like warming!

Reality: No they aren’t. Almost all real data sets have to be adjusted for biases in the collection methods, and climate change data are no different. The data have been collected over many years using different methods, and different methods have different biases. Therefore, the only way to use all of those data sets is to adjust for those biases in methodologies (details here and here). Scientists have been open about these corrections and have publicly documented them from day one (details here and here).

click here for the full version of #17

Details and sources here.

Bad Argument/Myth #18: It’s a liberal conspiracy/It’s all about the money!

Reality: These are ad hoc fallacies (i.e., unless you provide actual evidence that they are corrupt, these claims are baseless assumptions). Additionally, if we are going to go down this road, then let’s flip things upside down and ask the opposite question: who would benefit from opposing climate change research? The answer to that question is pretty obvious: oil companies. If scientists could really be bought off so easily, then why haven’t multi-billion-dollar oil companies been able to buy off more than a handful of them? Given the vast wealth of oil companies, the millions of dollars that they have poured into denialist organizations, and the economically unstable state of most governments, surely oil companies could offer scientists more than governments could.

click here for the full version of #18

Bad Argument/Myth #19: But politicians and the media…

Reality: They are irrelevant. I don’t give a crap what politicians and the media think or say. Both of them are repeatedly wrong about the science (on both “sides” of the topic). So I don’t care what Al Gore said or thinks, I don’t care what erroneous claims CNN has made, etc. I care about the science, and the scientific evidence overwhelmingly shows that climate change is happening and it is our fault. Using politicians and the media to attack science is a guilt by association fallacy, because what they think, say, and do is completely, 100% irrelevant to whether or not the science is correct.

click here for the full version of #19

Bad Argument/Myth #20: Climate change is being caused by the ozone hole (or vice versa)

Reality: Climate change and the ozone hole are separate phenomena that sometimes interact. Climate change is caused by greenhouse gasses trapping heat before it leaves the earth, whereas the ozone hole is caused by chlorofluorocarbons and other gases that deplete the ozone layer, thus allowing high levels of UV radiation to reach the earth’s surface. They are caused by separate sets of gases, and they act very differently.

click here for the full version of #20

Bad Argument/Myth #21: But CO2 is actually good for plants

Reality: It is true that increased CO2 levels will generally result in more plant growth, but that relationship is complicated (Robinson et al. 2012). Indeed, there are lots of other factors to consider, such as changing precipitation patterns, which are often very harmful to plants (Allen et al. 2010; Carnicer et al. 2011). So, many plants will actually be negatively impacted. Finally, this argument is really quite irrelevant, because even if plants would universally benefit from increased CO2, that wouldn’t mitigate the sea level rise, increased heat waves, etc. (see #22). In other words, the CO2 levels are increasing, so the plants clearly can’t keep up (i.e., plant growth isn’t increasing fast enough to balance out the CO2 that we are producing; see #10).

That was the full version for this one.

Bad Argument/Myth #22: It’s not really a big problem because the planet will only warm by a few degrees

Reality: Even a slight increase in temperature will have huge consequences. Indeed, we are already seeing the changes. Glaciers and ice caps are melting (WGMS 2013; Stroeve et al. 2015), the sea is rising (Yi et al. 2015), animals and plants are shifting their ranges and behaviours (Root et al. 2003; Tingley et al. 2012), forests are being affected (Allen et al. 2010; Carnicer et al. 2011), heat waves and heat-associated mortalities are increasing (Patz et al. 2005; Luber and McGeehin 2008; Kuglitsch et al. 2010), extreme weather events are increasing (Coumou and Rahmstorf 2012), coral reefs are bleaching (Hoegh-Guldberg and Bruno 2010), droughts are increasing (Dai 2013), etc., and all of these things will only get worse with time. In many parts of the world, it will be harder to grow crops (Schlenker and Roberts 2009), coastal properties will be lost, diseases will expand their ranges, etc.

click here for the full version of #22

Bad Argument/Myth #23: It will make humans go extinct/it will be the end of the world

Reality: Following #22, it is important to clarify that although climate change is a very serious problem that will make life on earth more difficult, it is not going to end life as we know it. The media loves to sensationalize things, but very, very few professional climatologists think that it will end the world.

click here for the full version of #23

Bad Argument/Myth #24: God is in control

Reality: If you actually believe this and think that climate change isn’t a problem because it won’t happen unless God allows it, then you should never take action on anything. If, for example, you see that a child is about to get hit by a bus, there is no point in trying to save him because the bus won’t hit him unless God allows it. Do you see the problem? If this argument worked, then it would absolve you of all responsibility for anything. To put this another way, even if God exists and nothing happens without him allowing it, what makes you think that he wouldn’t allow us to cause something really harmful, like climate change?

click here for the full version of #24

Bad Argument/Myth #25: Man is not powerful enough to cause climate change

Reality: This is an ad hoc fallacy. It is a baseless assumption that I would never accept unless I was already convinced that climate change wasn’t true. Even if God is real and the Bible is true, there is no Biblical support for this argument, nor is there any reason to think that God drew an arbitrary line at climate change and said, “this far, no further!” If God exists, he has obviously allowed us to do a great many terrible things.

click here for the full version of #25

Related Posts

Literature Cited

  • Allen et al. 2006. Quantifying anthropogenic influence on recent near-surface temperature change. Surveys in Geophysics 27:491–544.
  • Allen et al. 2010. A global overview of drought and heat-induced tree mortality reveals emerging climate change risks for forests. Forest Ecology and Management 259:660–684.
  • Balmaseda et al. 2013. Distinctive climate signals in reanalysis of global ocean heat content. Geophysical Research Letters 40:1754–1759.
  • Bohm et al. 2002. Evidence for preindustrial variations in the marine surface water carbonate system from coralline sponges. Geochemistry, Geophysics, Geosystems 3:1–13.
  • Carnicer et al. 2011. Widespread crown condition decline, food web disruption, and amplified tree mortality with increased climate change-type drought. Proceedings of the National Academy of Sciences 108:1474–1478.
  • Coumou and Rahmstorf 2012. A decade of weather extremes. Nature Climate Change 2:491–496.
  • Cowtan et al. 2015. Robust comparison of climate models with observations using blended land air and ocean sea surface temperatures. Geophysical Research Letters 42:6526–6534.
  • Dai. 2013. Increasing drought under global warming in observations and models. Nature Climate Change 3:52–58.
  • Easterling and Wehner 2009. Is the climate warming or cooling? Geophysical Research Letters 36.
  • Foster and Rahmstorf 2011. Global temperature evolution 1979–2010. Environmental Research Letters 7:011002.
  • Frame and Stone 2012. Assessment of the first consensus prediction on climate change. Nature Climate Change 3:357–359.
  • Gillett and Thompson 2002. Simulation of recent Southern Hemisphere climate change. Science 302:273–275.
  • Ghosh and Brand. 2003. Stable isotope ratio mass spectrometry in global climate change research. International Journal of Mass Spectrometry 228:1–33.
  • Gleckler et al. 2016. Industrial-era global ocean heat uptake doubles in recent decades. Nature Climate Change.
  • Griggs and Harries. 2007. Comparison of spectrally resolved outgoing longwave radiation over the tropical Pacific between 1970 and 2003 Using IRIS, IMG, and AIRS. Journal of Climate 20:3982-4001.
  • Hansen et al. 2006. Global temperature change. Proceedings of the National Academy of Sciences 103:14288–14293.
  • Harries et al. 2001. Increases in greenhouse forcing inferred from the outgoing longwave radiation spectra of the Earth in 1970 and 1997. Nature 410:355–357.
  • Held and Soden 2000. Water vapor feedback and global warming. Annual Review of Energy and the Environment 25:441–475.
  • Hunter et al. 2010. Climate change threatens polar bear populations: a stochastic demographic analysis. Ecology 91:2883–2897.
  • Hoegh-Guldberg and Bruno 2010. The Impact of climate change on the world’s marine ecosystems. Science 328:1523–1528.
  • Imbers et al. 2014. Sensitivity of climate change detection and attribution to the characterization of internal climate variability. Journal of Climate 27:3477–3491.
  • Karl et al. 2015. Possible artifacts of data biases in the recent global surface warming hiatus. Science 348:1469–1472.
  • Kuglitsch et al. 2010. Heat wave changes in the eastern Mediterranean since 1960. Geophysical Research Letters 37:L04802.
  • Lean and Rind. 2008. How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006. Geophysical Research Letters 35:L18701.
  • Lefevre et al. 2013. Antarctic ozone loss in 1979–2010: first sign of ozone recovery. Atmospheric Chemistry and Physics 13:1625–1635.
  • Lewandowsky 2011. Popular consensus climate change is set to continue. Psychological Science 22:460–463.
  • Lewandowsky et al. 2015a. On the definition and identifiability of the alleged hiatus in global warming. Scientific Reports 5: 16784.
  • Lewandowsky et al. 2015b. The “pause” in global warming: Turning a routine fluctuation into a problem for science. Bulletin of the American Meteorological Society 96:723–733.
  • Lockwood and Frohlich. 2007. Recently oppositely directed trends in solar climate forcings and the global mean surface air temperature. Proceedings of the Royal Society A 463:2447–2460.
  • Lockwood and Frohlich. 2008. Recently oppositely directed trends in solar climate forcings and the global mean surface air temperature. II. Different reconstructions of the total solar irradiance variation and dependence on response time scale. Proceedings of the Royal Society A 464:1367–1385.
  • Lorius et al. 1990. The ice-core record: climate sensitivity and future greenhouse warming. Nature 139–145.
  • Luber and McGeehin 2008. Climate change and extreme heat events. American Journal of Preventative Medicine 35:429–435.
  • Marotzke and Forster 2015. Forcing, feedback and internal variability in global temperature trends. Nature 517:565–570.
  • Martin et al. 2005. Role of deep sea temperature in the carbon cycle during the last glacial. Paleoceanography 20:PA2015.
  • Meehl et al. 2004. Combinations of natural and anthropogenic forcings in the twentieth-century climate. Journal of Climate 17:3721–3727.
  • Molnar et al. 2011. Predicting survival, reproduction and abundance of polar bears under climate change. Biological Conservation 143:1612–1622.
  • Patz et al. 2005. Impact of regional climate change on human health. Nature 438:310–317.
  • Peterson et al. 2008. The myth of the 1970s global cooling scientific consensus. Bulletin of the American Meteorological Society 89:1325–1337.
  • Philipona et al. 2005. Anthropogenic greenhouse forcing and strong water vapor feedback increase temperature in Europe. Geophysical Research Letters 32:L19809.
  • Rahmstorf et al. 2012. Comparing climate projections to observations up to 2011. Environmental Research Letters 7:044035.
  • Rhein et al. 2013. Observations: Ocean. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Stocker (eds.). Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA
  • Robinson et al. 2012. A meta-analytical review of the effects of elevated CO2 on plant-arthropod interactions highlights the importance of interacting environmental and biological variables. New Phytologist 194:321–336.
  • Rode et al. 2010. Reduced body size and cub recruitment in polar bears associated with sea ice decline. Ecological Applications 20:768–782.
  • Rode et al. 2012. A tale of two polar bear populations: ice habitat, harvest, and body condition. Population Ecology 54:3–18.
  • Root et al. 2003. Fingerprints of global warming on wild animals and plants. Nature 421:57–60.
  • Schlenker and Roberts 2009. Nonlinear temperature effects indicate severe damages to U.S. crop yields under climate change. Proceedings of the National Academy of Sciences 106:15594–15598.
  • Schliebe et al. 2006. Range-wide status review of the polar bear (Ursus maritimus). U.S. Fish and Wildlife Service.
  • Shakun et al. 2012. Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation. Nature 484:49–54.
  • Schmittner and Galbraith 2008. Glacial greenhouse-gas fluctuations controlled by ocean circulation changes. Nature 456:373–376.
  • Skinner et al. 2010. Ventilation of the deep Southern Ocean and deglacial CO2 rise. Science 328:1147-1151.
  • Stirling and Derocher 2012. Effects of climate warming on polar bears: a review of the evidence. Global Change Biology 18:2694–2706.
  • Stott et al. 2001. Attribution of twentieth century temperature change to natural and anthropogenic causes. Climate Dynamics 17:1–21.
  • Stroeve et al. 2012. The Arctic’s rapidly shrinking sea ice cover: a research synthesis. Climatic Change 110:1005–1027.
  • Swart et al. 2015. Influence of internal variability on Arctic sea-ice trends. Nature Climate Change 5:86–89.
  • Tingley et al. 2012. The push and pull of climate change causes heterogeneous shifts in avian elevational ranges. Global Change Biology 18:3279–3290.
  • Toggweiler et al. 2006. Mid-latitude westerlies, atmospheric CO2, and climate change during the ice ages. Paleoceanography 21:PA2005.
  • Trenberth 2009. An imperative for climate change planning: tracking Earth’s global energy. Current Opinion in Environmental Sustainability 1:19–27.
  • Tripati et al. 2009. Coupling CO2 and ice sheet stability over major climate transitions of the last 20 million years. Science 326:1394–1397.
  • Wei et al. 2009. Evidence for ocean acidification in the Great Barrier Reef of Australia. Geochimica et Cosmochimica Acta 73:2332–2346.
  • WGMS 2013. Glacier Mass Balance Bulletin. World Glacier Monitoring Service 12.
  • Wild et al. 2007. Impact of global dimming and brightening on global warming. Geophysical Research Letters
  • Williams et al. 2012. Modifications to pairwise homogeneity adjustment software to address coding errors and improve run-time efficiency. NOAA (GHCNM-12-02).
  • Yi et al. 2015. An increase in the rate of global mean sea level rise since 2010. Geophysical Research Letters 42:3998–4006.
  • Zhang 2007. Increasing Antarctic sea ice under warming atmospheric and oceanic conditions. Journal of Climate 20:2515–2529.

