Evolutionary mechanisms part 7: Gene flow

So far in this series, we have talked about natural selection, sexual selection, genetic drift, and mutations, but that still leaves us with one final and very important evolutionary mechanism: gene flow. This simply refers to the movement of genetic information among populations (migration, for example, causes gene flow). On the surface, that may sound quite simple and boring, but it is actually fascinating, complex, and immensely important. For example, gene flow often plays a critical role in determining whether or not an organism will speciate (i.e., split into two species). Similarly, gene flow interacts with genetic drift and selection, and low levels of gene flow may predispose a population to extinction, whereas high levels may be equally harmful by preventing a population from adapting. Therefore, in the final post of this series, I want to talk about gene flow and explain how it interacts with other evolutionary mechanisms and determines the fate of many populations.

Maintaining connectivity

When we talk about gene flow, we are generally talking about the movement of genetic information from one population to another. This can occur either by individuals moving among populations (and mating) or by mechanisms that just transfer genetic material (e.g., wind and animals moving the seeds and pollen of plants).

One of the most powerful traits of gene flow is its ability to prevent populations from splitting apart. If you recall from previous posts, selection and genetic drift are constantly shifting the allele frequencies of populations, but which allele becomes common can vary from one population to the next. Genetic drift is, of course, random. Thus, for a given genetic locus, one allele might become common in one population, while becoming quite rare in another. Similarly, selection (which is non-random) simply adapts populations for their current environment. As such, we expect that two populations that are living in different habitats would be under different selection pressures and thus evolve in different directions. Finally, mutations randomly create new genetic information, so if a mutation occurs in population A but not in population B, that will result in yet another genetic difference between the two populations.

The result of all of this is that when two populations are isolated, they each slowly accumulate genetic differences. Different mutations will arise in each one, different alleles will be selected for, and different alleles will be affected by genetic drift. Over many generations, these populations will eventually accumulate so many differences that we would consider them to be different species. This is called allopatric speciation (i.e., the formation of two new species from isolated populations).

Gene flow can, however, counteract that process by moving genetic information from one population into another. Imagine, for example, that a new mutation arises in population A but not population B. However, in each generation, a large number of individuals migrate between the two populations. As a result, some of the individuals who have the mutation carry it with them into population B where they mate with individuals of that population and pass the mutation on. Similar patterns also occur with selection and drift (more on that in a minute). Gene flow constantly shuffles the genetic information around and causes alleles to be exchanged among populations. Thus, it mixes the populations together and can prevent them from splitting apart.

Note: Although reduced gene flow can be a very important driver of speciation, complete isolation is not required. There is also sympatric speciation, which occurs when a connected population evolves in two separate directions and becomes two separate species (differences in mating patterns are a good way for this to happen).

Interactions with genetic drift

Gene flow is often extremely important for the long-term survival of a population, and maintaining gene flow is often a conservation priority. This is because gene flow is a powerful way to counteract genetic drift. If you remember from previous posts, genetic drift is constantly removing variation from a population, but that variation is important because it is what allows populations to cope with future changes to the environment, new diseases or predators, etc. Indeed, it is well established that genetically diverse populations are more resilient and persist longer than low-diversity populations (Hedrick and Kalinowski 2000, Keller and Waller 2002, Reed and Frankham 2003, Spielman et al. 2004, Whiteman et al. 2006, Hughes et al. 2008). Additionally, in very small populations, genetic drift can actually overpower selection and cause harmful alleles to rise to prominence (Lacy 1987, Whitlock 2000; see this post).

This shows gene flow's ability to counteract genetic drift. In the top panel (A) there is no gene flow, and alleles became fixed in each population by 200 generations. In contrast, in the lower panel (B) there is gene flow, and alleles never stay fixed. Thus, gene flow allowed the populations to maintain a higher diversity.

Gene flow can, however, counteract this by constantly restocking the genetic diversity of a population. Consider the examples illustrated on the left. In the top example, I simulated the frequency of a given allele in 10 populations of 25 individuals. At the start of each simulation, the allele had a frequency of 50% and there was no selection. In the first set of 10 populations, there was no gene flow, and you can see that the frequency of that allele changed randomly (sometimes increasing, sometimes decreasing), and by 200 generations, in each population it had either become fixed (i.e., everyone had it and the alternative allele was lost) or it was totally removed from the population. In other words, in every case, diversity was lost because of genetic drift. Now, look at the second set of 10 populations (bottom). That simulation was set up identically, but at the start of each generation, I had one individual immigrate from a large neighboring population that had an allele frequency of 50% for the allele in question. As you can see, neither allele ever becomes fixed for more than a few generations, because even if one is lost, gene flow reintroduces it. Thus, the populations maintain a higher diversity than they would without gene flow.
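A simulation like the one described above can be sketched in a few lines of Python. This is a minimal Wright-Fisher-style model, not my actual simulation code; the parameters (10 populations of 25 individuals, 200 generations, one immigrant per generation from a large source population at 50%) simply mirror the description in the text.

```python
import random

def simulate_drift(n_pops=10, pop_size=25, generations=200,
                   start_freq=0.5, migrants_per_gen=0,
                   source_freq=0.5, seed=42):
    """Wright-Fisher-style drift at one biallelic locus.

    Each generation, migrants from a large source population (if any)
    shift the local allele frequency, then the next generation's 2N gene
    copies are sampled at random from that frequency (genetic drift).
    Returns the final allele frequency in each population.
    """
    rng = random.Random(seed)
    final_freqs = []
    for _ in range(n_pops):
        freq = start_freq
        for _ in range(generations):
            # Gene flow: a fraction m of the population is replaced by
            # immigrants carrying the source population's frequency.
            m = migrants_per_gen / pop_size
            freq = (1 - m) * freq + m * source_freq
            # Drift: binomial sampling of 2N gene copies.
            copies = sum(rng.random() < freq for _ in range(2 * pop_size))
            freq = copies / (2 * pop_size)
        final_freqs.append(freq)
    return final_freqs

# Without gene flow, most populations fix (frequency 0.0 or 1.0) within
# 200 generations; with one immigrant per generation, variation persists.
isolated = simulate_drift(migrants_per_gen=0)
connected = simulate_drift(migrants_per_gen=1)
print("isolated:", isolated)
print("connected:", connected)
```

Running this reproduces the qualitative pattern in the figure: the isolated populations end up fixed at 0% or 100%, while the connected populations keep bouncing around intermediate frequencies.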

Gene flow can also cause something called the “rescue effect.” Many things such as disease outbreaks, over-harvesting, natural disasters, etc. can cause populations to go through large genetic bottlenecks where they lose a large portion of their genetic diversity. That is very bad for them because, as mentioned earlier, diversity is important for the long-term survival of a population. Gene flow can, however, rescue those populations by resupplying them with some of the diversity that they had lost (Wright 1931, Slatkin 1985, 1987, Whiteley et al. 2015). In other words, if you have two neighboring populations, and one goes through a bottleneck while the other does not, the population that did not go through a bottleneck can rescue the bottlenecked population via gene flow.

Interactions with selection

When gene flow interacts with genetic drift, it is usually beneficial, but when it interacts with selection, it can be very harmful, because it can introduce alleles that aren’t locally adaptive, and those alleles can swamp selection (García-Ramos and Kirkpatrick 1997, Lenormand 2002, Kawecki and Ebert 2004, Foster et al. 2007, Funk et al. 2012). Remember, selection just adapts populations for their current environment. So selection for a given trait may be strong in one population, but weak, non-existent, or even reversed in a neighboring population. As a result, selection may be acting to remove an allele from one population, while gene flow is constantly reintroducing that allele from a neighboring population where the allele is actually beneficial (or at least not harmful).

This shows gene flow's ability to counteract natural selection. In the top panel (A), there is no gene flow, and selection removes one allele from the population as expected. In contrast, in the lower panel (B) there is a high level of gene flow, and selection never manages to remove the harmful recessive allele from the populations.

I simulated this and have presented the results on the right. The populations (top) were set up in exactly the same way as for my gene flow example, but this time, I added selection. Individuals that had the dominant phenotype had a 100% chance of surviving and reproducing, whereas individuals with the recessive phenotype only had a 90% chance (i.e., there was selection against the recessive allele). As you can see, selection quickly removed the recessive allele (thus the dominant allele had a 100% frequency after only a few generations). If I showed you the survivorship for these populations, you would also see that it had increased (i.e., they adapted).

Now, let’s do the exact same thing, but let’s have one immigrant per generation, just like in the gene flow example (bottom). As with the gene flow example, neither allele ever became fixed. The frequency of the beneficial dominant allele did shift up a bit from the starting point on average, but it never stayed at 100% like it did without gene flow because the gene flow was constantly reintroducing the harmful recessive allele. As a result, these populations could not adapt as well as the populations without gene flow, and they had a lower survivorship. This can be very problematic for populations, especially if they are being faced with something like an introduced disease. It is also a big concern for conservation efforts that move individuals among populations (thus creating artificial gene flow). The flip side of that, however, is that you can, in principle, do targeted gene flow, where you take individuals from a population that has a beneficial allele and move them into a population that would benefit from that allele (e.g., take individuals from a population that has adapted to a disease and move them into a population that is still declining from the disease; Kelly and Phillips 2015).
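The selection scenario can be sketched the same way. Again, this is an illustrative model rather than my original simulation code; it adds deterministic viability selection (recessive homozygotes survive at 90% relative fitness, as described above) before the drift step, with or without one immigrant per generation from a source population still at 50%.

```python
import random

def simulate_selection(pop_size=25, generations=200, start_freq=0.5,
                       recessive_fitness=0.9, migrants_per_gen=0,
                       source_freq=0.5, seed=7):
    """Drift plus viability selection against a recessive allele.

    q is the frequency of the recessive allele. Each generation:
    migration, then selection (recessive homozygotes, frequency q^2,
    survive with relative fitness 0.9), then binomial drift sampling.
    Returns the full trajectory of q.
    """
    rng = random.Random(seed)
    q = start_freq
    history = []
    for _ in range(generations):
        # Gene flow from the source population (if any).
        m = migrants_per_gen / pop_size
        q = (1 - m) * q + m * source_freq
        # Selection: mean fitness w_bar = 1 - 0.1 * q^2, and the
        # post-selection frequency of the recessive allele.
        w_bar = (1 - q**2) + q**2 * recessive_fitness
        q = (q * (1 - q) + q**2 * recessive_fitness) / w_bar
        # Drift: binomial sampling of 2N gene copies.
        copies = sum(rng.random() < q for _ in range(2 * pop_size))
        q = copies / (2 * pop_size)
        history.append(q)
    return history

no_flow = simulate_selection(migrants_per_gen=0)
with_flow = simulate_selection(migrants_per_gen=1)
print("final recessive frequency without gene flow:", no_flow[-1])
print("final recessive frequency with gene flow:", with_flow[-1])
```

Comparing the two trajectories shows the swamping effect: without gene flow, selection and drift push the recessive allele out, whereas immigration keeps reintroducing it.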

Conclusion

In short, gene flow simply refers to the movement of genetic information among populations. It is an incredibly important evolutionary mechanism because it can prevent populations from splitting apart and it can counteract genetic drift and rescue low diversity populations. However, in some situations it can also counteract natural selection and flood a population with maladaptive alleles. Thus, although it is often beneficial, gene flow can be harmful in some situations.

Literature cited (I ran out of time to look up the hyperlinks for all of the citations, sorry. I’ll try to add them in later, but in the meantime you should have no trouble finding them if you just paste the article title into Google Scholar.)

  • Foster, J. T., B. L. Woodworth, L. E. Eggert, P. J. Hart, D. Palmer, D. C. Duffy, and R. C. Fleischer. 2007. Genetic structure and evolved malaria resistance in Hawaiian honeycreepers. Molecular Ecology 16:4738–4746.
  • Funk, W. C., J. K. McKay, P. A. Hohenlohe, and F. W. Allendorf. 2012. Harnessing genomics for delineating conservation units. Trends in Ecology and Evolution 27:489–496.
  • García-Ramos, G., and M. Kirkpatrick. 1997. Genetic models of adaptation and gene flow in peripheral populations. Evolution 51:21–28.
  • Hedrick, P. W., and S. T. Kalinowski. 2000. Inbreeding depression in conservation biology. Annual Review of Ecology and Systematics 31:139–162.
  • Hughes, A. R., B. D. Inouye, M. T. J. Johnson, N. Underwood, and M. Vellend. 2008. Ecological consequences of genetic diversity. Ecology Letters 11:609–623.
  • Kawecki, T. J., and D. Ebert. 2004. Conceptual issues in local adaptation. Ecology Letters 7:1225–1241.
  • Keller, L. F., and D. M. Waller. 2002. Inbreeding effects in wild populations. Trends in Ecology and Evolution 17:230–241.
  • Kelly, E., and B. L. Phillips. 2015. Targeted gene flow for conservation. Conservation Biology 30:259–267.
  • Lacy, R. C. 1987. Loss of genetic diversity from managed populations: interacting effects of drift, mutation, immigration, selection, and population subdivision. Conservation Biology 1:143–158.
  • Lenormand, T. 2002. Gene flow and the limits to natural selection. Trends in Ecology and Evolution 17:183–189.
  • Reed, D. H., and R. Frankham. 2003. Correlation between fitness and genetic diversity. Conservation Biology 17:230–237.
  • Slatkin, M. 1985. Gene flow in natural populations. Annual Review of Ecology and Systematics 16:393–430.
  • Slatkin, M. 1987. Gene flow and the geographic structure of natural populations. Science 236:787–792.
  • Spielman, D., B. W. Brook, D. A. Briscoe, and R. Frankham. 2004. Does inbreeding and loss of genetic diversity decrease disease resistance? Conservation Genetics 5:439–448.
  • Whiteley, A. R., S. W. Fitzpatrick, W. C. Funk, and D. A. Tallmon. 2015. Genetic rescue to the rescue. Trends in Ecology and Evolution 30:42–49.
  • Whiteman, N. K., K. D. Matson, J. L. Bollmer, and P. G. Parker. 2006. Disease ecology in the Galapagos hawk (Buteo galapagoensis): host genetic diversity, parasite load and natural antibodies. Proceedings of the Royal Society of London B: Biological Sciences 273:797–804.
  • Whitlock, M. C. 2000. Fixation of new alleles and the extinction of small populations: drift load, beneficial alleles, and sexual selection. Evolution 54:1855–1861.
  • Wright, S. 1931. Evolution in Mendelian populations. Genetics 16:97–159.

No, homeopathic remedies can’t “detox” you from exposure to Roundup: Examining Séralini’s latest rat study

Image by Mark Philpott, shared via the Creative Commons license.

One of my main goals for this blog is to help people learn how to evaluate scientific studies. To that end, I have written several posts that dissect papers and explain either why they are robust or why they are untrustworthy (for example, see my posts on Splenda, GMOs, and vaccines). These posts have the dual goals of debunking bad science and helping people think critically, and the time has come for me to write another one of these posts. Earlier this week, someone showed me a recent study which they claimed proved that detoxing is a real thing and there are natural remedies that help your body rid itself of toxins. The study in question is, “Dig1 protects against locomotor and biochemical dysfunctions provoked by Roundup.” As you might imagine, it is less than an exceptional paper. Indeed, it was such a blatantly horrible paper that I thought it would make a good teaching tool to illustrate some of the things that you should watch out for in scientific studies. I’ll summarize the main points below, but I encourage you to read the paper for yourself and see if you can spot the problems with it before you read any further.

I have organized this post in a progression starting with problems that are concerning, but not fatal, then moving to issues that limit the paper’s conclusions, and ending with problems that completely nullify the paper. I have chosen this order because it is also the progression of knowledge required to spot the problems. Most people should be able to see the red flags that I will start with, so even if you don’t have the statistical knowledge to spot the more technical problems, you can still use those early warning signs as clues that the paper should be scrutinized closely before accepting it.

Authors and conflicts of interest

It is always a good idea to look at both the authors who wrote the paper and the funding sources. Some scientists have reputations for publishing crappy or even fraudulent research, and you should be wary of them. Similarly, financial conflicts of interest should make you more skeptical of a study. Having said that, I want to be absolutely, 100% clear that you cannot claim that a study is wrong simply because of the people who wrote it or their funding sources. Those things are red flags that should make you cautious and should make you look at a paper more closely, but they are not in and of themselves enough to sink a paper (i.e., using them as the basis for outright rejection is an ad hominem/genetic fallacy). Let me put it this way: if I have a study that has some sections that are unclear, but it was written by reputable scientists and did not have any conflicts of interest, then I will probably give the authors the benefit of the doubt. In contrast, if that same paper had been written by notoriously second-rate scientists and/or had serious conflicts of interest, I would be far less willing to give the authors a pass. Another consideration is the general body of literature surrounding the paper. Extraordinary claims require extraordinary evidence, and it is always suspicious when a paper that conflicts with a large body of literature was also written by a fringe scientist and funded by people who stand to benefit from the paper.

Now that all of that has been said, let’s look at the paper itself. The first thing that jumps out is the fact that the final author on this paper is Gilles-Éric Séralini (the last author position is usually reserved for the most senior scientist who was in charge of the overall project). Séralini, for anyone who doesn’t know, is infamous for publishing low-quality, fringe studies in opposition to biotechnology (specifically GMOs). Indeed, he was the author on the infamous rat study which purported to show that GMOs caused cancer in rats, but actually only showed that Séralini doesn’t understand the concept of a proper control. Indeed, the study was so horrible that it was retracted, at which point, Séralini re-published it through a minor and questionable journal that didn’t even bother to send the paper out for review (hardly the actions of a proper scientist).

We aren’t off to a good start, but things get even worse when we look at the funding. The paper is about the supposed benefits of a homeopathic product known as Digeodren (aka Dig1), but it was funded by the company that produces Digeodren (Sevene Pharma). The authors try to get around this by saying, “The authors declare that they have no competing interests. The development of Dig1 by Sevene Pharma was performed completely independently of its assessment,” but that is just a red herring. The fact that the development and testing of Digeodren were separate is completely irrelevant. The point is that the study was funded by the same company that both produces Digeodren and stands to benefit from it. That is, by any reasonable definition, a serious conflict of interest.

Again, to be 100% clear, I am not saying that the study is invalid because it was funded by Sevene Pharma, nor am I saying that it is invalid because it was conducted by Séralini, but, both of those things are serious red flags, and the rest of the study will need to be impeccable if we are going to overlook them.

The journal that published the paper

Another quick and easy thing to look at is the quality of the journal that published the paper. You need to be careful when using this tool, however, because there is plenty of good research that is published in minor journals simply because it is not of a “broad enough scope” or “wide enough impact” to interest major journals. So you need to judge journal quality against the claims being made in a paper. In other words, when a paper is making extraordinary claims but was published in a minor journal, you should be skeptical. As with the authors and conflicts of interest, however, this is not enough to sink a paper, but it is a red flag to watch out for.

So how does our paper do? Well, it is claiming not only that a homeopathic remedy works (more on that in a minute), but also that it can help to remove toxins from your body. Both of those are extraordinary claims that fly in the face of a large body of literature. In other words, if those claims were well supported, then this paper would be of extremely wide interest and should be published in a top journal. Therefore, the fact that it showed up in a fringe journal (BMC Complementary and Alternative Medicine) is yet another warning sign that something is seriously wrong with it.

Extraordinary claims require extraordinary evidence

As alluded to earlier, you should always consider the a priori plausibility of the claims being made in a paper (i.e., how likely they are to be true given the results of other studies). In other words, if a paper is simply reporting something that dozens of other papers have reported, then you don’t need to be too critical (you should still evaluate it, but it requires less scrutiny). In contrast, when a paper is reporting something extraordinary that conflicts with numerous other papers, then the paper needs to present extraordinary evidence to support its claims.

In this case, the claims of the paper are in fact quite extraordinary. First, it is testing a homeopathic remedy. I explained the folly of homeopathy in more detail here, but in short, it relies on the utterly absurd notions that diluting something makes it stronger, like cures like, and water has memory. In other words, homeopathy violates several of our most fundamental scientific concepts. Again, that does not automatically mean that it is wrong because it is always technically possible (albeit very unlikely) that those concepts are in fact flawed. However, if you want to claim that they are flawed, you need to provide some extraordinary evidence, and in the case of homeopathy, that evidence is nowhere to be found. Indeed, systematic reviews of the literature show that homeopathy is nothing more than a placebo. Similarly, detox supplements, shakes, foot baths, etc. are scams. Your body already does a very good job of keeping potentially harmful chemicals at safe levels, and no natural remedies have been shown to actually remove toxins.

Given the weight of evidence against the claims being made by this paper, it would need to be an outstanding study to be convincing. It would need enormous sample sizes, extremely rigorous controls, careful statistics, etc. In other words, it would need to meet an extremely high bar, but as I will demonstrate, it fails to do that.

The importance of the introduction

You can often tell a lot about a paper by its introduction (called the “Background” in this journal). This is where authors are supposed to review the current state of knowledge on the topic of the paper and make the case for why their study is interesting and important. When authors fail to do that convincingly, it is often a sign of underlying problems with the study.

In this case, the introduction is quite short and has several irregularities. First, many of the cited papers were other Séralini studies. That is not a good sign. There is a lot of other relevant literature out there that should have been included (much of which disagrees with Séralini’s studies). Similarly, several of the cited studies are questionable at best. Indeed, one of their central arguments hangs on a citation to the aforementioned GMO rat study that was so flawed that it was retracted.

Further, the authors cited several relevant papers about the properties of the active ingredients in Digeodren, but they totally failed to mention that Digeodren is a homeopathic remedy and those active ingredients are only present at extremely low concentrations (in this case about 1–10 parts in 100,000). They did mention this briefly in the methods, but its omission from the introduction is extremely troubling. If you are going to do a study on homeopathy, you had dang well better discuss the existing literature on that topic.

Methods: animal trials

Now we finally get to the core of the paper itself, and the first thing that jumps out is the fact that this was an animal trial. As I explained in more detail here, humans have a different physiology than other animals. As such, animal studies have a fairly limited applicability to humans. Therefore, they should be used to identify treatments that are good candidates for human trials, but you cannot jump from saying that something works in rats to saying that it works in humans. To be clear, I am not saying that the results of animal studies are wrong. Indeed, in many cases, the drug in question does in fact work in the species that was being tested, but the fact that it worked in that species does not automatically mean that it will work in humans. As a result, you need to be careful about applying the results of animal studies to humans.

Methods: experimental design

Their experimental design was pretty simple. They took a group of 160 rats and randomly divided them into four groups of 40. One group was kept as a control and did not receive any form of treatment, one group received Roundup in its water, one group received Digeodren in its water, and one group received both Digeodren and Roundup in its water. That’s not a terrible design, but it is also not a great design. A much better approach would have been to include a blocking element.

Imagine, for example, there was a slight thermal gradient in the lab where the rats were housed, and the cage rack containing the control mice ended up being on the warm end, while the cage rack with the Roundup mice ended up being on the cool end. That introduces a new variable and can have dramatic effects on the study. You’d be surprised how much a little thing like that can skew a result. Thus, a much better approach is to do what is known as “blocking.” To do this, instead of having four sets of cages, with each set containing a different group, you have members of each treatment group in each set of cages. In other words, for each set of cages, you randomly select 10 cages from each of your four treatment groups, that way, each set (what we would call a “block”) has 10 individuals from each treatment group (the position of the cages within each block should also be randomized). Now, if there is a thermal gradient (or any other confounding factor), it balances out because it affects all of your treatment groups equally. Further, you can (and should) include that blocking variable in your analyses to actually test for confounding factors across your sets of cages. Failing to block like that is not always fatal (depending on the type of experiment), but it does make me far less confident in the results, and remember, to accept this particular paper, it needs to be an extraordinarily good paper.

This shows three different setups for the same experiment comparing four groups of 40 rats (I am assuming one rat per cage). On the far left, you have what seems to be being described by this study (each experimental group is separate). This is a weak design. A better design is what you see in the middle where you have representatives from each experimental group within each "block." The best design is then to randomize the location of the cages within those blocks (as seen on the far right).
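The randomized block layout described above can be sketched in code. The treatment names below match the study's four groups, but the layout function itself is purely illustrative (assuming one rat per cage and four racks, as in the figure).

```python
import random

def blocked_layout(treatments, n_blocks, reps_per_block, seed=0):
    """Assign cages to rack positions using a randomized block design.

    Each block (e.g., one cage rack) receives `reps_per_block` cages
    from every treatment group, and cage positions are shuffled within
    the block, so a confounder like a thermal gradient across the room
    hits all treatment groups roughly equally.
    """
    rng = random.Random(seed)
    layout = []
    for _ in range(n_blocks):
        cages = [t for t in treatments for _ in range(reps_per_block)]
        rng.shuffle(cages)  # randomize positions within the block
        layout.append(cages)
    return layout

groups = ["control", "roundup", "dig1", "roundup+dig1"]
racks = blocked_layout(groups, n_blocks=4, reps_per_block=10)
for i, rack in enumerate(racks):
    # Every rack holds 10 cages from each of the four groups.
    print(f"rack {i}:", {g: rack.count(g) for g in groups})
```

In the analysis, "rack" would then be included as a blocking factor, which is exactly the step the paper's design makes impossible.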

A second issue is that this experiment wasn’t blinded. In other words, the researchers knew which rats were in each treatment group. That makes it very easy for their biases to inadvertently influence the experiment, especially given that one of the researchers has a reputation for publishing agenda-driven papers (again, even a slight difference in how the rats were treated could have affected things).

Note: the authors were a bit vague about how their cages were set up, so it is not clear how many rats were in each cage or how many sets of cages there were. However, it is clear that they did not use a proper blocking design.

Methods: The doses

Anytime that you are looking at a toxicology study, you have to look at the doses to see if they are reasonable. Remember, everything (even water) is toxic at a high enough dose. So when a study is looking at an environmental chemical like Roundup, it is important that they use a dose that you would realistically be exposed to in the environment. Otherwise, the study has no real applicability.

In this study, the rats in the Roundup group were given 135 mg/kg of Roundup daily. After converting that to a human dose, we find that it is the equivalent of a human consuming 21.9 mg/kg daily. That is an insanely high dose. The exact acceptable daily intake (ADI) for glyphosate (i.e., the active ingredient in Roundup) varies by country, but it is much lower than that. In Australia, for example, it is 0.3 mg/kg, whereas the WHO sets it at 1 mg/kg. The dose in the experiment is also well above the levels that people are normally exposed to. Even if I want to be generous and assume that the questionable estimates put forth by the “detox project” are correct and people in the US are eating up to 3 mg/kg of glyphosate daily, the dose that the rats received is still more than seven times that!
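The rat-to-human conversion above follows the standard body-surface-area scaling (FDA Km factors: 6 for rats, 37 for adult humans), which reproduces the 21.9 mg/kg figure:

```python
def human_equivalent_dose(animal_dose_mg_per_kg, animal_km=6, human_km=37):
    """Convert an animal dose (mg/kg) to a human-equivalent dose (mg/kg)
    using body-surface-area scaling: HED = dose * (animal Km / human Km)."""
    return animal_dose_mg_per_kg * animal_km / human_km

hed = human_equivalent_dose(135)  # the rats' daily Roundup dose
print(f"human-equivalent dose: {hed:.1f} mg/kg")      # ~21.9 mg/kg
print(f"vs. WHO ADI (1 mg/kg): {hed / 1.0:.0f}x")
print(f"vs. Australian ADI (0.3 mg/kg): {hed / 0.3:.0f}x")
```

So even against the most permissive regulatory threshold, the experimental dose is more than 20 times the acceptable daily intake.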

To put it simply, this study is worthless because the dose is so unrealistic. Even if the authors had successfully demonstrated that Digeodren did something useful when faced with those levels, that would not in any way, shape, or form indicate that it does anything useful when exposed to normal levels of Roundup.

Methods: Statistics

Finally, we get to the biggest problem with this study (IMO), and this one would sink it even if it were the only thing wrong with the paper. It is a problem that I write about a lot on this blog, so you may already know where I am going with this. The problem is multiple comparisons. In technical terms, the authors failed to control the family-wise type 1 error rate. In layman’s terms, this was a statistical fishing trip. They simply did enough comparisons that they eventually got a few that were “significant” just by chance.

I’ve previously written lengthy posts about this, but to be brief, standard statistical tests like the ones the authors used rely on probabilities for determining statistical significance. In other words, they report “P values” that give the probability of getting an effect as large as or larger than the one you observed if there is not actually an effect. It’s not technically correct, but you can think of this as the probability that you could get your result just by chance. To apply this to our study, they were looking for differences among their groups, so the P values were the probabilities of getting differences as large as or larger than the ones they observed if the treatments don’t actually cause a difference. To actually determine if something is “statistically significant” we compare it to a pre-defined threshold known as “alpha.” In biology, the alpha value is usually 0.05, so any P value less than that is considered significant. What a P value of 0.05 really means, however, is that there is a 5% chance that you could get a difference that large or larger just by chance. This is really important, because it means that you will occasionally get “significant” results that arose just by chance, and we call those statistical flukes type 1 errors.

Following all of that, it should make intuitive sense that as you make more comparisons, the odds of getting at least one false positive increase. In other words, if you do enough comparisons, you will eventually find some results that are statistically significant just by chance. So your error rate across all of your tests is actually much higher than 0.05. This is what we call the family-wise type 1 error rate, and it is extremely important. To compensate for it, you should do two things. First, at the outset of your study, you should have a clear prediction of what you expect to be true if your hypothesis is correct, and you should only make the comparisons that are necessary for testing that prediction. You should not make a whole bunch of haphazard comparisons and hope that something significant comes out. Second, if you end up using multiple tests to answer the same question (e.g., does drug X work?) then you need to control the family-wise error rate by adjusting your alpha value (this is usually done through a Bonferroni correction). In its simplest terms, this makes the alpha more stringent as you increase the number of comparisons that you do.
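To make that arithmetic concrete, here is a quick Python sketch (my own illustration, not anything from the paper) of how the family-wise error rate grows with the number of independent comparisons, and how the Bonferroni correction compensates:

```python
# The arithmetic behind the family-wise type 1 error rate, and the
# Bonferroni correction that compensates for it.

def familywise_error_rate(alpha: float, k: int) -> float:
    """P(at least one type 1 error across k independent tests) = 1 - (1 - alpha)^k."""
    return 1 - (1 - alpha) ** k

def bonferroni_alpha(alpha: float, k: int) -> float:
    """Bonferroni correction: run each of the k tests at alpha / k instead."""
    return alpha / k

alpha, k = 0.05, 29  # 29 comparisons, as in the paper

print(round(familywise_error_rate(alpha, k), 3))                       # 0.774
print(round(bonferroni_alpha(alpha, k), 5))                            # 0.00172
print(round(familywise_error_rate(bonferroni_alpha(alpha, k), k), 3))  # 0.049
```

In other words, 29 comparisons each run at alpha = 0.05 give you roughly a 77% chance of at least one false positive, whereas a Bonferroni-corrected alpha of about 0.0017 pulls the family-wise rate back to just under 5%.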

So, how did our intrepid scientists do? In short, not well. They made a whopping 29 comparisons, only 8 of which showed any form of significance, and only 6 of which showed significance in a direction that would suggest that Digeodren does anything useful. Further, they did not control the error rate among these tests. In other words, they did exactly the opposite of what you are supposed to do. They went on a fishing trip looking for significance rather than only testing a small set of pre-defined expectations. They made so many comparisons that they got some statistically significant results just by chance. To put this another way, if I set up the exact same experiment with four groups of rats, but I did not give any of them Digeodren or Roundup, and I made the same 29 comparisons among those four groups, I would expect to get several significant results, even though I treated all four groups exactly the same. Their results are statistical flukes, nothing more.
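You don’t have to take my word for that thought experiment, either; it is easy to simulate. Under the null hypothesis (no real differences anywhere), P values are uniformly distributed between 0 and 1, so we can mimic running 29 comparisons on identical groups, over and over (again, this is just an illustration; none of these numbers come from the paper):

```python
import random

random.seed(42)  # seeded so the illustration is reproducible

ALPHA, COMPARISONS, TRIALS = 0.05, 29, 10_000

experiments_with_a_fluke = 0
total_flukes = 0
for _ in range(TRIALS):
    # Under the null hypothesis (no real differences), each test's
    # P value is uniformly distributed between 0 and 1.
    flukes = sum(1 for _ in range(COMPARISONS) if random.random() < ALPHA)
    total_flukes += flukes
    experiments_with_a_fluke += flukes > 0

print(experiments_with_a_fluke / TRIALS)  # roughly 0.77
print(total_flukes / TRIALS)              # roughly 1.45 false positives per experiment
```

So even with absolutely nothing going on, an experiment like this one should expect one or two “significant” results, and will find at least one about three times out of four.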

Finally, they did not report their P values for each comparison, which means that we can’t even properly assess their results (see the note below). If they had reported a table of P values like they should have, we could do the Bonferroni correction ourselves, but since they failed to do that, we have nothing to go on.

To be clear, in most cases, the fact that an author did not control their error rate would not automatically mean that their results were statistical flukes, but it would mean that we should consider their paper untrustworthy and reject it. However, in this particular case, there is another important factor to consider. Namely, all of the existing evidence that homeopathy doesn’t work. When you consider that evidence, and the low quality of the experimental design of this particular study, the most rational conclusion is that the results are wrong rather than simply untrustworthy.

Note: If you read the paper, you will see a reference to a Bonferroni test as well as P values, but they only used those within a test rather than across tests. In other words, the tests that they were using (ANOVA and Kruskal-Wallis) make comparisons among several groups (in this case the four treatment groups) and report a single P value that tells you whether or not at least one significant difference exists among those groups. Then, if you get a significant result, you make pairwise comparisons among all of your groups and get individual P values for each comparison. So they reported the P values and controlled the error rates for those individual comparisons within each ANOVA, but I am talking about the P values across ANOVAs, because you should never even do the individual comparisons unless the ANOVA itself is significant, and if you don’t control the error rate across ANOVAs (as they didn’t), a lot of your ANOVAs will be false positives. In other words, they did 29 ANOVAs/Kruskal-Wallis tests, each of which compared four groups, and they controlled the error rates for the post-hoc comparisons of the four groups, but not for the ANOVAs themselves.

Conclusion

In summary, this paper is riddled with problems and is little more than a steaming pile of crap. It had major conflicts of interest, was written by an author with a reputation for publishing shoddy, agenda-driven studies, was published in a fringe journal, made inadequate references to the relevant literature, used a sub-par experimental design that failed to incorporate blinding procedures, and (most importantly) made an astounding 29 comparisons without bothering to control the error rate. This paper is a statistical fishing trip. The authors simply made so many comparisons that they eventually got a few that were significant just by chance. This is a common tactic that is frequently employed by pseudoscientists (and sometimes legitimate researchers as well) and you should learn to spot it.


Don’t mistake an assumption for a fact

I want you to imagine for a minute that you have been selected for jury duty, and you are sitting in the courtroom listening to the evidence. As you watch, the prosecution calls an expert forensic scientist to the stand, and they carefully explain the facts concerning the forensic evidence, all of which point towards the defendant’s guilt. Then, the defendant’s lawyer stands up and shouts, “Objection, this witness has been paid off to lie about my client!” After a moment of shocked silence, the judge says, “That is quite a claim. What evidence do you have to support it?” The lawyer then responds simply by saying, “only someone who had been bought off would say things like that against my client.” Now, what do you think is going to happen next? Is that a reasonable defense that the judge will accept? Obviously not! The defense is making an extraordinary assumption, and it is clearly invalid to do so.

In an example like that, the problem is obvious. You can’t just make things up to dismiss facts that are inconvenient for you. Indeed, when a person’s fate hangs in the balance, we all want the arguments and evidence to be based on facts, not assumptions. Nevertheless, when it comes to science and many aspects of our daily lives (such as politics), people are often more than happy to accept assumptions, and people frequently state them as if they are facts. Therefore, I am going to provide several examples of this flawed line of reasoning, and explain why it not only doesn’t work, but often commits a logical fallacy.

The first example is probably the most common one that I encounter, and it is highly analogous to my courtroom example. I recently engaged in a Facebook debate with someone who made the bold claim that there is no empirical evidence for climate change. I responded to that assertion by providing him with multiple papers that did in fact provide empirical evidence for climate change. How do you think he responded? You probably guessed it. He simply claimed that all of the scientists involved in those studies had been paid off (just like my fictional lawyer did). That argument was clearly fallacious in my courtroom example, and it is equally flawed here. You can’t just assume that scientists have been paid off any time that they publish a result that you don’t like. You have to provide actual evidence of corruption, otherwise you are making a baseless assumption. In fact, journals actually require scientists to declare any conflicts of interests, and failure to do so is a serious offense (it can result in loss of funding, papers being retracted, etc.). So there is no need to make assumptions about conflicts of interest, because you can easily check and see if they exist. Nevertheless, this is one of the most common arguments among those who reject the results of modern science, and I constantly encounter it among anti-vaccers, those who oppose GMOs, climate change deniers, people who use alternative medicine and/or reject modern medicine, etc.

A very closely related line of attack is simply to accuse the person that you are debating of being a shill, astroturfer, etc.  Indeed, I constantly have to deal with people accusing me of having been bought off by big pharma, Monsanto, the government, lizard people, etc. In every case, however, it is a complete assumption (and a faulty one, as I actually pay money out of my own pocket to maintain this blog). You can’t just go around assuming that everyone who disagrees with you has been bought off. That’s not rational.

I could continue giving copious examples of this type of reasoning (e.g., assuming that “big pharma” has bought off the FDA and CDC), but I think that you get the basic picture, so I want to move on to explaining a bit more about why this type of argument is invalid. There are actually two logical fallacies being committed here. First, all of the specific examples that I have given so far commit a genetic fallacy (sometimes specifically an ad hominem fallacy). This fallacy occurs when you make an irrelevant or unjustified attack against the source of the information, rather than addressing the information itself (even a flawed source can sometimes be correct). I have previously talked about this fallacy at length (see previous hyperlinks), so I want to focus instead on the second fallacy. This one is more general and can occur even when a source is not being attacked. It is what is known as an ad hoc fallacy. It generally occurs as a response to an argument, rather than an argument itself, and it has the defining characteristic of proposing a solution that you would never accept unless you were already convinced of the claim that was being defended. In other words, an ad hoc fallacy is a response that is not based on evidence and whose sole purpose is to “solve” a problem in an argument that you are fond of.

That may have seemed a bit complicated, so let me explain by applying it to one of the examples that I used previously. In the example of the climate change denier, his claim was that there is no empirical evidence of climate change (that is the position that he was defending). I presented him with empirical evidence, thus clearly contradicting his claim. He then responded to that evidence by asserting that all of those scientists were really paid shills. However, there is no evidence to support that assertion, and there is no reason to think that the assertion is true other than a desire to believe the initial claim. So he was simply assuming that the scientists had been paid off, even though he had no evidence to support that claim. Do you see how this works? It is an assumption that is stated as if it is a fact, but there is no actual reason to think that the assumption is true. It serves no function other than patching the hole in the argument, and you wouldn’t accept it unless you were already convinced that the initial claim was true.

Nevertheless, not all assumptions are ad hoc fallacies, but that doesn’t make them any less flawed when they are used in debates. You simply cannot state assumptions as if they are facts. The hidden cancer cure conspiracies provide a great example of these, because they involve assumptions stacked on other assumptions. In order to argue that pharmaceutical companies are hiding a cure for cancer, you first have to assume that a cure actually exists, but that is a meritless assumption. Can you provide me with actual evidence that pharmaceutical companies have a cure hidden on their shelves? No, you can’t, which means that you don’t get to state the claim that they have a cure as if that claim is an evidence-based fact. Similarly, you also have to assume a rather large conspiracy in which everyone involved is willing to sit back and watch their friends and family members die. This conspiracy would have to involve not just the CEOs of companies, but also people on the ethics committee who approved the research, people in the FDA who approve the trials, accountants who handled the budgets, lab techs, the scientists themselves, etc. That’s an awful lot of people to be involved, and you cannot just go around inventing conspiracies like that without providing actual hard evidence of their existence. This conspiracy theory (and the vast majority of conspiracy theories) relies heavily on unsubstantiated assumptions, which is a big part of why it is irrational (they are also often ad hoc).

Indeed, if it were permissible to substitute assumptions for facts, then we could all do this anytime that we wanted. I could, for example, claim that smoking actually has health benefits, and that the tobacco industry scientists were actually correct about the safety of cigarettes, but big pharma paid off a ton of scientists to falsify data showing that cigarettes were dangerous because big pharma wanted to profit off of sick people. Now, you probably think, “that’s ridiculous!” and indeed it is, but it is no more ridiculous than claiming that pharmaceutical companies paid off scientists to falsify information on vaccines, or claiming that marijuana is only illegal because legalizing it would hurt pharmaceutical companies, or claiming that climate scientists are falsifying data, or claiming that Monsanto controls the world’s agriculture scientists, etc. All of these are baseless assumptions, and you cannot present them as if they are facts.

As I approach the end of this post, I want to remind everyone of one of the most important concepts in debates and rational thinking: the burden of proof. This states that the person making the claim is always required to provide legitimate evidence to support it. In other words, if you want to claim that scientists have been paid off, then it is your duty to provide actual evidence to support that claim, and if you cannot do that, then you are stating an assumption, not a fact, and your argument is illegitimate. Similarly, if you want to claim that companies are hiding cures, a conspiracy is afoot, etc., you must provide evidence to substantiate those claims. You simply cannot assume things that haven’t been verified, because if you could, then we could all dismiss every single argument that we don’t like simply by assuming the existence of some contrary evidence. Also, it is worth explicitly stating that the burden falls on the person making the claim: you have to show the evidence, not demand that your opponent disprove you. In other words, if you are claiming that a conspiracy exists, you have to provide evidence that it exists, whereas I do not have to provide evidence that it doesn’t exist. That’s the way that the burden of proof works. In fact, saying “you can’t prove that it doesn’t exist, therefore it is valid to think that it does exist” is a logical fallacy known as an argument from ignorance.

Finally, although I have been speaking specifically in the context of debates, you really should apply this to your own views (regardless of which side of a topic you stand on). Think through your arguments and make sure that you can provide proper evidence from high-quality sources to back up each component of your reasoning. Seek out your assumptions, test them against the evidence, and if they cannot be verified or strongly supported, reject them. Admitting that, “I don’t know” is far, far better than insisting that you do know, when in fact you are simply making an assumption.

Note: I fully expect someone to read this and say, “but we use assumptions all of the time in our daily lives and they are useful.” That is true, but it is irrelevant for several reasons. First, the assumptions that we make in our daily lives are very often flawed and frequently get us into trouble. They are, in fact, unreliable, which is why they aren’t valid substitutes for evidence and can’t be used in scientific debates. Second, many of the things in daily life that we often call “assumptions” are actually evidence-based conclusions. For example, if my PhD adviser emails me and asks me to meet them in their office at 3:00, I am going to “assume” that they mean 3:00PM, but that is not really an assumption (at least not in the way that I have been using the word in this post). Rather, it is a probabilistic conclusion based on evidence. I say that, because every single time that my adviser has ever asked to meet with me, it has been a meeting during daylight hours. Thus, there is no rational reason to think that they actually meant 3:00AM. Situations like that are clearly extremely different from creating conspiracy theories or assuming corruption just because you don’t like what a scientist published.


Debunking the creationist myth that mutations don’t produce new and useful information

Genetic variation is a fundamental requirement for evolution, but many evolutionary mechanisms (such as selection and genetic drift) actually remove variation from populations. Therefore, evolution is entirely reliant on the formation of new genetic information, and without it, evolution would grind to a halt. Creationists often seize on this fact and erroneously claim that evolution is impossible because we have never found a mechanism that is capable of creating new genetic information. This claim is, however, completely false, because mutations do, in fact, create new genetic information. Nevertheless, many creationists respond to that fact by insisting that mutations simply “rearrange” existing genetic information, rather than creating “new” information. Therefore, I want to briefly explain why this argument is fundamentally flawed.

Hopefully everyone recognizes this, but if not, it’s Mr. DNA from Jurassic Park

Before I can explain the problems with this argument, we need to be clear about the basics, and you need to have at least a rudimentary understanding about how DNA works. DNA (or deoxyribonucleic acid if you prefer) consists of four bases: adenine (A), guanine (G), thymine (T), and cytosine (C). These four bases get arranged into groups of three, and each group of three codes for an amino acid. Those amino acids then get strung together to form a protein, and those proteins combine to form tissues. Thus, your DNA is your body’s blueprint, and it tells your body which amino acids to make, and how to combine them to make proteins, tissues, organs, etc.

Please note that although what I have presented here is the most fundamental concept for you to understand, actual DNA also includes stop codons, start codons, and many other complexities that are irrelevant for this post.

Now that you understand the basics of DNA, we can talk about mutations. I explained them in far greater detail here, but in short, they are simply random changes to the genetic code (the ones that are important for evolution usually occur during the formation of egg and sperm cells). There are many different types of mutations such as inversions (which flip a segment of DNA), deletions (which remove base pairs), insertions (which add extra base pairs), substitutions (which insert the wrong base into a chain [e.g. a T instead of a C]), duplications (which duplicate a segment of DNA), and several others. In every case, however, they change the genetic code, and by changing the code, they can change the amino acids and ultimately the proteins that are produced (note: there is redundancy in the way that amino acids are coded, so not all mutations result in changes downstream).
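For the programmers among my readers, these mutation types are easy to picture as simple operations on a string of bases. Here is a toy sketch (the sequence is made up purely for illustration):

```python
# Toy illustration of common mutation types as string operations on a
# short, made-up DNA sequence (positions are 0-indexed).

seq = "ATGCTTACG"

substitution = seq[:4] + "C" + seq[5:]              # wrong base: T -> C at position 4
insertion    = seq[:3] + "G" + seq[3:]              # extra base added at position 3
deletion     = seq[:3] + seq[4:]                    # base at position 3 removed
duplication  = seq[:6] + seq[3:6] + seq[6:]         # segment CTT (positions 3-5) repeated
inversion    = seq[:3] + seq[3:6][::-1] + seq[6:]   # segment CTT flipped to TTC

print(substitution)  # ATGCCTACG
print(insertion)     # ATGGCTTACG
print(deletion)      # ATGTTACG
print(duplication)   # ATGCTTCTTACG
print(inversion)     # ATGTTCACG
```

In every case, the resulting string (i.e., the resulting genetic code) is different from the original, which is the whole point of the next section.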

At this point, we can examine creationists’ claim that mutations simply rearrange information rather than producing new information. The most obvious problem with this is simply that some mutations (like insertions) do actually insert entirely new base pairs. In other words, they don’t rearrange the code that is already there. Rather, they add new bases (i.e., new information) to that code.

The second problem is really the more important and fundamental one. Namely, this argument seems to be using the word “new” in a rather peculiar sense, because rearranging the existing bases does, in fact, produce a new code that often results in the production of different amino acids and new proteins. Let me illustrate. A coding strand of DNA that contains the sequence CTT would code for the amino acid Leucine. However, if a mutation rearranged those three bases so that they were TCT, that strand of DNA would code for the amino acid Serine instead of Leucine. Thus, by simply rearranging the existing bases, we created new information which produced a different amino acid. In other words, we added genetic variation to the population, because the individuals who receive that mutation will produce Serine, while the rest of the population is producing Leucine. Objecting to that by claiming that “no new information has been created” is really quite silly because the fact remains that individuals with that mutation are producing a different amino acid than everyone else. Something that codes for the production of an entirely different amino acid is, by any reasonable definition, “new information” (i.e., it is information that was not there before).
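Here is that exact example as code, using just the two entries we need from the standard genetic code (written as coding-strand DNA codons):

```python
# The CTT vs. TCT example from the text: rearranging the same three
# bases yields a codon for an entirely different amino acid.

CODON_TO_AMINO_ACID = {
    "CTT": "Leucine",  # standard genetic code, coding-strand DNA
    "TCT": "Serine",
}

original = "CTT"
mutated = original[1] + original[0] + original[2]  # swap the first two bases

print(CODON_TO_AMINO_ACID[original])  # Leucine
print(CODON_TO_AMINO_ACID[mutated])   # Serine
```

Same four-letter alphabet, same three bases, yet a different amino acid comes out the other end, which is exactly what we mean by new information.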

A useful analogy to help you conceptualize this is to think about letters in the alphabet. The English language has 26 letters (bases), and we combine those letters to form words (amino acids). We then arrange those words into sentences (proteins), and we arrange those sentences into paragraphs (tissues). Ultimately, we can use those paragraphs to make books, essays, etc. (organisms). Now, according to creationists’ reasoning, it should be impossible to make any new information by simply rearranging those 26 letters, but that is clearly absurd. We can arrange them one way and produce the works of Shakespeare. We can arrange them another way and produce “On the Origin of Species.” We can rearrange them yet again and produce the script to a Stargate episode. Indeed, there is a virtually infinite set of possibilities, each of which contains different information, and the situation is no different for DNA. We can arrange the bases one way and get a dinosaur, and we can rearrange that code (via mutations) and get a chicken. We can rearrange it yet again and get a whale, human, tree, bacteria, mushroom, etc. This notion that making new arrangements of the four existing bases doesn’t produce new information is absurd because everyone agrees that different arrangements of those bases produce very different organisms.

A big part of the problem here once again comes back to the definition of the word “new.” Creationists seem to think that evolution requires something that is completely and totally novel, such as a new base pair or, at the very least, an entirely new amino acid that has never existed anywhere before, but that is a straw man fallacy. Evolution does not require something that has never previously existed anywhere. Rather, it simply needs to have variation. Thus, any change to the genetic code is “new information” in an evolutionary sense, because it provides variation. A useful way to think about this is that evolution doesn’t need “new” information. Rather, it needs “different” information. In other words, all that it needs is a code that is different than the one that was there before.

To further illustrate what I mean by this, it is worth mentioning that even mutations that remove bases can actually produce new information in an evolutionary sense. Going back to the alphabet example, imagine that the document in question is a recipe, and imagine that the instructions get “mutated” by the random deletion of one of the steps. That actually provides new information because the end-product will be something different than the intended product (i.e., there will be variation for the trait). The same thing happens with DNA. If you remove the bases for a particular amino acid, then the final protein product will often be different from the one that was originally coded for, and that variation is all that evolution needs, regardless of whether or not you want to describe it as “new.” To give an actual example of this, HIV typically uses the CKR5 protein to enter macrophages and complete its lifecycle. However, people who have deletions (a type of mutation that removes bases) on both copies of their CKR5 gene are resistant to HIV because those mutations alter the protein, and HIV cannot bind to this new, modified protein (Dean et al. 1996; Sullivan et al. 2001). Thus, a loss of genetic material results in a new set of instructions, which causes cells to produce a protein with a new modification, and that new modification provides a beneficial function. Creationists may try to say that this example doesn’t illustrate the formation of “new” information because an existing protein was simply modified, but that is, once again, a straw man fallacy, because evolution just needs variation, and this mutation provides that (thus, from an evolutionary perspective, it is new information).
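To see just how much a deletion can change, remember that bases are read in groups of three, so deleting a single base shifts the reading frame and changes every codon downstream of the deletion. Here is a toy sketch of that frameshift effect (the sequence is made up; the real CKR5/CCR5 mutation deletes 32 bases, but the frame-shifting principle is the same):

```python
def codons(seq):
    """Split a sequence into complete three-base codons (leftover bases dropped)."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

original = "CTTTCTCTTTCT"              # reads as CTT TCT CTT TCT
mutated = original[:3] + original[4:]  # delete one base from the second codon

print(codons(original))  # ['CTT', 'TCT', 'CTT', 'TCT']
print(codons(mutated))   # ['CTT', 'CTC', 'TTT'] -- every codon after the deletion shifts
```

A single lost base rewrites the entire downstream instruction set, which is very clearly a different (i.e., new) set of instructions.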

Finally, you might be tempted to protest to all of this on the grounds that, “mutations are nearly always harmful,” but that claim is a myth. Most mutations are actually neutral at the time that they occur (i.e., they are neither beneficial nor harmful; Nachman and Crowell 2000; Eyre-Walker et al. 2007). Further, although harmful mutations certainly do exist, they are selected against, so they are really irrelevant. In other words, even if there were 10,000 times as many harmful mutations as beneficial mutations (which there aren’t), that wouldn’t preclude evolution, because nature would select against the harmful ones and for the beneficial ones. On that note, I want to be absolutely, 100% clear that beneficial mutations do exist and have been well documented (Newcomb et al. 1997; Dean et al. 1996; Sullivan et al. 2001; Shaw et al. 2002, 2003; Joseph and Hall 2004; Perfeito et al. 2007; see Halligan and Keightley 2009 for a good review). Indeed, we have done experiments with bacteria where we monitored populations for many generations, and observed the formation of novel, beneficial mutations that provided the bacteria with new information that allowed them to perform a novel function that they were previously incapable of (Blount et al. 2008; more detailed explanation in the citrate section of this post).

From the Star Trek TOS episode "I, Mudd"

In short, this notion that mutations can’t produce new genetic information is laughably absurd. DNA is simply the code that tells organisms what proteins and structures to make, and modifications to that code result in novel proteins and structures. Indeed, if you accept that dogs and cats are different because of differences in their DNA, then you have already accepted that rearranging genetic codes can, in fact, produce new information and result in vastly different organisms. Indeed, saying that rearranging DNA can’t produce new information is no different from saying that rearranging the letters of the alphabet can’t produce new information. Obviously it can, as is evidenced by this new blog post that I am writing. Additionally, when we say that evolution needs “new information,” we don’t mean that it needs something completely novel and totally different from anything else that has ever existed. Rather, we simply mean that there needs to be variation for traits. In other words, any slight modification to an existing trait qualifies as “new information” when talking about evolution. Finally, we have experimentally documented that mutations can produce that variation, and we have demonstrated that beneficial mutations do in fact occur and result in novel proteins that perform novel functions (i.e., the mutations created new information).

Literature Cited


Don’t attack the straw men: Straw man fallacies and reductio ad absurdum fallacies

People love to argue. We all have views and opinions, and we tend to promote them prominently and viciously attack opposing ideas. There is nothing inherently wrong with that as long as your views are evidence-based and you use proper logic when attacking your opponent’s position; however, many people fail at this and succumb to logical fallacies. One of the most common blunders is something known as a straw man fallacy. This occurs anytime that you misrepresent your opponent’s argument, then attack that misrepresentation instead of the view that they actually hold. It is a fairly simple concept, but it is often misunderstood, and it is rampant in debates (this year’s presidential election has been full of a sickening number of these fallacies). Therefore, I want to talk a bit about this fallacy and when it does and does not occur, as well as explaining a particular subset of straw man fallacies known as reductio ad absurdum fallacies.

 

Straw man fallacies

Let’s begin with the basics: what are straw man fallacies? To put it simply, they are distortions of an argument that usually present a weak and easily defeated version of the actual argument. In other words, one debater will claim that their opponent believes view X (which is a distorted and weakened version of what their opponent actually believes), then they will explain why X is wrong. The problem with this should be obvious. If the opponent does not actually believe X, then showing that X is wrong does nothing to address the opponent’s actual beliefs. In other words, it doesn’t matter if X is wrong if X isn’t actually what your opponent is claiming. Nevertheless, this fallacy can be an extremely persuasive (albeit invalid) debate tactic that many people are duped by.

On that note, it is worth mentioning that although straw man fallacies can be deliberate, and many people use them with the intention of deceiving their audience, they can also occur unintentionally. This usually happens when someone is ignorant about the topic that they are debating, and I frequently encounter these arguments when talking to people who reject scientific results. For example, one of the most common creationist arguments is, “if we evolved from monkeys, then why are there still monkeys?” This is a straw man fallacy because evolution does not state that we evolved from monkeys (or even great apes). Rather, it states that we share a common ancestor with them. Thus, by making this argument, creationists are not in any way shape or form presenting a legitimate criticism of the theory of evolution, because they are attacking a claim that evolution does not actually make. Similarly, I often encounter religious people who say that climate change can’t be true because their religion says that the earth won’t be destroyed, and climate change says that it will be destroyed. If you actually understand climate change, however, then the problem with that line of reasoning is obvious. Namely, climate change does not claim that we are going to destroy the earth. Climate change is a serious problem, but it won’t cause our extinction.

I wanted to use those two examples not to attack creationists and climate change deniers, but rather to illustrate an important point: you need to understand a given topic before you decide whether or not to accept it. Otherwise, your arguments will often be straw man fallacies, and they will make your opponents think that you are ignorant, rather than making them actually consider your position. Further, this is important for far more than just winning debates. I personally care more about knowing what is true than I care about winning a debate, but if I have not even bothered to learn the fundamental concepts of the opposing position, then I can’t have any confidence in my conclusions. You need to actually study a topic thoroughly before you reach a conclusion, and definitely before you try to debate someone on it.

What isn’t a straw man fallacy

When it comes to internet articles, public debates, and other venues where someone is not specifically debating you, do not assume that they are committing a straw man fallacy just because they did not address the specific argument that you personally think works. The distinction is this: if they attacked an argument that essentially no one actually uses, then they committed a straw man fallacy. However, if they attacked an argument that many people use, then they did not commit a fallacy, even if you do not personally use that argument.

Let me give you an example of what I mean. Last week, I wrote a post debunking 25 common arguments against climate change (mostly arguments that climate change isn’t happening or that we aren’t causing it). All 25 of them are arguments that I personally encounter frequently when debating people. Nevertheless, some people were quick to accuse me of committing a straw man fallacy, and they did so on the grounds that they personally accept that we are causing climate change, but simply debate the amount of change that will happen (which is not a line of reasoning that I addressed). So, did I commit a straw man fallacy? No! Every argument that I addressed is an argument that many people actually use. The fact that some people have arguments that I did not address does not make the arguments that I did address fallacious. I cannot predict the argument that every single reader of my blog will use.

To be clear, if I had made grand, generalizing claims like, “everyone who debates climate change believes these arguments” or “these are the only arguments against climate change,” then I would have committed a straw man fallacy. Similarly, if I was actually directly debating one of these people, and they said, “I accept that climate change is true, but I disagree about its extent,” and I responded by providing them with evidence that it was true, then I would have committed a straw man fallacy, because I would not actually be addressing the argument that they had made to me. I cannot, however, be held responsible for failing to predict every single argument that anyone anywhere in the world would ever make.

A similar example frequently occurs with anti-vaccers. I often write and share posts about vaccine effectiveness, and almost every time that I do, I get some angry anti-vaccer yelling at me with statements like, “This is such nonsense. The issue is about whether or not vaccines are safe, not whether or not they work!” As with the climate change arguments, there certainly are people who accept that vaccines work but erroneously think that the costs outweigh the benefits; however, there are also many people who do, in fact, deny that vaccines even work. So unless I am specifically addressing a group of people who are arguing about safety (rather than effectiveness), there is nothing fallacious about discussing vaccine effectiveness, because many people do actually argue that vaccines aren’t effective.
Reductio ad absurdum fallacies

At this point, I want to shift gears slightly and talk about another type of logical fallacy that is really just a special case of the straw man fallacy: reductio ad absurdum. That may sound like a Harry Potter spell, but it is actually a logical fallacy that occurs when you take a position, stretch it to an absurd conclusion that would not actually be supported by the original statement, then claim that the original statement must be wrong because the conclusion is clearly absurd. That may have sounded complicated, so let me give you a few examples.

On several occasions I have shared posts which explain that most people don’t need to take extra vitamins and dietary supplements because they already get a sufficient amount from their diet and their body can’t really utilize excess amounts. Whenever I share these posts, however, I almost invariably get responses like, “You’re such an idiot! You claim to be a scientist and you don’t even know that vitamins are important!? You would die without them!” Let’s think about this for a second. Did I claim that vitamins aren’t important or that your body doesn’t need them? No, I didn’t even imply it. There is a huge difference between saying that you don’t need to take excess vitamins and saying that you don’t need any vitamins. In other words, the argument that I presented states that most people in industrialized countries already get the vitamins that their bodies need from their diets, and they don’t need to take extras. Internet trolls then took that argument and stretched it to the absurd conclusion that vitamins weren’t necessary at all, then accused me of being an idiot based on that clearly absurd conclusion. Do you see the problem? The conclusion that they presented was based on a distortion of my argument, rather than the argument itself.

To give one more example, on several occasions I have shared posts that explain why juice cleanses and “super foods” can’t actually boost an already healthy immune system, and the wonderful people of the internet usually respond by asserting that it is obvious that a healthy diet is important and your immune system won’t function well if you’re malnourished. As with the vitamin argument, however, I never asserted that a healthy diet isn’t necessary. I was talking about boosting an immune system above its normal functioning levels, not basic nutrition. In other words, saying that you can’t boost a healthy immune system is not the same thing as saying that you can eat nothing but junk and expect to be healthy.
Reductio ad absurdum logic

This illustrates the correct use of reductio ad absurdum logic. The second stick figure is sarcastically illustrating that if the argument that science has been wrong in the past actually invalidated a current scientific result, then we could use that argument anytime that we wanted, but that would obviously lead to absurd conclusions. Note: sarcasm is not a requirement for reductio ad absurdum logic, but it is often included.

Finally, it is important to note that reductio ad absurdum logic can actually be applied without committing a fallacy if you can show that the actual argument that your opponent is using would lead to an absurd conclusion if it were applied consistently. As long as you do not distort the original argument, then this technique is not only valid but extremely powerful (it is one of my favourite tools).

Let me give you an example. I often encounter people who say things like, “all that I need to know about climate change is that Al Gore thinks it is happening. If he thinks that it is true, then it must be wrong!” This argument is technically a guilt by association fallacy, but we can easily demonstrate the flaw in it by using reductio ad absurdum logic. In this case, I usually counter this claim by pointing out that Al Gore also thinks that we are breathing oxygen, so if we use this argument consistently, then we must conclude that we are not, in fact, breathing oxygen. Do you see why that response works? I did not distort the argument; rather, I showed that it actually would lead to an absurd conclusion if it were a good argument. I can prove this by setting up two identical syllogisms.

Original argument:

  1. If Al Gore thinks that something is true, then it must be wrong
  2. Al Gore thinks that climate change is true
  3. Therefore, climate change is wrong

Analogous argument using reductio ad absurdum logic:

  1. If Al Gore thinks that something is true, then it must be wrong
  2. Al Gore thinks that we are breathing oxygen
  3. Therefore, we aren’t breathing oxygen

See how this works? I simply took the original argument, applied it to a different topic, and showed that if we apply that argument consistently, we arrive at an absurd conclusion. I provided many more examples of this debate tactic in this post on consistent reasoning, so please see it if you are confused. You should also watch John Oliver, because he wields this logical tool brilliantly (sometimes he does also slip into reductio ad absurdum fallacies, but that is generally to set up a joke rather than make a serious argument).
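For readers who like seeing the logic spelled out formally, the shared skeleton of both syllogisms can be sketched in the Lean theorem prover (the `Believes` predicate and variable names here are illustrative placeholders, not part of the original argument). The point of the reductio is that this argument *form* is perfectly valid, so the absurd conclusion can only be blamed on the false first premise:

```lean
-- A hedged sketch of the shared structure of both syllogisms.
-- `Believes` is a hypothetical predicate introduced purely for illustration.
example (Believes : Prop → Prop) (P : Prop)
    (premise1 : ∀ Q : Prop, Believes Q → ¬Q)  -- "If Al Gore thinks Q is true, then Q is wrong"
    (premise2 : Believes P) :                  -- "Al Gore thinks P is true"
    ¬P :=                                      -- "Therefore, P is wrong"
  premise1 P premise2
```

Because Lean accepts this proof for any proposition `P`, anyone committed to premise 1 is committed to the oxygen conclusion just as much as to the climate change conclusion, which is exactly what the reductio exposes.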
Conclusion

In short, straw man fallacies are simply distortions and misrepresentations of your opponent’s argument. They can be intentional or unintentional, but they are easy to avoid by simply being well-informed on the topic that you are debating. Nevertheless, many people continue to use them and incorrectly accuse other people of using them. Additionally, straw man fallacies include a special case known as the reductio ad absurdum fallacy, which occurs when an argument is stretched to an absurd conclusion that is not supported by the original argument. Although that strategy is fallacious when the argument is distorted in the process, it can also be a very powerful debate tool if you can demonstrate that the original argument itself actually leads to an absurd conclusion when it is applied consistently.

Note: I want to be clear that on topics like vaccines, climate change, evolution, etc. there really aren’t “two sides.” So when I say that you need to thoroughly study a topic before reaching a conclusion, I am not suggesting that you need to read a bunch of conspiracy blogs, creationist websites, etc. Rather, you need to study the peer-reviewed literature (including the handful of studies that disagree with the consensus). You don’t need to read unreliable sources in order to be well-informed. However, if you want to actually debate people about these topics, then you really should spend time studying those unreliable sources, because if you don’t, you will often end up committing straw man fallacies. Indeed, I have seen my fellow skeptics do that on several occasions (and I’ve probably unknowingly done it myself at some point).

Posted in Rules of Logic