The fallacy fallacy: Reject the argument not the conclusion

Two weeks ago, I wrote a post on the importance of understanding logical fallacies, and in that post, I made the following claim: “anytime that an argument contains a fallacy, that argument must be rejected.” Much to my surprise, many people took issue with this and brought up the fallacy fallacy (that’s not a typo). Some of those comments simply pointed out the existence of the fallacy fallacy (which I actually did in the aforementioned post as well), but many of them argued that I was wrong, or at least on shaky ground, because of the fallacy fallacy. For example, one person said, “of course simply pointing out that someone’s argument is a fallacy is a fallacy in and of itself,” and another said that, although I was not committing a fallacy fallacy, I was “flirting with encouraging individuals to commit ‘the fallacy fallacy’” (those are exact quotes, not paraphrases). Thus, it appears that this topic may not be very well understood, so I want to spend this post talking about it, because it is an important concept to grasp.

My original statement was correct and in no way misleading. Any time that an argument contains a logical fallacy, that argument is flawed and you must reject that argument. However, it is possible for a flawed argument to have a true conclusion. So, the fallacy fallacy only occurs when a bad argument leads you to reject the conclusion rather than the argument.

Logical fallacies are indeed disturbing.

As I explained in the previous post, deductive logical arguments should be set up such that if the premises are true, then the conclusion must also be true. In other words, the conclusion must follow necessarily from the premises (an argument with this property is known as a “valid” argument). However, logical fallacies often present an invalid logical structure in which the conclusion does not follow necessarily from the premises (in other cases they may operate by doing things like assuming false premises). Thus, logical fallacies are errors in reasoning and result in arguments that either aren’t valid or aren’t sound (a sound argument is one that is valid and has only true premises). Therefore, anytime that an argument contains a fallacy, the argument itself is flawed. The logical structure does not work, and you simply cannot use that argument in support of the conclusion. This is fundamental and vitally important to understand: you must always reject a flawed argument. If an argument contains a fallacy, then the argument does not work, and you cannot use it. However, that does not necessarily mean that the conclusion is false.
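One way to make “valid” concrete: for simple propositional argument forms (which are simpler than the quantified syllogisms discussed below), validity can be checked by brute force over every truth assignment. This is a sketch of my own, not anything from the post; the function names are illustrative:

```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    """An argument form is valid iff no truth assignment makes every
    premise true while making the conclusion false."""
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # found a counterexample to the form
    return True

# Modus ponens: "if P then Q; P; therefore Q" -- a valid form.
modus_ponens = is_valid(
    premises=[lambda p, q: (not p) or q, lambda p, q: p],
    conclusion=lambda p, q: q,
    n_vars=2,
)

# Affirming the consequent: "if P then Q; Q; therefore P" -- invalid.
affirming_consequent = is_valid(
    premises=[lambda p, q: (not p) or q, lambda p, q: q],
    conclusion=lambda p, q: p,
    n_vars=2,
)

print(modus_ponens, affirming_consequent)  # True False
```

Note that this check says nothing about soundness: even a valid form only guarantees the conclusion when its premises are actually true.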

This is where fallacy fallacies come in. If you tell someone that their argument is wrong because it contains a fallacy, then you are adhering to the rules of logic and have not done anything wrong. However, if you tell them that their conclusion is wrong because the argument contains a logical fallacy, then you have committed a fallacy fallacy, because a bad argument tells you absolutely nothing about the conclusion.

Let me illustrate this using an example from the previous post. The following argument is not valid because it contains an affirming the consequent fallacy.

  • Premise 1: All men are mortals
  • Premise 2: Socrates is a mortal
  • Conclusion: Therefore, Socrates is a man

This is a bad argument. Because of the affirming the consequent fallacy, the conclusion does not follow necessarily from the premises (i.e., not all mortals are men). Thus, we must reject this argument. We simply cannot use this argument as a reason for thinking that Socrates is a man, but in this case, the conclusion is still true. Indeed, if you think about this, you should realize that it is always possible to construct a bad argument for a true conclusion. For example, I could say,

  • Premise 1: Aliens hate goats
  • Premise 2: Aliens like waffles
  • Conclusion: Therefore, the earth is spheroid

That argument is clearly nonsense. It doesn’t make the slightest bit of sense (it’s a non-sequitur fallacy), and both premises are rather bizarre assumptions, but the conclusion is still true! Nevertheless, although it is possible to have a bad argument with a true conclusion, in many cases bad arguments do, in fact, lead to false conclusions (see previous post). In contrast, a sound logical argument guarantees that the conclusion is true. So, I reiterate that flawed arguments (including ones that contain logical fallacies) tell you nothing whatsoever about the conclusion. They provide you with absolutely zero evidence for or against it.

So, what does all of this mean practically for you? How should you deal with this in debates? Well, that really depends on whether or not the burden of proof is on you. Remember, the person making the claim is always responsible for providing evidence for that claim, whereas the other person is under no obligation to refute that claim (at least until actual evidence has been provided). So, let’s imagine first that you are not the one making the claim, and the burden of proof is on your opponent. Further, they claim that X is true because of argument Y (in other words, they are using argument Y to support conclusion X). However, you discover a logical fallacy in argument Y. At that point, you should point out that fallacy and reject argument Y; however, you should not make any claims about conclusion X without first introducing other evidence/arguments (more on that in a minute). In other words, the fact that argument Y is flawed tells you nothing about conclusion X, but because the burden of proof is not on you, you aren’t required to do anything else. They have to provide a new line of evidence/reasoning to demonstrate that conclusion X is true, and you are not obligated to accept X or take it seriously until they present that evidence.

Nevertheless, you may have evidence showing that conclusion X is in fact false, in which case, you are welcome to present that evidence and use it to refute X. In other words, saying “argument Y contains a fallacy, therefore conclusion X is false” is a fallacy fallacy, but there is absolutely nothing wrong with saying, “argument Y has a fallacy and, therefore, does not support conclusion X, however, we can tell that conclusion X is false because of argument/evidence Z.” In other words, you can (and indeed should) point out logical fallacies to demonstrate flaws in your opponents’ reasoning, but if you want to actually say that their conclusions are wrong (rather than simply that their arguments are wrong) then you have to present actual evidence to the contrary.

This brings me to the final scenario: situations where the burden of proof is on you. In these situations, you are making the claim and, therefore, it is your duty to present actual evidence. As such, if your opponent points out a logical fallacy in your argument, you must reject that argument and either present new evidence/reasoning or admit defeat. They are not obligated to disprove your conclusion, and you cannot continue to use the flawed argument. Thus, you are obligated to present a new, sound argument and real evidence in support of the conclusion.

In short, any time that an argument contains a logical fallacy, you must reject that argument. I stand by that initial claim. However, the presence of a fallacy (or other problem with the argument) tells you nothing about the conclusion. Therefore, you must always reject the argument, not the conclusion, otherwise you’re committing a fallacy fallacy. Further, to actually reject the conclusion, you need additional evidence/arguments that show the conclusion to be false.

Posted in Rules of Logic | Tagged | 2 Comments

The importance of logical fallacies

From the Star Trek TOS episode “I, Mudd”

As anyone who frequents this blog knows, I spend a lot of time talking about logical fallacies. I frequently criticize people’s arguments for having them, and I present them as a reason for rejecting particular lines of thought. Nevertheless, many people fail to realize just how important they are, and showing someone that they have committed a fallacy rarely makes them reject their argument. Indeed, I once had someone say, “just because my argument technically contains a fallacy doesn’t mean that the underlying logic is wrong.” In reality, however, that is exactly what it means. Logical fallacies are, by definition, flawed lines of reasoning, and anytime that an argument contains a fallacy, that argument must be rejected. Therefore, understanding logical fallacies is critical for analyzing arguments and holding rational views, and in this post, I want to try to explain why fallacies are so important, how to detect them, and why their presence destroys an argument.

The structure of an argument

All arguments can be broken down into premises and conclusions. The premises are the facts that you are presenting, the conclusion is the thing that you are arguing for, and the goal is to set up the argument such that the conclusion must follow necessarily from the premises. In other words, for an argument to be a good argument, it must be set up such that if the premises are true, then the conclusion must also be true (this is what we call a “valid argument”). Additionally, the premises must, of course, actually be true (when both conditions are met, the argument is said to be “sound”). For now, I want to focus on the requirement that the conclusion must follow necessarily from the premises, but we will come back to the true premise requirement later.

 Note: I am talking specifically about deductive arguments here and throughout this post. There are other types (such as inductive and probabilistic) in which the premises show that the conclusion is most likely true, rather than that it must be true.

 To illustrate how this works, let me use the following example (this is set up in what is known as a syllogism).

  • Premise 1: Bill is larger than Bob
  • Premise 2: Bob is larger than Tom
  • Conclusion: Therefore, Bill is larger than Tom

This is a logically valid argument. In other words, as long as those premises are true, then the conclusion must also be true. There are no other options. If Bill is larger than Bob, and Bob is larger than Tom, then it must be true that Bill is larger than Tom (this example is also an illustration of the transitive property). Importantly, you should note that the underlying logical structure is what matters here. As long as that structure works (which it does), we can replace those premises with any other true premises, and the resulting conclusion will be true (as long as we haven’t changed the underlying structure). In other words, we can reduce this argument to the following logical structure:

  • Premise 1: A is larger than B
  • Premise 2: B is larger than C
  • Conclusion: Therefore, A is larger than C

Now, we can replace A, B, and C with any true facts, and the argument will work. For example,

  • Premise 1: Jupiter is larger than earth
  • Premise 2: The earth is larger than the moon
  • Conclusion: Therefore, Jupiter is larger than the moon

Or

  • Premise 1: A train is larger than an ant
  • Premise 2: An ant is larger than a bacterium
  • Conclusion: Therefore, a train is larger than a bacterium

I could keep going, but hopefully you get the point. It doesn’t matter what premises I use, or how disparate the items in them are. As long as the premises are true and I retain the same logical structure, the conclusion must be true. Further, if you can find a single example in which this structure and true premises result in a demonstrably false conclusion, then you have shown that the argument’s structure must be flawed. In other words, for a deductive argument, the logical structure must work 100% of the time, or else the logical structure is flawed.
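The “find a single example” test can even be mechanized. Here is a hypothetical Python sketch of mine (the names are my own, not from the post) that randomly searches for a case where the “larger than” premises hold but the conclusion fails:

```python
import random

def counterexample_search(premises_hold, conclusion_holds, trials=10_000):
    """Randomly search for a case where the premises hold but the
    conclusion fails. Finding one proves the form invalid; finding
    none merely fails to refute it (this is not a proof of validity)."""
    for _ in range(trials):
        a, b, c = (random.randint(0, 100) for _ in range(3))
        if premises_hold(a, b, c) and not conclusion_holds(a, b, c):
            return (a, b, c)  # a counterexample to the form
    return None

# "A is larger than B; B is larger than C; therefore A is larger than C"
premises = lambda a, b, c: a > b and b > c
conclusion = lambda a, b, c: a > c

print(counterexample_search(premises, conclusion))  # None: no counterexample found
```

Swapping the conclusion for, say, `c > a` would turn up a counterexample almost immediately, which is exactly the kind of failure that exposes a broken structure.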

It may seem like I am off topic here, but understanding this is really important, because, as I will explain below, many logical fallacies operate by breaking an argument’s logical structure. In other words, they change the argument so that the conclusion does not follow necessarily from the premises.

Non-sequitur fallacies

Now that you understand the importance of a logical structure, let’s look at a large family of fallacies collectively known as non-sequitur fallacies. These occur anytime that an argument’s structure is such that the conclusion does not follow necessarily from the premises, but there are many specific subcategories and types of fallacies within that overarching umbrella term.

To begin, let’s look at what is probably the most common example in all of philosophy. Consider the following deductive argument:

  • Premise 1: All men are mortals
  • Premise 2: Socrates is a man
  • Conclusion: Therefore, Socrates is a mortal

We can reduce this argument to the following structure:

  • Premise 1: All X are Y
  • Premise 2: Z is X
  • Conclusion: Therefore, Z is Y

That may seem confusing, but if you think about it for a second, you should be able to convince yourself that it will work 100% of the time. If all X are Y, and Z is X, then Z must also be Y.

Now, consider the following extremely similar argument:

  • Premise 1: All men are mortals
  • Premise 2: Socrates is a mortal
  • Conclusion: Therefore, Socrates is a man

An example of an affirming the consequent fallacy

Now we have a problem. This argument does not work. The conclusion does not follow necessarily from the premises, and the reason for that is a logical fallacy known as affirming the consequent. This fallacy alters the logical structure in a way that prevents the premises from leading necessarily to the conclusion. We can write it as follows:

  • Premise 1: All X are Y
  • Premise 2: Z is Y
  • Conclusion: Therefore, Z is X

Again, if you think about that for a minute, you should see the problem. The fact that all X are Y does not mean that all Y are X. Thus, it is possible for Z to be Y, but not X. We can easily illustrate this with an example.

  • Premise 1: All men are mortals
  • Premise 2: My pet iguana is a mortal
  • Conclusion: Therefore, my pet iguana is a man

Obviously, that doesn’t work. It is clearly a bad argument. It has an invalid logical structure in which the conclusion does not follow necessarily from the premises, and, as a result, it produces an incorrect conclusion. Remember, if a deductive logical structure is valid, then it must produce true conclusions 100% of the time (when supplied with true premises). Therefore, the fact that my example has an incorrect conclusion proves that this structure is invalid.
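The difference between the valid form and the fallacious one can be modeled with sets (my own illustration, reusing the post’s examples): “all X are Y” just says X is a subset of Y, and the two inference directions behave very differently.

```python
# Model "all men are mortals" as a subset relationship between sets.
men = {"Socrates", "Bill"}
mortals = men | {"my pet iguana", "Rover"}  # mortals that are not men exist

assert men <= mortals  # Premise 1 holds: all men are mortals

# Valid form ("Z is X, therefore Z is Y"): being in the subset
# really does guarantee being in the superset.
assert all(z in mortals for z in men)

# Affirming the consequent ("Z is Y, therefore Z is X"): being in the
# superset does NOT guarantee being in the subset.
counterexamples = mortals - men
print(sorted(counterexamples))  # ['Rover', 'my pet iguana']
```

The iguana sits in `mortals` but not in `men`, which is precisely the counterexample that sinks the fallacious form.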

Now, what does this have to do with affirming the consequent fallacies? Well that name, “affirming the consequent” is simply the term that we use to describe this logical structure. In other words, by demonstrating that this logical structure is invalid, I have shown that an argument that contains this structure (i.e., that contains an affirming the consequent fallacy) is invalid. This is why it is so important to understand logical fallacies and take them seriously when they are pointed out to you: they result in arguments with invalid logical structures. In other words, they create arguments in which the truth of the premises does not guarantee the truth of the conclusion.

To further illustrate this, let’s move on from affirming the consequent fallacies and talk about a different fallacy: post hoc ergo propter hoc (or just “post hoc” for short). This is one of the most common fallacies that I encounter in debates about scientific topics, and it takes the following logical structure.

  • Premise 1: Q happened before U
  • Conclusion: Therefore, Q caused U

The problem with that should be pretty obvious: the fact that one thing happened before another doesn’t mean that one caused the other. In other words, the conclusion does not follow necessarily from the premise. We can easily illustrate this with simple examples.

  • Premise 1: I performed a sacrifice, then it rained
  • Conclusion: Therefore, my sacrifice caused the rain

Or

  • Premise 1: I read a book, then had a heart attack
  • Conclusion: Therefore, reading the book caused the heart attack

Do you see how that works (or, rather, doesn’t work)? The fact that one thing happened before another does not lead to the conclusion that there is a causal relationship. The logical structure is invalid, and any arguments containing this structure (i.e., containing a post hoc fallacy) must be rejected. On a side note, this is a fundamental reason why anecdotes are worthless as evidence of causation. The fact that you got better after taking something doesn’t mean that it worked, and the fact that you had an adverse event after taking something doesn’t mean that the treatment caused the event. Both of those arguments contain this structure (i.e., they are post hoc fallacies), and, as such, they are not valid, and the conclusion does not follow necessarily from the premise.
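The anecdote problem can be made vivid with a small simulation of my own (the event names and probabilities are made-up placeholders): even when two event streams are generated completely independently, “the treatment, then the recovery” sequences still occur purely by chance.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible
days = 365
# Two *independent* daily event streams with hypothetical probabilities:
p_treatment, p_recovery = 0.1, 0.1

treatment_days = {d for d in range(days) if random.random() < p_treatment}
recovery_days = {d for d in range(days) if random.random() < p_recovery}

# Count recoveries that fall within 3 days after some treatment. The two
# processes share no causal link by construction, yet "B followed A"
# pairings still show up by chance.
coincidences = sum(
    1 for r in recovery_days
    if any(t <= r <= t + 3 for t in treatment_days)
)
print(coincidences)  # a nonzero count, despite zero causation
```

Since the simulation contains no causal link at all, every one of those “it worked!” sequences is a pure coincidence, which is exactly what the post hoc fallacy ignores.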

There are lots of other examples of this overarching type of fallacy, such as denying the antecedent, correlation fallacies, guilt by association, arguments from ignorance, etc., but they all have the same problem. Namely, they are invalid because they set up a logical structure in which the conclusion does not follow necessarily from the premises.

The fallacies of untrue premises

Another major “group” of fallacies works by either implicitly or explicitly making an untrue premise. The problem here should be obvious: if an argument relies on an untrue claim, then the argument must be rejected (i.e., it is not sound). As before, an easy way to test for this problem is to see if you can find any examples in which the argument doesn’t work.

Note: these groupings of fallacies are not officially recognized. They are just groupings that I personally find to be useful when thinking about fallacies and how/why they work (or don’t work, as the case may be).

Let me explain what I mean by using one of the most common variants of these fallacies: the appeal to nature fallacy. This fallacy occurs whenever someone asserts that something is good/useful/healthy because it is natural or that something is bad/useless/unhealthy because it is unnatural. We can set this argument up in the following way.

  • Premise 1: X is natural
  • Conclusion: Therefore, X is good

That obviously doesn’t work, however, because there are plenty of true things that we can substitute for premise 1 that clearly result in false conclusions. For example:

  • Premise 1: The plague is natural
  • Conclusion: Therefore, the plague is good

Now, you could stop right there, and call this another variant of the non-sequitur fallacy, and you wouldn’t be wrong. This structure, as I have presented it, clearly is invalid because the conclusion does not follow from the premise. However, I think that there is a more useful way to think about this fallacy and others like it. Namely, this fallacy has an assumed premise that is false. It assumes that everything natural is good. Thus, there is really an implicit second premise.

  • Premise 1: X is natural
  • Premise 2: Everything natural is good
  • Conclusion: Therefore, X is good

That second premise is, however, clearly false, and as a result, the argument fails (i.e., it’s not sound). Importantly, that premise (or some variant thereof, including the inverse “everything unnatural is bad”) is present in all appeal to nature fallacies. Thus, anytime that this fallacy is present, the argument must be rejected, because it inherently assumes an untrue premise.

There are many other “appeal to” fallacies, and they all have the same basic structure and problem. For example, appeal to authority fallacies occur when you say that something is true because of the person who says that it is true. When you do that, however, you are inherently invoking the premise that the person in question is infallible, which is clearly false. Other examples include appeals to popularity (which assume that anything popular is good/right), appeals to antiquity (which assume that anything old is good/right), appeals to tradition (which assume that anything traditional is good/right), etc. (note: the one exception to this structure is the appeal to emotion fallacy, which simply makes an argument based on emotions, rather than facts or logic).

Note: You could also apply my “implicit untrue premise” explanation to some of the non-sequitur fallacies that I described earlier. For example, you could say that post hoc ergo propter hoc fallacies include the assumed premise that if Q happens before U, then Q caused U. There is nothing wrong with that way of conceptualizing those fallacies, and you are welcome to use it; I just personally find that explanation to be more complicated when the premise isn’t as simple as “everything natural is good.” You can, however, think of these fallacies either way: as having an implicit and untrue premise, or as having an invalid structure. I don’t care which you use, as long as you understand the concepts.

Another common fallacy is much less subtle and directly states untrue premises. I am, of course, referring to the straw man fallacy. This occurs whenever you attack a weakened or misrepresented version of your opponent’s argument, then claim to have defeated their actual view. In other words, you say, “My opponent believes X, and X is wrong for reasons Y” when, in reality, X is a distortion or misrepresentation of what your opponent believes. Thus, your first premise is false (a related variant is the “appeal to extremes,” which misrepresents an argument by exaggerating it; note that reductio ad absurdum itself, when applied to what your opponent actually said, is a legitimate proof technique rather than a fallacy).

Fallacies of the false dilemma are yet another example of fallacies that operate via untrue premises. These take the form of “Either X or Y is true; X is false; therefore Y is true.” This sounds great until you realize that the first premise is false, and there was actually a third option (Z) that wasn’t stated.

Detecting logical fallacies

Finally, I want to briefly talk about some tools for detecting whether a logical fallacy has been committed. Obviously, your best bet is to study the different types of fallacies and learn how each of them works. I have compiled a list of common fallacies to help with that, as have many other sites (e.g., Internet Encyclopedia of Philosophy [this is probably the most comprehensive one], Skeptical Raptor, Your Logical Fallacy Is, and many others). Let’s assume, however, that you don’t have time for that, you can’t be bothered, or maybe you have studied them but still struggle with particular arguments (don’t worry, that happens to all of us). Fortunately, there are some simple things that you can do.

First, I strongly recommend that you practice breaking an argument down into a syllogism, like I have done throughout this post (start with the actual facts in the argument). Often, when you do that, the problems will jump right out at you. If nothing immediately jumps out at you, however, then try replacing the facts with letters (again, like I have done throughout). Then, look carefully at that structure and see if it is valid. See if the conclusion has to follow from those premises, and see if there are any implicit premises that need to be added. If, at that point, it is clear either that the conclusion does not follow necessarily from the premises or that there is an implicit and untrue premise, then you are done. The argument is flawed, and you should reject it. If neither of those is obvious, then move on to the next tool.

The second tool is simply to try to find examples where the logical structure of the argument fails. Use the syllogism that you constructed before, but this time, make actual premises that are true but unrelated to the topic of debate (like I did by using a sacrifice to show that post hoc fallacies were invalid). If you can find any examples (hypothetical or actual) where the premises are true, but the conclusion is clearly false, then you have just demonstrated that the logical structure is invalid (assuming that you were careful and did not alter the structure, otherwise you’ve committed a straw man fallacy). This is a very useful tactic that you should get in the habit of using (I explained it in more detail here).
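This second tool is mechanical enough to automate for simple forms. Here is a hypothetical Python sketch of mine that randomly generates set-ups satisfying the premises of the affirming-the-consequent form (“All X are Y; Z is Y; therefore Z is X”) and hunts for one where the conclusion fails:

```python
import random

def find_counterexample(trials=1000):
    """Build random scenarios where both premises hold by construction,
    then look for one where the conclusion 'Z is X' is false."""
    universe = list(range(20))
    for _ in range(trials):
        x = set(random.sample(universe, 5))
        y = x | set(random.sample(universe, 5))  # guarantees "all X are Y"
        z = random.choice(list(y))               # guarantees "Z is Y"
        if z not in x:                           # conclusion "Z is X" fails
            return x, y, z
    return None  # none found (does not happen in practice for this form)

print(find_counterexample() is not None)  # True: the form is invalid
```

One hit is all it takes: a single scenario with true premises and a false conclusion proves the structure invalid, no matter how many other scenarios happen to work out.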

Although those two tools are useful, unfortunately, they aren’t all-encompassing. There are many other types of fallacies that I have not covered here because they are more specialized and difficult to generalize. Many of these are actually errors in debate tactics more than errors in reasoning. For example, a red herring fallacy occurs when, in a debate, you ignore your opponent’s argument/question and go off on an irrelevant side tangent in order to dodge a problem that they pointed out (politicians are masters of this). This type of fallacy is much harder to detect via a simple key like what I have presented, because there is no way to really construct a syllogism. It’s not a proper argument. Rather, it is a means of avoiding an argument. Similarly, for both straw man fallacies and false dilemma fallacies, you need to have enough knowledge on the topic at hand to tell that a false premise has been presented. That is the only way to detect them. So, although the tools that I have presented are useful and work in many situations, there really is no substitute for actually studying fallacies and becoming familiar with them.

Conclusion

Obviously, this post has been far from exhaustive, and there are many other fallacies (and even types of fallacies) that I didn’t address. However, this should give you a basic understanding of why fallacies are a problem, as well as some tools for detecting them. Anytime that a fallacy is present, the argument must be rejected, because you cannot be confident that the conclusion is actually supported by that argument. Thus, you should be mindful of logical fallacies and strive to avoid them in your arguments and views. Further, if someone points out that you have committed a fallacy, take that accusation seriously and look closely at their claim to see if it is correct. No one is immune to these flaws in reasoning, but there is no excuse for ignoring them once they have been pointed out to you.

 Note: It is worth emphasizing that when an argument contains a logical fallacy you must reject the argument not the conclusion (rejecting the conclusion rather than the argument is actually a fallacy known as the fallacy fallacy [that’s not a typo]). It is entirely possible to have an invalid argument, but a true conclusion. In other words, your conclusion may be true, but you cannot use that particular argument to support it, and it must be supported by other lines of evidence/reasoning.

Posted in Rules of Logic | Tagged , , , , , , , , , , | 17 Comments

Vaccine injuries and confirmation biases

Humans are remarkably bad at accurately discerning patterns. All of us (skeptics included) are prone to confirmation biases and logical flaws in reasoning. We don’t like to be wrong, and, as a result, we tend to cling to things that appear to support our preconceived notions, while blindly ignoring things that discredit our views. Even worse, we tend to seek out others who share our views, and we enter echo chambers where we only hear from people who agree with us. This can happen for any of our views, whether they are views on science, politics, religion, etc., and it sets up a dangerous cycle, wherein we first settle on a position, then only accept information that agrees with that position, while simultaneously allowing that information to reinforce our original position. It’s a circle of ignorance that constantly bolsters our view, even if that view is totally incorrect. Again, this can happen to anyone on any topic, but in this post, I want to briefly use anti-vaccers and “vaccine injuries” as an instructive example of how this plays out.

Image via Things Anti-vaccers Say. Also, note that this whole line of reasoning is absurd. DNA doesn’t work the way that they are proposing.

If you ask people who oppose vaccines why they do so, you will likely receive a litany of “vaccine injuries” that they have personally seen or heard reported by others. Some of the more well-known of these include things like autism and SIDS, but the list generally doesn’t stop there. Depending on which anti-vaccer you ask, you may also hear that vaccines cause asthma, allergies, depression, violence, lupus, fibromyalgia, behavioral tics, shaken baby syndrome, and just about every other issue that you can think of. Indeed, anti-vaccers seem content to view almost everything as a vaccine injury (even homosexuality [again, depending on who you ask]). The Facebook page Things Anti-vaccers Say does a good job of documenting these assertions, and I will include several of their screenshots throughout this post as both examples and discussion points.

It’s important to pause here to note several things. First, to be clear, vaccines do have side effects (as do all real medical interventions), but those side effects are either extremely mild or extremely rare, and the known benefits far outweigh the risks. Further, most of the things that anti-vaccers blame vaccines for simply are not caused by vaccines. For example, the association between vaccines and autism has been well-studied, and those studies have consistently found that vaccines do not cause autism (details and sources here). Similarly, not only do vaccines not cause SIDS, but they may actually reduce the rate of SIDS (details and sources here [see #64]). This raises the obvious question: why do anti-vaccers see vaccine injuries everywhere they look?

See this post for sources and more details.

Part of the answer is simply a reliance on shoddy sources. Anti-vaccers frequently get their information from notoriously counterfactual websites like Natural News, Green Med Info, Info Wars, and a host of others. Further, they readily accept anecdotes as evidence, which is where the topic of “vaccine injuries” really comes into play. What I see happening over and over again is that one parent tells a tale of how their child received a vaccine then developed autism, started coughing, died from SIDS or shaken baby syndrome, etc., and everyone in the anti-vaccine community makes an astronomical and logically invalid leap to the conclusion that the vaccine was responsible. In reality, however, anecdotes cannot establish causation, and the fact that one event followed another does not mean that the first event caused the second one. Indeed, that line of reasoning is a logical fallacy known as post hoc ergo propter hoc. Things do sometimes just happen to occur together. For example, in a previous post, I looked at the rate of autism and the rate of vaccination and showed that, mathematically, we would expect there to be thousands of cases each year where the first signs of autism were noticed shortly after a vaccination just by chance! This is a major reason why anecdotes aren’t valid evidence: coincidences do happen. Further, using anecdotes for things like vaccines and autism is particularly absurd because of the overwhelming scientific evidence that vaccines do not cause autism. You don’t get to reject a scientific study based on an anecdote. That isn’t how science works.
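To see why such coincidences are expected at scale, here is a back-of-envelope calculation in Python. Every number below is a hypothetical placeholder of my own, not a figure from the post being referenced; the point is only that multiplying plausible rates together yields chance pairings in the thousands.

```python
# Illustrative back-of-envelope only -- all numbers are hypothetical
# placeholders, not real statistics.
births_per_year = 4_000_000   # hypothetical birth cohort size
autism_rate = 1 / 68          # hypothetical autism prevalence
vaccination_coverage = 0.9    # hypothetical fraction of children vaccinated
window_fraction = 1 / 26      # hypothetical chance that first signs happen
                              # to appear in a given 2-week window of a year

expected_coincidences = (
    births_per_year * autism_rate * vaccination_coverage * window_fraction
)
print(round(expected_coincidences))  # roughly 2000 chance pairings per year
```

Even under these rough assumptions, thousands of “vaccine, then first signs of autism” sequences are expected every year with no causal link whatsoever, which is why anecdotes of that form carry no evidential weight.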

Note: Many anti-vaccers also cite things like VAERS, package inserts, and the VICP, but those are also unacceptable sources for establishing causation, as I explained here.

The explanation that I have just offered seems good at first, but it really just raises more questions than it answers. Most importantly, why are anti-vaccers willing to accept shoddy evidence? This is where confirmation biases come in, and I’d like to use the screenshot to the right as an example. When most people hear a report like this of a “healthy 100-year-old woman dying in six months” (paraphrasing), the cause of death seems pretty obvious: she was 100 years old! It is well established that the very elderly have weakened immune systems and a reduced ability to recover from illnesses and injuries, and even someone who is in good health (for a 100-year-old) can succumb very quickly to even a minor ailment. Nevertheless, as you can see in the comments, to some anti-vaccers, this had to be a vaccine injury! She received a vaccine (or so they assume) then died a few months later, so the vaccine had to be the cause! (See what I mean about anecdotes being worthless as evidence of causation?)

I can, of course, give lots of other examples of this, and some are even worse. For example, when faced with evidence that homeopathic teething tablets were potentially causing sickness and even deaths in young children, the anti-vaccer on the left chose to blame vaccines instead. Think about how incredible this is. Here, you have the FDA saying that they have evidence showing that this product is likely dangerous, but instead of listening to the FDA, this anti-vaccer shifted the blame to vaccines despite a complete and total lack of evidence to support that claim! They were so convinced of the evil of vaccines that they were willing to blindly overlook an actual danger and place the blame on vaccines instead.

So, what is going on here? The answer is confirmation biases and, more generally, motivated reasoning. Anti-vaccers believe strongly that vaccines are dangerous, and that belief causes them to see vaccine injuries everywhere. That’s what confirmation biases do: they make you latch onto anything that seems to support your position, even if the evidence is shoddy. Thus, when most of us hear about something like a 100-year-old woman who “suddenly died,” we realize that there are lots of potential causes that are far more plausible than a vaccine, but if you are a committed anti-vaccer, you don’t see those other causes, because they don’t fit your mental narrative. All that you see is the vaccine. Similarly, for the homeopathic anti-vaccer, accepting that a homeopathic product was dangerous didn’t fit their mental narrative, but thinking that vaccines were at fault did fit their mental narrative. So, despite the evidence suggesting that the homeopathic product was dangerous and despite the total lack of evidence to suggest that vaccines were the cause, they blamed the vaccines. That is how powerful confirmation biases and motivated reasoning can be. They can make you think that something dangerous is actually safe and that something safe is actually dangerous.

confirmation biases

Although this image is targeted specifically at anti-vaccers, this circle of ignorance can occur for anyone on any topic. Even if your conclusion is correct, it may bias you and cause you to both accept shoddy sources and use those sources to bolster your position.

Now that we have gone over a few examples, let’s turn to what I think is the most dangerous aspect of confirmation biases: their ability to reinforce a belief. Every time that an anti-vaccer sees a non-existent vaccine injury, it reinforces their belief that vaccines are dangerous, which is obviously a big problem. This goes back to the circle of ignorance that I mentioned earlier. For one reason or another, anti-vaccers decided that vaccines are dangerous. Because of that belief, they see “vaccine injuries” that aren’t really there, and those “vaccine injuries” then make them even more convinced that vaccines are dangerous. It is a never-ending circle that sinks them deeper and deeper into ignorance and conspiracy theories. Further, this situation is made even worse by online groups where anti-vaccers can see the anecdotes of other anti-vaccers. Here again, their confirmation biases cause them to blindly accept these stories as evidence against vaccines, even though it is illogical to do so.

At this point, you may be ready to laugh at anti-vaccers for being stupid, but you shouldn’t be. As I have previously argued, most people who reject science aren’t stupid or crazy, they are just misinformed and have succumbed to their confirmation biases and motivated reasoning. Importantly, this is not a problem that is limited to science-deniers. It is a problem that affects everyone. All of us are prone to confirmation biases, and we all should constantly check our views to make sure that we are following facts and logic rather than following what we want to be true. We need to carefully consider evidence rather than blindly latching onto the things that fit our mental narratives. Perhaps most importantly, we should always consider the possibility that we are wrong. Being willing to be wrong is (in my opinion) the only way to break the circle of ignorance. Unless you are actually willing to be wrong, you will never be able to accept evidence that is contrary to your beliefs, and it is important that you are capable of doing that. To quote an excellent Dr. Who episode, “the circle must be broken.”

Note: Please do not use the VAERS, package inserts, or VICP as evidence unless you have read this post.

Note: You may be wondering how a homeopathic tablet could be dangerous since they are usually just water, so here is the story in a nutshell. This product is supposed to contain very tiny amounts of belladonna, a substance which is quite harmful in higher doses (as is true of many active ingredients in homeopathy). However, the dose makes the poison, so when diluted to extremely low levels (as homeopathic products generally are) this product is harmless (it’s also useless, but that’s beside the point). However, the FDA kept receiving numerous reports of children becoming sick or even dying after taking the tablets, and the symptoms matched the known symptoms of belladonna poisoning. So, the FDA started investigating, and they found that the amount of belladonna in the tablets was inconsistent, which is a huge problem because, as I said, this chemical is toxic at higher doses. So, it is extremely important that it only be present in very low doses. Thus, although this does not conclusively demonstrate that the homeopathic tablets were at fault, the combination of inconsistent doses of a potentially toxic chemical and the fact that children who took it were experiencing the symptoms of receiving a high dose of that chemical certainly makes a good case against the homeopathic product. Again, a more proper study is necessary for a conclusive answer, but the evidence is certainly strong enough that parents should take it seriously, and blindly rejecting that evidence in favor of arbitrarily blaming vaccines is insane. Finally, please note that this is a very different situation from the anecdotes that parents use against vaccines, because those anecdotes are all over the map, don’t match the symptoms of vaccine ingredients at the doses at which they are present, and frequently fly in the face of actual studies. In most cases, there is no plausible mechanism through which the vaccine could be at fault.


Genetics provide powerful evidence of evolution

Many people are under the false impression that evolution is just a guess or a belief, when in reality, it is one of the most well-supported concepts in all of science. The evidence for it is overwhelming and comes from many different disciplines such as paleontology, comparative anatomy, biogeography, and perhaps most significantly, genetics. Indeed, modern genetic tools have allowed us to repeatedly test evolution’s predictions, and those predictions have consistently come true. Therefore, I am going to explain in simple terms what the genetic evidence is and why it is so compelling. As I will show, the evidence perfectly matches the predictions that the theory of evolution made decades before we could test those predictions. Further, the patterns do not make sense if our modern organisms were specially created, because there is no reason why a creator would have had to make life with these patterns. In other words, if you want to say that God created our modern organisms, then you are left in the awkward position of arguing that out of an infinite range of possibilities available to him, God chose to create life in the one and only way that would be consistent with the predictions of evolution.

Note: Throughout this post I will use the term “creationist” to refer to people who deny evolution. There are many sub-categories within that, and there are also Christians who accept both evolution and the Bible (theistic evolutionists). I am not attacking Christianity or religion here. Rather, I am simply explaining why the evidence overwhelmingly supports evolution and refutes creationism.

 Note: I am going to talk about relationships based on genetic similarities and shared genetic traits throughout this post, but please realize that I am doing this for simplicity. Actual phylogenetic studies employ rigorous statistical analyses to look not just at the proportion of shared DNA, but also at parsimony and various other factors. So I am being simplistic to avoid losing anyone, but the actual science is more complex, and the more that you understand it, the clearer it becomes that evolution is correct.

 

The basics that everyone agrees on

To start this post, I need to explain the most basic concepts of how we use genetics to assign evolutionary relationships, and the easiest way to do that is with human families. Imagine that you gave me blood samples from yourself and five relatives, all of whom were in your generation. I then extracted and sequenced the DNA from those samples, and I found that there was one person whose genetic code was very similar to yours but more different from the other four samples. Thus, you two share a substantial portion of the variable regions of your DNA. From that, I would conclude that you two share a more recent common ancestor with each other than with your other relatives. In this case, that ancestor would probably be your parents (i.e., you’re probably siblings). This should make good sense. You obviously got your DNA from your parents, as did your sibling. Thus, since you both got your DNA from the same source, we naturally expect your genetic code to be more similar to your sibling’s than to the codes of people who have different parents.

As I look at the data further, I also find another pair of individuals who are more similar to each other than they are to either you or your sibling, thus suggesting that they share a recent common ancestor that you do not have. However, both of them are more similar to you and your sibling than they are to the final two relatives. This would suggest that you and they share a recent common ancestor that is not shared by the final two relatives (e.g., you’re cousins who share grandparents). Finally, the last two individuals are again closely related to each other, but they are more distant from the rest of you. This would suggest that the six of you share a more distant common ancestor (perhaps you share a great grandparent).

This is an example of a cladogram (aka phylogenetic tree) showing the relationships between you and your five relatives in my example.


As you can see, we can use those genetic data to reconstruct your family tree (what we like to call your phylogeny in science), and we can illustrate it using a phylogenetic tree or cladogram like the one on the right. On these diagrams, vertical lines represent common ancestors. Thus, you can see that you and your sibling share a recent ancestor (your parents), and you, your sibling, and your cousins share an ancestor slightly further back (your grandparents), and all six of you share an ancestor even further back (your great grandparents). Again, this should all make good sense when you think about how DNA is passed. All six of you share a certain amount of DNA because you all inherited it from your great grandparents. After that, however, things began to diverge. One of your great grandparents’ children went on to become your grandparent, while another one went on to produce your more distant relatives. Thus, you, your sibling, and your cousins are more alike because you all received DNA from the same source (your grandparent). Then, one of your grandparents’ children went on to become your parents, while another became your aunt/uncle and produced your cousins. Does that make sense?

I want to pause here for a moment to make a crucially important point. In my example, we did not need actual DNA from your ancestors. Rather, we were able to infer their existence from the patterns that we saw in the DNA from the current generation. This is a very important strength of genetic analyses: we can use data from the current generation to infer the existence of past ancestors.
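The inference in this family example can be sketched as a toy clustering procedure: repeatedly merge the two most genetically similar groups, and treat each merge as an inferred common ancestor. The names and distance values below are invented for illustration, and (as noted in the note above) real phylogenetic studies use far more rigorous statistical methods than this:

```python
# Toy reconstruction of the family tree from pairwise genetic distances.
# Distance values are invented; each merge corresponds to an inferred
# common ancestor that we have no DNA sample from.

people = ["you", "sibling", "cousinA", "cousinB", "relativeA", "relativeB"]

# Invented pairwise distances (smaller = more genetically similar).
D = {}
for a in people:
    for b in people:
        if a != b:
            D[(a, b)] = 5  # default: only share great-grandparents
for a, b in [("you", "sibling"), ("cousinA", "cousinB"), ("relativeA", "relativeB")]:
    D[(a, b)] = D[(b, a)] = 1  # share parents
for a in ("you", "sibling"):
    for b in ("cousinA", "cousinB"):
        D[(a, b)] = D[(b, a)] = 3  # share grandparents

def leaves(c):
    """Flatten a nested-tuple cluster into its individual people."""
    return [c] if isinstance(c, str) else leaves(c[0]) + leaves(c[1])

def cluster_dist(c1, c2):
    """Average distance between all cross-cluster pairs (average linkage)."""
    pairs = [(a, b) for a in leaves(c1) for b in leaves(c2)]
    return sum(D[p] for p in pairs) / len(pairs)

clusters = list(people)
while len(clusters) > 1:
    i, j = min(
        ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
        key=lambda ij: cluster_dist(clusters[ij[0]], clusters[ij[1]]),
    )
    merged = (clusters[i], clusters[j])  # an inferred common ancestor
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

tree = clusters[0]
print(tree)
```

The final nested tuple mirrors the cladogram: you and your sibling group first, then the cousins join you, and the two distant relatives attach furthest back, all without ever sampling an ancestor’s DNA.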

 

Broadening the scope

Everything that I have said thus far is universally accepted. No one disagrees that these genetic tools can determine family relationships like this, and even the most die-hard creationist would have no problem with what I have said. However, the power of these tools doesn’t stop there. We can also use them for an entire species. For example, we can trace the ancestry of all humans back to a common source. Here again, creationists have no problems. They agree that these methods are reliably showing true relationships, and that it’s not simply a case of some people happening to have similar DNA. They agree that the similarities are shared by descent and indicate common ancestry (i.e., they accept that these methods can reliably identify ancestors that we do not have DNA samples from). In other words, they agree that these trees are showing real, evolutionary relationships among people (they would argue that the tree goes back to Noah and his family as the common ancestor).

We can, however, go even further than just a species: we can also use these methods for species with many breeds (such as dogs) or even for entire families of animals (in the scientific classification of organisms, family is the third most specific classification, followed by genus and species). We can, for example, show that all species of ducks (family Anatidae) descended from a common ancestor. We can also show that all tree frogs (family Hylidae) share a common ancestor, all pythons (family Pythonidae) share a common ancestor, all kangaroos (family Macropodidae) share a common ancestor, etc. Again, creationists are OK with this. At the family level, they agree that these methods are showing true relationships. You see, young-earth creationists argue that on Noah’s ark, Noah did not take two of each species, but rather took two of each “kind,” which they arbitrarily define as being roughly equivalent to scientists’ term “family.” Thus, they agree with these data, because they think that all modern ducks descended from a single pair of ducks on the ark, all modern tree frogs descended from a pair of tree frogs on the ark, etc. I have even seen some of them go as far as saying that the genetic evidence within families is evidence of creationism/Noah’s Ark (that is a logical fallacy known as affirming the consequent).

Cladogram of dog breeds. Figure 1a from vonHoldt et al. 2010.


Creationists disagree, however, the instant that we start extending beyond the family level. Take marsupials (pouched mammals) for example. Using these genetic techniques, we can tell that many carnivorous marsupials, like Tasmanian devils and quolls, are all in a single family (Dasyuridae) and share a common ancestor. Creationists are fine with that, and agree that the methods are showing true relationships. However, we can use exactly the same methods to broaden the scope just a little bit further and show that members of Dasyuridae are more closely related to the Myrmecobiidae family than they are to any other living marsupials. Thus, we can tell that Dasyuridae and Myrmecobiidae evolved from a common ancestor, and we group them together into the order Dasyuromorphia (order is one step broader than family). At that point, creationists suddenly disagree. Suddenly they insist that these methods are just showing similarities, not true relationships. They are even more upset when we use exactly the same techniques to show that the order Dasyuromorphia evolved from the same common ancestors as the orders Notoryctemorphia and Peramelemorphia (Gallus et al. 2015). Further, we can keep going with this until eventually we have a cladogram for all marsupials that shows that all of them share a common ancestor and are more related to each other than they are to other mammals (just like you are more related to your sibling than to your cousins; Cardillo et al. 2004).

A phylogenetic tree of several marsupial families. Figure 7 from Cardillo et al. 2004.


We don’t have to stop there, however. We can continue to use the same methods to show that all mammals share a common ancestor, all animals share a common ancestor, and ultimately that all life on planet earth evolved from a common ancestor. Creationists, of course, object to this in the strongest possible terms. They insist that these genetic similarities aren’t actually showing real relationships, and they are adamant that the fact that two groups share more DNA with each other than with some other group doesn’t indicate that those two groups evolved from a common ancestor. As you can hopefully now see, however, that argument is logically inconsistent, because it is completely and totally arbitrary to say that these methods work within families, but don’t work for taxonomic levels higher than that. That reasoning is logically invalid and completely ignores the evidence. Look at the cladogram above, for example. It shows some of the relationships that I described in marsupials, and I have colored the parts that creationists agree with green and the parts that they disagree with red. As you can see, within each family, they accept quite a few common ancestors. They agree that these methods can reliably show ancestry, yet as soon as we move beyond the family level, they say that the methods don’t actually show common ancestry. They agree, for example, that all members of the genera Dasyurus, Neophascogale, and Phascolosorex descended from a common ancestor, yet they disagree that the families Dasyuridae and Myrmecobiidae descended from a common ancestor. That belief is completely arbitrary and has no scientific basis or logical credibility. To put this another way, look at the cladogram that I showed earlier for dog breeds (which creationists have no problems with), then look at the cladogram below for all life on planet earth, and tell me what the difference is. Explain to me why we should accept that these methods work for dogs but arbitrarily believe that they don’t work for higher taxonomic levels.

Phylogenetic tree of life on planet earth. Via the University of Texas.


Extraordinary predictions

I want to take a minute here to try to impress on you just how extraordinary these genetic results are. Scientific theories are often judged by their predictive power. In other words, good theories are ones that can accurately predict the results of future experiments, and the more extreme the predictions, the better. In this case, the theory of evolution made the astounding prediction that we should see these genetic patterns decades before we actually had the ability to test them.

When Darwin first proposed the theory of evolution, genetics were unknown. No one knew what DNA was or how traits were inherited (see note). In fact, Darwin himself was totally wrong about how inheritance worked (he subscribed to the “blending” hypothesis wherein the traits of two parents blended together). Nevertheless, despite being wrong about the mechanism, it was clear that there had to be some way that the information for traits was passed from parents to offspring, and if evolution was true, then scientists realized that the information should record evolutionary history. In other words, if evolution was true, it should be possible to use that information in exactly the way that I described to show that all life traces back to a single common ancestor.

That was already an extreme prediction, but it didn’t stop there. You see, it wasn’t enough for there to be a pattern. Rather, the pattern had to match overarching morphological patterns. In other words, it had to show that all of the parrot families share a common ancestor, all frogs share a common ancestor, all marsupials share a common ancestor, etc., and that is exactly what we find. Further, this pattern had to match the fossil record, which is where things get even more extraordinary. You see, it may make intuitive sense to you to expect that all frogs would be genetically similar, even if they were specially created (more on that later), but why would genetics show that modern amphibians and modern reptiles share a common ancestor? That’s not something that you would expect under creationism, but it is what evolution predicted, because the fossil record clearly showed that both modern amphibians and reptiles evolved from ancient amphibians. Thus, evolution predicted that modern amphibians and reptiles should share a common ancestor. Similarly, the fossil record showed that amphibians evolved from fish, and that both reptiles and birds evolved from amphibians. Therefore, if those fossilized patterns are correct, we should see the same patterns in DNA, and we do! Think about how amazing that is. Evolution predicted the existence of an extremely precise pattern long before we could test that prediction. If evolution isn’t actually true, then you have to say that the patterns that we see in morphology, the fossil record, and genetics just happen to perfectly match up. That’s insane! Further, let’s be clear that I am only naming a handful of the predictions here. They also extend to all plants, bacteria, archaea, invertebrates, and other chordates. We are talking about thousands of predictions that evolution nailed! That is extremely strong evidence that evolution is correct. To put that another way, what are the odds that evolution would have gotten all of those predictions right if evolution wasn’t actually true?
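To make those odds concrete, consider how many distinct trees are even possible. For n species, the number of possible rooted, bifurcating trees is the double factorial (2n−3)!!, a standard result in phylogenetics, and it grows explosively:

```python
# Number of distinct rooted, bifurcating trees for n taxa: (2n-3)!!
# (a standard phylogenetics result). Even modest n gives enormous counts,
# which is why independent datasets agreeing on one tree is so striking.

def rooted_trees(n):
    """Double factorial (2n-3)!! = 1 * 3 * 5 * ... * (2n-3), for n >= 2."""
    count = 1
    for k in range(3, 2 * n - 2, 2):
        count *= k
    return count

for n in (4, 10, 20):
    print(n, rooted_trees(n))
```

With only 10 species there are already over 34 million possible trees, and with 20 there are roughly 8 × 10^21, yet morphology, fossils, and DNA keep converging on the same one.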

Crocodiles are more closely related to birds than to other reptiles. Image via Green et al. 2014.


To really drive this home, let’s talk more about birds for a minute, because their story is incredible. As I explained in a previous post, there is a ton of fossil evidence showing that birds evolved from dinosaurs. We have lots of transitional fossils showing that this occurred. Further, the fossil record shows the existence of a large phylogenetic group known as archosaurs, which included both ancient crocodilians and the group of dinosaurs that evolved into birds (more details at the University of California). This tells us that crocodiles and birds should actually be each other’s closest living relatives, and it leads to an absolutely incredible prediction. Genetically, not only should birds fall out as reptiles, but crocodiles should actually be more closely related to birds than they are to other reptiles. That is an amazing prediction that makes no sense under creationism. Why would God give crocodiles a genetic code that shares more in common with birds than other reptiles? As you might have guessed, however, this prediction totally came true! Genetically, birds are actually reptiles, and crocodiles share more DNA with birds than with other reptiles (Green et al. 2014)! Again, this is because birds and crocs share a common ancestor (just as you and your sibling are genetically similar because of a common ancestor). If you stop and think about this for a second, it is mind-blowing. Genetically, crocodiles are more similar to birds than they are to other reptiles. If that doesn’t make you question everything, then I don’t know what will.

Note: Technically, Gregor Mendel (who discovered how genetic inheritance works) was Darwin’s contemporary, but Mendel’s work was largely unknown until well after his death.

 

Functionally arbitrary similarities

At this point, you might be tempted to think that these genetic patterns are there by necessity. For example, you might think that all frogs have similar genetic codes simply because they all have to have similar codes in order to have the characteristics of a frog. Thus, you might think that these genetic patterns are functionally necessary and would have to exist even if modern organisms were specially created. There are, however, numerous problems with that line of reasoning.

First, that argument would only have the potential to apply to the patterns within fairly narrow taxonomic units, and it would not explain the overarching patterns. In other words, the fossil record tells us that modern amphibians evolved from ancient fish, modern reptiles evolved from ancient amphibians, modern mammals evolved from ancient reptiles, birds and crocodiles both evolved from an ancient archosaur (reptile), etc. As I have already explained, genetics show us exactly the same progression, and there is no reason why that pattern had to exist. An all-powerful being could easily have created birds, reptiles, amphibians, mammals, etc. without making this pattern. Indeed, he could have created life such that each “kind” was unique and did not show any patterns of relatedness to the other “kinds.” To put this another way, why did God make crocodiles more similar to birds than to turtles?

Second, even within more narrow taxonomic groups (defined by morphology in this case), there is still actually no need for the level of genetic similarities that we observe. As I will explain, the genetic code is remarkably redundant and pliable, and you can have two very similar organisms with very different genetic codes and evolutionary histories (conversely, you can also have two very different organisms with comparatively similar genetic codes, think about crocodiles and birds again). I will explain more details about how that works in a moment, but let me give you the big picture first. There is a process known as “convergent evolution” wherein similar habitats and life histories cause two distantly related species to evolve to have similar morphological or physiological traits, but because they evolved independently, their genetics are quite different.

A sugar glider (left) and flying squirrel (right). Despite appearing similar, they are actually very distantly related, and each species evolved to be similar via convergent evolution.


Sugar gliders (Petaurus breviceps) and northern flying squirrels (Glaucomys sabrinus) provide a really nice example of convergent evolution. As you can see in the image, they look extremely similar, and they both possess remarkable adaptations such as a large flap of skin that they can use to glide, a large bushy tail to steer with, large forward-set eyes for good night vision, etc. If you didn’t know any better, you would probably think that they are close relatives, but you’d be very wrong. You see, sugar gliders are marsupials, whereas flying squirrels are placental mammals. So genetically, flying squirrels are far closer to you and me than to a sugar glider, and sugar gliders are far more closely related to kangaroos than to flying squirrels. Nevertheless, despite having very different genetic codes, they have very similar morphology (with regard to the adaptations for gliding) because they both adapted to similar habitats/lifestyles. There are tons of other examples like this that I could give, but hopefully you see my point: there are often multiple ways to achieve the same basic outcome, and you don’t need to have similar genetics to be morphologically similar.

Note: Lest anyone try to say that this example actually discredits evolution because it shows that morphology and genetics don’t always match up, there are other traits that distinguished them long before genetics (e.g., the pouch), so this was not a case of morphology and genetics disagreeing. Nevertheless, my point stands that both species evolved many of the same traits in different ways, and different genetic codes can achieve the same outcome.

So why is it that the genetic code is so malleable? Why can organisms with different genes evolve the same basic structures? To answer that, you need to understand how DNA works. It consists of four bases (represented as A, T, C, and G), and those bases are arranged in groups of three (codons), with each group coding for an amino acid. The arrangement of those amino acids then determines what proteins are formed. Thus, a string of DNA codes for a series of amino acids, which in turn form a protein. That code is, however, highly redundant, and several different groups of bases can form the same amino acid (and therefore the same protein). For example, the amino acid proline can be formed by the codes CCT, CCC, CCA, or CCG. They all form the same amino acid and, therefore, the same subsequent proteins. Indeed, most amino acids can be formed by at least two different sets of bases. Therefore, because proteins are formed from strings of numerous amino acids, you can have tons of organisms all producing the same protein, but doing so via different genetic codes (there is also redundancy in the proteins themselves, in that you can swap some amino acids and still get the same basic protein; this does have an effect on the function of the protein, but not a significant enough one to make creationists’ argument persuasive).
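To make that redundancy concrete, here is a minimal sketch using a handful of codons from the standard genetic code (only a small subset of the full 64-codon table is included, but the assignments shown are the standard ones):

```python
# Minimal illustration of codon redundancy: two different DNA sequences
# translate into the identical protein. Only a few entries of the
# standard genetic code table are included here.

CODON_TABLE = {
    # proline: four synonymous codons
    "CCT": "Pro", "CCC": "Pro", "CCA": "Pro", "CCG": "Pro",
    # glycine: four synonymous codons
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    # leucine: two of its six synonymous codons
    "CTT": "Leu", "CTG": "Leu",
}

def translate(dna):
    """Translate a DNA coding sequence (length divisible by 3) into amino acids."""
    return [CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3)]

gene_a = "CCTGGACTT"   # codons CCT-GGA-CTT
gene_b = "CCGGGGCTG"   # different codons: CCG-GGG-CTG
print(translate(gene_a), translate(gene_b))  # same protein from different DNA
```

Two different sequences, one identical chain of amino acids. Scale that redundancy up across an entire genome, and it becomes clear that similar organisms do not need similar DNA.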

Additionally, large portions of the genomes of most organisms are what are referred to as “junk DNA” (Rands et al. 2014; ENCODE Project Consortium). Exactly what these regions are and what they do is still the subject of much debate, but it does appear that they are not actively coding nearly as much as regular DNA (if at all), and mutations in those regions are unlikely to have large impacts on organisms. Indeed, when you combine the presence of junk DNA and the redundancy in the genetic code, it turns out that for many species, most mutations are actually “neutral” and have no effect on the organism (Eyre-Walker et al. 2007).

The consequence of all of this is really important. It means that there can be a lot of variation in genetic codes without it affecting functional traits (or in some cases, with it only having minor effects). In other words, an omnipotent, all-powerful being could easily have designed two organisms that were nearly identical in morphology and physiology, but had extremely different genetic codes. To put that another way, as it turns out, it is not at all necessary for two species that look and behave like frogs to have similar DNA. To be clear, there certainly are conserved sections of DNA, and some sections of the genetic code are similar for functional reasons, but there is no reason why the similarities should consistently extend across the entire genome. Because of the redundancies in the genetic code, you could easily have two “frogs” with radically different genetics. Indeed, it would be entirely possible for an all-powerful, all-knowing God to make four identical “frogs,” one of which had protein sequences that matched those of birds, one of which had protein sequences that matched those of fish, one of which had protein sequences that matched those of reptiles, and one of which had protein sequences that matched those of mammals! Lest you think that I am pulling your leg, think about birds and crocodiles again.

 

“God did it”
I want to conclude this post by talking about the most common response that I get to all of this. More often than not, when I present this evidence to a creationist, I get the following reply, “well, those patterns are just the way that God created everything, and the common patterns exist because of a common creator, not because of a common ancestor.” There are, however, numerous problems with this response, so let me lay them out for you.

First, as I explained at length earlier, this response is logically inconsistent. If you agree that genetics show true relationships at the family level (as all creationists seem to), then you cannot arbitrarily say that they don’t work at higher levels. That is not valid reasoning.

Second, this response is what is known as an ad hoc fallacy. It is a logically invalid cop-out that is not falsifiable (thereby violating a key requirement for science) and would never be accepted by anyone who wasn’t already convinced that creationism is true. You might as well propose that Barney the dinosaur is actually a real magical dinosaur who created these patterns just to screw with us. Just like the “God did it” response, I technically can’t disprove that claim, but it is clearly not a rational argument.

Third, this response has serious logical problems because of the nature of the genetic code. There are several parts to this, but first I want to address the one that I haven’t talked about yet, and it is easiest to do that by way of example. Like most modern scientists, I have been forced to learn some computer coding, which I use to write codes for organizing and sorting data, running statistical models, simulating data, and even making fictional examples for this blog. I am, however, a pretty horrible programmer. My codes always work in the end, but they tend to be clunky, inelegant, and redundant. Further, frequently when I need to code something, I simply take an existing code and modify it. That saves me time, but it generally produces codes with irrelevant lines that are left-overs from the codes’ original functions, as well as unnecessarily complicated processes that would have been far simpler if I had started from scratch. In contrast, someone who knew what they were doing and built each code from scratch would be able to make codes that do exactly what mine do, but theirs would be very elegant and free of redundancies and irrelevant lines of code.

It may seem like I am off topic here, but computer codes are actually remarkably analogous to genetic codes. Zeros and ones tell computers what to do in much the same way that As, Ts, Cs, and Gs tell organisms what to do. Now, ask yourself this question: if all life was created by an omnipotent, omniscient God, would you expect elegant, well-written codes that were free of redundancies, or would you expect clunky, bulky codes that were hodgepodged together from existing codes and are full of redundancies and lines that no longer do anything? I would certainly expect the former, but what we find is the latter. The more that we examine organisms’ genetic codes, the clearer it becomes that they were made by randomly modifying existing codes, rather than writing new codes from scratch. That is why we end up with large non-functional (or barely functional) regions and codes that carry over from one group to the next. To put it simply, if God specially created modern organisms, then he is a terrible programmer.

This brings me to my final point, which is probably the most important one. As I have tried to make clear throughout this post, the genetic patterns that we see among organisms are exactly what evolution predicted at every level. The relationships and patterns within groups are exactly what evolution predicted, and the overarching patterns of relationships among groups are exactly what evolution predicted. We are talking about thousands of predictions that evolution consistently got right. Further, as explained earlier, these patterns don’t have to exist for us to have organisms that look and function like our modern organisms. An all-powerful, all-knowing being could easily have created modern organisms such that there was no pattern at all. He could have scrambled protein sequences such that, for example, some bird proteins matched frogs, others matched fish, others matched reptiles, others matched trees, etc. Alternatively, he could have made extremely inconsistent patterns. He could have made some birds appear to be related to reptiles, others to fish, others to amphibians, etc. He even could have made a consistent pattern, but one that didn’t match evolution’s predictions. For example, he could have given all birds protein sequences that most closely match fishes. Any of those patterns would have been absolutely devastating for evolution. Anything other than exactly the pattern that we see would have falsified our understanding of life on this planet.

My point here is simple: if you want to say that God created all life on planet earth, then what you have to say is this: God (who according to the Bible is a God of truth, not deception) had a nearly infinite number of options for how to create life, yet out of all of those options, he chose the one and only pattern that would confirm the theory of evolution. To put that another way, life looks like it evolved. You absolutely cannot say that the evidence doesn’t support evolution, because evolution’s predictions have consistently come true. You can choose to ignore the evidence, but you cannot deny that it perfectly matches evolution’s predictions. So, you are left with saying that life on planet earth looks exactly the way it would if it evolved, but it didn’t actually evolve; God just created it in the one and only way that would make it look like it evolved.

In closing, I would like to ask you a simple question. If you are going to write off these genetic patterns as “just similarities,” if you are going to ignore this overwhelming evidence and these astounding predictions, then what would convince you that evolution was true? If the fact that it accurately predicted the genetic patterns of all living things isn’t enough for you, then what would be? What would it take to convince you that you were wrong?

Note: Some creationists try to contest arguments like this by pointing to cases where scientists have disagreed about how two groups of animals are related. There are, however, numerous problems with that counterargument. First, in the modern genetic era, those debates usually only occur for pretty narrow taxonomic boundaries, while the overarching patterns remain undisputed. Second, those debates arise from one of two things: unclear morphology or unclear genetics. To put that another way, often what happens is that scientists are working with incomplete fossils, and it is difficult to use them to determine how things are related. Thus, disagreements arise not because evolution was wrong but simply because scientists are working with incomplete data sets that have been preserved for millions of years (convergent evolution can also sometimes confound things, even for living organisms). Other times, disagreements arise from using a limited number of genetic markers. Sometimes, genetic patterns are hard to decipher, particularly if you are only using a tiny portion of the genome, and these tools aren’t perfect, but as more and more studies are done using more and more markers, the picture becomes increasingly clear, and it overwhelmingly matches what we expect to see from evolution.


References

  • Cardillo et al. 2004. A species-level phylogenetic supertree of marsupials. Journal of Zoology 264:11–31.
  • ENCODE Project Consortium. 2012. An integrated encyclopedia of DNA elements in the human genome. Nature 489:57–74.
  • Eyre-Walker et al. 2007. The distribution of fitness effects of new mutations. Nature Reviews Genetics 8:610–618.
  • Gallus et al. 2015. Disentangling the relationship of the Australian marsupial orders using retrotransposon and evolutionary network analysis. Genome Biology and Evolution 4:985–992.
  • Green et al. 2014. Three crocodilian genomes reveal ancestral patterns of evolution among archosaurs. Science 346.
  • Rands et al. 2014. 8.2% of the human genome is constrained: variation in rates of turnover across functional element classes in the human lineage. PLoS Genetics 10:e1004525.
  • vonHoldt et al. 2010. Genome-wide SNP and haplotype analyses reveal a rich history underlying dog domestication. Nature 464:898–902.

 

Posted in Science of Evolution | 104 Comments

Basic Statistics Part 6: Confounding Factors and Experimental Design

The topic of confounding factors is extremely important for understanding experimental design and evaluating published papers. Nevertheless, confounding factors are poorly understood among the general public, and even professional scientists often fail to appropriately account for them, which results in junk science. Therefore, I want to briefly explain what they are and how to deal with them. You should then apply this whenever you are reading a scientific study or proposing that a particular study should be done.

Before I can explain confounding factors, I need to explain some fundamentals of experimental design. As a general rule, when you are designing an experiment, you want to have an experimental group and a control group. These two groups should be identical except that one group (the experimental group) should receive some form of treatment (what we call the experimental factor or experimental variable) while the other should either receive nothing or receive a placebo (depending on the exact type of study being conducted). In other words, the two groups should be totally identical except for the experimental variable. When that condition is met, you can then infer that differences between the two groups are being caused by the experimental variable. In other words, if the two groups are completely and totally identical in every way except for experimental variable, then any differences in your response variable (i.e., the thing that you are measuring) must be being caused by the experimental variable, because it is the only thing that differs between the two groups (see Note 1; note that you can also have multiple experimental groups testing different things simultaneously, but I will stick with two for now for the sake of simplicity).

This sounds all well and good, but the problem is that in reality, having two identical groups is almost never possible. This is where the topic of confounding factors comes in. If you Google a definition of “confounding factors,” you’ll basically find two lines of thought. One is that a confounding factor is a third variable that actually has a causal effect. The second is that a confounding factor is simply a third variable that might have a causal effect, thus preventing you from being able to assign causation. I detest semantic debates, so I won’t waste time quibbling over which definition is better, but for the sake of this post, I am going to use the second definition, both because that is how I hear people use it more frequently in actual discussions among scientists, and because that is the concept that I am trying to convey even if you want to use a different term (i.e., I don’t care if you remember the term “confounding factor” but I do care if you remember the concept of a confounded experiment).

This is an illustration of a fully confounded experiment. Because different brands were used in each group, there is a third variable that confounds the experiment and makes it impossible to assign causation.


Let me give a simple example of how this might play out. Imagine that you have designed a product that is intended to prolong the life of a car. Thus, you get two groups of new cars, and pour the product (the experimental variable) into the engine of one group (the experimental group), while giving the other group an equivalent volume of regular engine oil (the control group). Then, you hire people to drive them around a racetrack at high speeds for weeks until they eventually die (or perhaps put them on a treadmill of sorts, I don’t pretend to know how car testing works). Then, you record how long the cars last (your response variable). Now, suppose that at the end of the experiment, you find that, on average, the experimental group lasted significantly longer than the control group. Thus, you conclude that the product works. There is one problem, however. All of the experimental cars were Toyotas, and all of the control cars were Nissans. This is what we would call a fully confounded experiment, because there is a third variable (car brand) that is also completely different between your groups. Thus, it would be impossible to use this experiment to say that the product works, because you have no way of knowing if the product actually worked or if Toyotas simply last longer than Nissans. Do you see how that works? Because of that third variable, there is no way to confidently assign causation.
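To make the problem concrete, here is a quick simulation of that confounded car experiment. All of the numbers are invented for illustration: by construction, the product does nothing at all and brand alone determines lifespan, yet the naive group comparison still shows a big difference.

```python
# A minimal sketch of the fully confounded car experiment described above.
# Assumption (not from real data): Toyotas last 20,000 km longer on average,
# and the product has zero effect.
import random
import statistics

random.seed(42)

def lifespan(brand):
    brand_effect = 20_000 if brand == "Toyota" else 0  # brand effect only
    return 200_000 + brand_effect + random.gauss(0, 5_000)

# Fully confounded: every experimental car is a Toyota (and got the product),
# every control car is a Nissan.
experimental = [lifespan("Toyota") for _ in range(30)]
control = [lifespan("Nissan") for _ in range(30)]

diff = statistics.mean(experimental) - statistics.mean(control)
print(f"Mean difference: {diff:,.0f} km")  # large, yet caused entirely by brand
```

The statistics see only the group difference; nothing in the data itself can tell you whether the product or the brand produced it.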

That is obviously an extreme example, but this type of thing happens all the time in real experiments, and it can occur in very subtle ways. For example, let’s say that you are testing a drug on rats, and you have your rack of cages with control rats on one side of the room and your rack of cages with the experimental rats on the other side. That may not sound like a problem, but it can be. Imagine, for example, that there is a draft in your lab, and as a result, one rack experiences a different temperature than the other. Temperature can affect metabolism and a host of other biological processes, so that would confound your experiment. Similarly, perhaps people walk through one half of your lab more than the other. That could stress the rats, and stress also affects many biological processes. There could also be a slight difference in the lighting, or any of a host of other factors. All of that may seem minor, but it really can make big differences in your results, and when you have a fully confounded experiment like that, you simply can’t assign causation.

My rat example is also a bit extreme, because it is, once again, a fully confounded design, meaning that the confounding factor (cage position) is totally different between the two groups; however, there are also many cases where experiments are partially confounded, and they can be just as problematic. Let’s say, for example, that you are testing a drug, and your control group is made up of 90% men and 10% women, whereas your experimental group is made up of 20% men and 80% women. That is actually a problem, because women and men have biochemical differences and they often respond differently. As a result, any differences that you see could be being driven by male/female differences rather than control/treatment differences. Thus, sex is a confounding factor and makes it very difficult to assign causation (though in this case perhaps not impossible given a large enough sample size).

So how do scientists actually deal with this? As I said early on, confounding factors are everywhere, and having two totally identical groups is virtually impossible. Fortunately, there are several important strategies for dealing with them: eliminating, randomizing, blocking, and measuring.

I’m going to try to stick with my car example as I explain how these work, because I think it is useful to use a hypothetical topic that is totally separate from any real or contentious issue. The experimental design that I mentioned originally where the experimental group was entirely Toyotas and the control group was entirely Nissans is a terrible one. If you actually did that experiment, you would be screwed, because there is no statistical test on the planet that could tease out the effects of Toyotas vs the effects of the treatment in a fully confounded experiment like that (see Note 2 at the end). Fortunately, most scientists aren’t that brainless. We spend a great deal of time thinking about confounding factors before doing our experiments, and we try as hard as possible to control them. So in this case, we could control this experiment by simply eliminating many of the confounding factors. For example, we could use only one brand and one model of car for both groups (thus eliminating brand as a factor). Further, we would want to control the year of the car, the factory that produced it, etc. Basically, the idea is that if you can remove a variable from your experiment, then you should do so (unless those variables are interesting to you, more on that later). Remember, ideally you want your groups to be totally, 100% identical. So to do that, you need to eliminate as many confounding factors as you possibly can.

Once you’ve done that, and you’ve gotten your two groups to be as similar as possible, you may think that you are good to go, but you actually aren’t. Once again, having two totally equal groups is virtually impossible. So even after you have controlled every confounding factor that you can think of, there will almost certainly still be some slight variation that you aren’t aware of. For example, there may be slight inconsistencies in the manufacturing process, the steel that was used, etc. Because you don’t know what those differences are, you can’t eliminate them, but you can compensate for them by randomizing. In other words, you should take your entire pool of cars (which should already be as similar as possible) and randomly select which ones go into each group. Thus, any variation gets randomly dispersed into your two groups, rather than falling disproportionately into one group. This is an extremely powerful and important tool, and in any experiment, you should thoroughly randomize whenever possible (in this case we would also randomize our drivers).
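If you want to see what randomizing actually looks like in practice, here is a minimal sketch (the car IDs are made up): shuffle the whole pool, then split it in half.

```python
# Randomly assign a pool of (already as-similar-as-possible) cars
# to a control group and an experimental group.
import random

random.seed(0)
cars = [f"car_{i:02d}" for i in range(60)]  # hypothetical pool of 60 cars

shuffled = cars[:]
random.shuffle(shuffled)  # disperse any unknown variation at random
control, experimental = shuffled[:30], shuffled[30:]
```

Because the assignment is random, any leftover car-to-car variation is equally likely to land in either group, rather than piling up in one.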

So far so good, but what happens when you have a confounding variable that you know about, but can’t get rid of? In many cases there are practical reasons why it is impossible to get rid of confounding factors, and in other cases there are scientific reasons. For example, maybe you want to know if there is an interaction between the product and car brand (i.e., does it work better on Toyotas than Nissans?), or you may simply want your results to be as broadly applicable as possible. After all, if you only test it on Toyotas, then all that you have actually shown is that it works on Toyotas (assuming it works at all), and you are making an assumption when you apply that result to other car brands. That assumption is probably reasonable, but it would be better to actually test it.

This is a blocked experimental design. Because each car brand occurs in both the experimental group and the control group, car brand can be included as a factor in the analysis, and causation can be assigned.


This is where blocking and measuring your confounding factors come in. Let’s say that you want to test this product on Toyotas, Nissans, and Fords. So, you select one model and year of each brand and control for confounding factors within car brand as much as possible, just like before. Having done that, one option would be to simply pool all three car brands and randomly select your experimental cars and control cars from that. There is nothing technically wrong with that (assuming you still include car brand as a factor in your analyses, see Note 3), but it’s not the most powerful design available to you. A much better design would be to block or group your experiment. You could, for example, have 30 of each brand in each group, in which case brand would be a blocking variable. To be clear, you still must randomize, but the randomization would take place within blocks. In other words, you would take your 60 Toyotas, and randomly assign half to each group, then you would take your Nissans and randomly assign half to each group, etc. Then, when you do your statistics at the end of the test, you would include brand as a variable in your statistical analyses, and this would be a very robust design (I won’t go into the details of why this design is so powerful here, but if you want to learn more, looking into two-factor ANOVAs is a good place to start).
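The key mechanical point of a blocked design is that the randomization happens within each block, not across the whole pool. Here is a small sketch of that (the fleet is invented): each brand is shuffled separately and split in half, so both groups end up with exactly 30 of each brand.

```python
# Randomization within blocks: each brand is its own block, and half of
# each block goes to each group. Fleet composition is made up.
import random
from collections import defaultdict

random.seed(1)
fleet = [(brand, i) for brand in ("Toyota", "Nissan", "Ford") for i in range(60)]

by_brand = defaultdict(list)
for car in fleet:
    by_brand[car[0]].append(car)

control, experimental = [], []
for brand, cars in by_brand.items():
    random.shuffle(cars)            # randomize within the block...
    control.extend(cars[:30])       # ...so each group gets exactly 30 of this brand
    experimental.extend(cars[30:])
```

Because brand is perfectly balanced across groups, it can be included as a blocking factor in the analysis rather than lurking as a confounder.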

This is a more complex blocked experiment. It also includes "nesting" meaning that you have car models "nested" within car brands.


You can also build on this design by including additional blocks. For example, you could have several models of car within each brand (this would then introduce yet another concept known as nesting, which I won’t go into but you can read about here). Alternatively, perhaps you are interested in how the product works in heavy duty vehicles like SUVs vs standard cars. In that case, you could have one car model and one SUV model from each car brand and include vehicle class (SUV vs car) as an additional blocking variable (again you would want to randomize within each block). You could even go one step further and have several car models and several SUV models within each brand, at which point you would have three blocks (brand, class, and model) as well as nesting. As you can see, this all becomes very complicated very quickly, and I don’t expect you to be thinking at the three-block stage right now, but I want you to be aware that blocking is a very powerful tool that lets you make sense of complex experimental designs that may, at first, appear to have serious issues with confounding factors.

Finally, let’s imagine that for some reason you can’t block your experiment. In other words, there is some confounding factor that you know about, but for one reason or another you can’t block against it. For example, perhaps all of your cars are used, rather than brand new. That would obviously greatly increase the variation in your data and would likely force you to greatly increase your sample size, but even with a larger sample size and randomization, you would still need some mechanism for dealing with the fact that some cars had been driven more than others prior to your experiment. The solution is actually quite simple: you record the pre-existing mileage on each car and include those data as a factor known as a covariate in your analyses. The idea is basically that covariates explain some of the variation in your data, so by including them in the model, you get that explanation and can compensate for the variation caused by the confounding factor. As a general rule, you should do this anytime that you have some measurable variation that can’t be eliminated or blocked (see Note 4). You should measure it, then include those measurements in the analysis.

At this point, you may be wondering what on earth the point of all of this is. After all, most of my readers aren’t scientists who are going to be designing experiments, and if you are, you should be consulting a statistician or good stats book, not reading my blog. Nevertheless, if you have read this far, then I am going to assume that you are interested in science and understanding how the world works, and I’m going to assume that this interest will invariably lead you to read some scientific literature. That is where this comes in, because not all scientists know what they are doing, and we are all prone to mistakes, so when you read a scientific paper, you should look for things like this. See if they eliminated as many confounding factors as possible, see if they blocked the experiment and included those blocks in the analyses, see if they randomized correctly, and make sure that they included measurable variation in the analyses. If they didn’t do these things, then you should be dubious of their results. If you see confounding factors that they didn’t account for, or they didn’t randomize, etc., you should think twice before accepting their conclusions (for example, see my analysis of a rat/Roundup study that was not done correctly). I should also clarify here that although I have been talking specifically about randomized controlled studies in this post, what I have said applies to other designs such as cohort studies as well. The techniques that I have laid out are extremely important for dealing with confounding factors, and you should make sure that they are being used correctly when you read a study.


Notes: As you can probably tell, this topic can be quite complex, and I have had to make some simplifications and generalizations to try to get the key points across in a brief blog post. Nevertheless, there are a few things that I would like to provide some additional information on.

Note 1: When I said, “if the two groups are completely and totally identical in every way except for experimental variable, then any differences in your response variable (i.e., the thing that you are measuring) must be being caused by the experimental variable” I want to be completely clear that I was talking about two 100% identical groups (i.e. no variation between the groups whatsoever). Only then can you be totally certain that your relationships are causal, but as I explained in the post, there is always some variation. Therefore, in actual experiments, you can’t assign causation with 100% certainty. This is one of the many reasons that we use statistics to tell us the probability that the result that we are observing could have arisen just by chance (more details on how that works here).

Note 2: When I said that no statistical tests could determine causation with a fully confounded experiment, I want to be clear that I am talking simply about taking the measurement variable (car lifespan) and doing a statistical test comparing car lifespan between your experimental group and control group. I am not talking about investigations into the mechanisms through which the product worked. In other words, if you had a hypothesis about the mechanism that the product should use, then perhaps you could examine the cars in the two groups to try to actually look at their mechanical properties, but that is a very different thing from the type of statistics that I have been talking about here.

Note 3: I have been talking about blocks as groups that you deliberately design, but I should point out that for many statistical tests, you can include blocking factors even if you didn’t set up the type of nice even design that I have laid out here. For example, if you took a pool of Toyotas, Nissans, and Fords and randomly assigned them into groups instead of treating them as blocks and randomizing within blocks, you could (and indeed should) still include car brand as a factor in your analysis. Your groups will probably just be a bit lopsided (i.e. out of 60 cars, 40 might get randomly selected to go into the control group and only 20 might end up in the experimental group). Deliberately blocking is just generally a bit better because you can control the number of samples in each block.

Note 4: I said that you should always measure sources of variation that can’t be eliminated or blocked and include them in your analysis, but there are exceptions. Sometimes, the variation may not have anything whatsoever to do with the causation, and “over-fitting” statistical models (i.e., including unnecessary variables) is actually a problem. So, if you are in a situation where there is no reasonable mechanism through which a measurable source of variation could influence the results, don’t include it. For example, including the paint color of the cars as a factor would almost certainly be meaningless, so don’t include it. Additionally, depending on the type of statistical test that you are using, you may be able to construct several statistical models using different combinations of variables, then test which one explains the data better using things like AICs and BICs. To be clear, you aren’t running a bunch of models to see which one gives you the results that you like, rather you are selecting the model that does the best job of explaining the data (this is a complex topic for another time).

Posted in Nature of Science | 1 Comment

I used to be a science denier: My journey from pseudoscience to skepticism

In this post, I decided to do something a bit different and write about my personal background rather than debunking woo, explaining a logical/scientific concept, or any of the other things that I usually do on this blog. The embarrassing reality is that I wasn’t always a skeptic, far from it, in fact, and I think that my journey towards skepticism (a Fallacy Man origin story, if you will) may be helpful and instructive both for people who are currently on a similar quest for truth and for my fellow skeptics who are endeavoring to teach science and critical thinking. Because of my background, I have an atypical perspective on things that I hope will be both enlightening and encouraging to others. Therefore, I will explain my background and then direct several comments specifically to those who reject “mainstream” science (as I once did), as well as to my fellow skeptics.

Note: although this is specifically about science, it all applies to political topics as well, and people on both sides of the political spectrum should consider it.

 

My Background

I was raised in a very conservative Christian home, where I was taught to believe in a literal interpretation of the Bible, including young earth creationism (i.e., evolution is false, the earth is 6,000 years old, Noah’s flood was a real historical event, etc.). Nevertheless, I’ve been interested in science for as long as I can remember (particularly paleontology and zoology), and if you had asked the 8-year-old me what I wanted to be when I grew up, I would have passionately told you that I wanted to be a scientist. As a result, my parents furnished me with a multitude of books, but they were always given to me with a caveat. Namely, I was told to ignore all of the sections on evolution, and in addition to the actual science books, I was supplied with numerous books from Answers in Genesis and other creationist organizations so that I would learn the “truth” about the history of life and wouldn’t be led astray by the devil’s lies (that is literally how evolution is viewed among my family members).

I want to pause for a moment to make two quick clarifications. First, I am not attacking religion. As I have repeatedly stated, this blog is about science, and my posts only deal with religion as it relates to science. So, if you want to both believe in God and accept evolution, the big bang, etc., then I don’t really feel the need to debate you here (that isn’t to say that discussions about the existence of a God aren’t worth having, but simply that they aren’t within the scope of this blog). Second, I want to be clear that I am not attacking my family. Although I now disagree with them on a great many things, I still love and respect them. I appreciate the home that I was raised in, and I appreciate the fact that my parents really did foster my love of science by buying me tons of books, chemistry sets, microscopes, etc., putting up with me converting the back yard into a “dig site,” letting me bring home all manner of animals as pets/study subjects, tolerating me dragging home dead animals to dissect/reassemble their skeletons, etc. (I was a strange child, if you haven’t figured that out).

Getting back to my origin story, by the time that I graduated high school, I was thoroughly invested in young earth creationism. I was also extremely argumentative (a trait that I’ve never outgrown), and I was quick to pick fights over evolution. Indeed, I was one of the people who would get on the comments sections of blogs like this one and viciously defend creationism. I even wrote my high school senior term paper on why evolution wasn’t true, and in my college freshman speech class, I gave a speech about why abiogenesis wasn’t true. I got A’s on both assignments, and at the time I thought that my arguments were rock solid. Looking back, however, I can see the numerous scientific inaccuracies and logical fallacies that I was committing.

Although creationism was probably my biggest scientific blunder, it was far from the only one. I also denied anthropogenic climate change (although less adamantly), and was seriously mistaken about many other more minor aspects of science. Vaccines and GMOs were never hot topics in my household, so I didn’t know much about them at the time, but given my gullibility and inability to think critically, I am quite certain that I could easily have fallen for anti-GMO and anti-vaccine propaganda as well (indeed, I have a relative who is still a young earth creationist and still denies climate change, but he is now also opposed to vaccines/GMOs and believes in all sorts of woo).

The point of all of this is simply that going into college, I was as far from a skeptic as I possibly could have been, even though I was very interested in science. Now, you may think that I’m about to tell you that during college I took a bunch of science courses and they opened my eyes…but you’d be wrong. During my freshman year, I was quite impervious to facts. I was a biology major, so my freshman schedule included a zoology course, during which evolution was a frequent topic, but rather than seeing the errors of my ways, I clung to my misconceptions. Indeed, I was the type of student that you see on YouTube videos debating professors when the topic of evolution comes up, and I was totally deaf to their facts and well-reasoned arguments.

So, if science courses didn’t make any difference, then what did? The answer may surprise many of my fellow skeptics/scientists. It was a philosophy course. During the first semester of my sophomore year, I took an introductory philosophy course, and that class literally changed my life, because it did something invaluable: it taught me to think critically. It taught me the rules of logic and how to tell the difference between a good argument and a bad argument. It exposed my biases and made me realize just how blind and arrogant I was being. That course provided a revolutionary light bulb moment that has literally shaped my entire life since. It gave me my raison d’être. It was my radioactive spider bite, my exposure to gamma radiation, etc. I distinctly remember sitting in my dorm room feeling simultaneously empowered by the logical tools that I was being given and horrified by how gullible I had been.

Following that course, there was no going back. I wanted more. I wanted to get better at critical thinking, and most importantly, I wanted to actually understand the way that the universe worked. I was finally willing to challenge my views, and that is exactly what I did. I took more philosophy courses to hone my logical skills, I started actually listening to professors and considering their arguments, and I started questioning everything. I took each of my core views and for the first time in my life, I critically examined them. I actually tried to find problems with them, rather than blindly defaulting to the counterarguments I had been raised with, and if I’m honest, it was not easy or fun. I suffered many sleepless nights and spent months reading, studying, debating, and talking to people on both sides of each topic, but unlike every previous debate I had been in, I was trying to learn rather than trying to win. It was a slow and painful process that cost me friends and strained my relationship with my family, but it was worth it, because one by one nearly all of my core views crumbled. I suddenly could see the problems with the arguments that I had always used, and once I saw them, I couldn’t ignore them. As a result, I made a 180 degree turn on almost everything. My views on science, politics, ethics, religion, etc. all changed, and I emerged as a skeptic.

After that transformation, I continued to try to learn, both professionally and personally. Professionally, I went on to earn a Master’s degree in biology, and I am currently approaching the halfway mark for my PhD. Personally, I continued to read and study as much as I could about skepticism, logic, and critical thinking. This ultimately led me to join the skeptic movement and start the blog that you are currently reading. I often have people accuse me of being a paid shill (I make no money off of this, btw) or some other such nonsense, but hopefully you can now see that I am actually doing this because I care about it. I am deeply indebted to the philosophy professor who opened my eyes, as well as to the skeptic blogs, websites, books, etc. that helped me when I began wrestling with my views and beliefs, so the least that I can do to repay that debt is to join in the fight.

With that in mind, I want to do several things in the remainder of this post. First, I want to speak directly to the creationists, anti-vaccers, climate change deniers, naturopaths, etc. who read my blog. Second, I want to provide some advice for my fellow skeptics, and finally, I want to offer a brief word of encouragement to the other people who devote their time to defending science and teaching critical thinking.

 

To the creationists, climate change deniers, anti-vaccers, anti-GMO activists, homeopaths, naturopaths, conspiracy theorists, etc.,

This quote comes from Tim Minchin's excellent UWA 2013 graduation speech. It is worth watching in its entirety.

I want to encourage you to embark on a journey of discovery like I did. I understand how you think and why you hold the views that you do, because I used to be one of you. I realize that you are not consciously dismissing facts and that you truly believe that the evidence supports your position, because that is exactly how I felt. I really thought that I was right, and it seemed totally reasonable to me that the entire scientific community was involved in some sort of massive cover-up to avoid admitting that they were wrong. It seemed totally rational to me that everyone who disagreed with me was ignorant, biased, or corrupt, and I never once thought that I was denying science. Rather, I thought that the facts were on my side and I was simply one of the few people who was enlightened enough to see the truth. Further, I thought that I truly was fact checking. I had convinced myself that I had actually looked at both sides and was rationally assessing the facts, and that deluded self-conviction led me to complete certainty that I was right, when in reality, I was totally wrong. All the belief in the world doesn’t make something true, and no amount of personal conviction can change facts. So I encourage you not to fall for the same cognitive biases and pitfalls that once ensnared me.

In hindsight, my errors are obvious. I was engaging in what is known as motivated reasoning. Creationism and many other views were core convictions that I held. They were fundamental parts of who I was. As a result, any time that I was faced with evidence against them, I would subconsciously issue a red-alert and raise my mental shields. I would find some way to convince myself that my opponent’s arguments were false. For example, when I encountered evidence for evolution, I would immediately go to sources like Answers in Genesis, find an article or argument that agreed with me, then cling to that as evidence, even if it was total nonsense. I was so biased and so committed to creationism that I could not see my errors, and totally nonsensical creationist claims seemed to make perfect sense to me. I can distinctly remember particular debates where I used one creationist trope after another, my opponent soundly defeated every last one of them, yet I walked away feeling victorious. That’s what motivated reasoning does, and it is extremely dangerous. It prevents you from ever realizing that you are wrong, no matter how clearly wrong you actually are.

My point is simply this: I understand you. I know that you aren’t stupid, crazy, uneducated, etc. I know that you really think that you are correct, but you aren’t, and you will never be able to see that until you are willing to be wrong. Now, you might respond to that by insisting that you have actually carefully considered both sides of the situation (I would have said exactly the same thing), but have you really? I hadn’t. Actually think about this for a minute. Have you actually tried as hard as you can to disprove your own views, or have you simply looked for evidence to support them? As a useful exercise, I want you to consider what it would take to convince you that you were wrong. In other words, what piece of evidence would be sufficient to discredit your view? If you can’t think of any reasonable evidence that would make you change your mind, then you have a very serious problem, and you are almost certainly engaging in the same type of motivated reasoning that misguided me for years.

 

To my fellow skeptics,

First, I would like to offer some constructive criticism. It is very, very easy to fall into the trap of thinking that skeptics are somehow special. It is easy to write off everyone else as crazy idiots, but that type of thinking is both wrong and unhelpful. If you honestly think that you are somehow vastly superior to everyone else, then you need to get over yourself. We all have flaws and cognitive biases, and we can all be led astray by misinformation. For me personally, my past is a constant and painful reminder of just how susceptible I am to cognitive biases and motivated reasoning. I am admittedly speculating here, but I really don’t think that I am an exception, and I think that under the right set of circumstances, anyone could be duped by pseudoscience.

On that note, I want to state explicitly that skepticism vs denialism isn’t about being smart. My IQ did not magically increase when I rejected creationism, nor was my brain’s computing power boosted. I was the same flawed idiot that I was before. The only differences were that I was willing to be wrong and I had been taught the necessary logical tools to tell whether or not I actually was wrong. In other words, it is easy to think that the people who disagree with you are crazy or stupid, but most of them aren’t. Most of them are perfectly sane people who are capable of being very rational in most aspects of their life. So intelligence is not the problem. Rather, the problems are biases and an unwillingness to be wrong. Those are difficult things to correct, and denigrating the people that we would like to convert (so to speak) isn’t helpful. To be fair, I have been guilty of this myself on multiple occasions, because it is a very easy trap to fall into, but I think that it is important that we acknowledge the flaws in this approach and try to keep our arrogance in check.

Having said all of that, I want to conclude with a few brief words of thanks and encouragement. Being a public skeptic is time consuming, frequently unpleasant, and often seems pointless, but it really does make a difference (at least it did for me). I cannot begin to express my gratitude for the women and men who wrote the skeptic books, blogs, websites, etc. that helped me to shake off my biases. Similarly, I am indebted to the people in forums, chat rooms, comment threads, etc. who took the time to debate me and point out my errors. You all have made a tremendous difference in my life, and it is a debt that I can never repay. Also, here again, I don’t think that I am an exception. There certainly are some people who will never change their views, but there are also people who are willing to challenge their preconceptions and embrace new ideas, and public skeptics do an incredible public service by taking the time to reach and help those people. So thank you all, and keep up the good work. This is a battle that is worth fighting.


The image that took me two months to obtain: A window into what it is like to do research

Most people don’t really understand what scientists actually do or what is required to do scientific research. As a result, they don’t appreciate the amount of training and work that goes into being a scientist and conducting research. I personally think that is a problem, because it leads to all sorts of confusion and misconceptions. So I want to try to clear some of that up by telling you all a bit about what I have been working on for the last few months. I hope that this example will help you to understand and appreciate just how much goes into scientific research, as well as clear up a lot of misconceptions. Many people seem to think that scientists are lazy and constantly take short cuts, falsify data, etc., but I hope that this will help you to see why that is rarely the case. I should clarify, however, that because I am going to talk about some of my own work here, there may be a few places where it sounds like I am boasting, but I assure you that isn’t the case. Quite the opposite, in fact. What I want to impress upon you is that experiences like mine are the norm in science, not the exception. I’m a 27-year-old who still carries a valid student ID. I am by no means a prodigy. At best, I’m average.

Two months to get one image

Before I can explain why it took me two months to get one image, I need to give you a bit of background about my current project. I won’t go into the details, but briefly, I am doing a microbiome project looking at how microbiomes (i.e., the bacteria, fungi, etc. that live on an organism) affect disease ecology. So I collected a bunch of samples (swabs in this case), and now I need to extract the microbial DNA from them. Once I have that DNA, I can sequence it and use those sequences to identify the species of micro-organisms that were living on my study subjects, as well as look at their relative abundances.
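For readers unfamiliar with the term, “relative abundance” just means each group’s share of the total. Here is a minimal Python sketch of the idea; the taxa and counts below are invented purely for illustration and have nothing to do with my actual data:

```python
# Hypothetical read counts per taxon (made-up numbers for illustration).
counts = {"Staphylococcus": 120, "Pseudomonas": 60, "Candida": 20}

# Relative abundance = each taxon's count divided by the total count.
total = sum(counts.values())
relative_abundance = {taxon: n / total for taxon, n in counts.items()}

# Print taxa from most to least abundant.
for taxon, frac in sorted(relative_abundance.items(), key=lambda kv: -kv[1]):
    print(f"{taxon}: {frac:.1%}")
```

Real microbiome studies do this with dedicated bioinformatics pipelines, of course, but the underlying quantity is the same simple fraction.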

That DNA extraction step has, however, proved quite challenging. You see, I am trying to get DNA from a range of organisms (gram negative bacteria, gram positive bacteria, and fungi), which is challenging because of differences between the groups. Gram positive bacteria, for example, have thick peptidoglycan cell walls that often require a mixture of chemicals to break them apart. In contrast, fungi have chitin cell walls that often require mechanical methods for breaking them apart (e.g., you add some tiny beads to the sample and use a “bead beater” to rapidly bash them against the sample, thus breaking apart the cells). However, not all of these methods work well together, and some methods that work really well for one group actually reduce the DNA yields from other groups. Further, to make things even worse for me, my swabs are very low yield samples, which means that there isn’t much DNA there to begin with.

This is a gel from an extraction method that completely failed. The two bright bands are control samples to ensure that the PCR and Gel worked correctly. If any of the other samples had worked, they would have shown up as similar bright bands. Thus, since my controls worked, but my samples did not, I concluded that this method does not work well for my samples.

The result of all of this is simply that I have spent two months working in the lab trying to get an extraction method that will work reliably and consistently for all of the groups that I am interested in, even when they are present in very low abundance. Let me explain a bit of what that has looked like. First, I had to make a whole bunch of mock samples with known quantities of the organisms that I was interested in, that way I could test my extraction techniques on them to see if they worked. Then, I picked the method that I thought was the most promising, and I tried it on a bunch of test samples. That took a full day because there were several long steps where I had to let the samples incubate, precipitate, etc. The next day, I used a PCR (polymerase chain reaction) to try to amplify the DNA from those samples. This basically just replicates the regions of the DNA that I am interested in, that way there are enough copies for me to work with. Finally, I took those PCR products and ran them on a check gel. I won’t go into the details of what that means, but it is just a simple method for seeing whether or not you got DNA amplification, how much you have, and how long the DNA fragments are. So on a gel, if I was successful and my methods worked, I should get nice bright bands of DNA. However, I usually got gels that look like the one to the right. The only bands there are the positive controls and the ladder (you always run positives and negatives in a PCR so that you know whether it is the PCR or the extraction that failed, and the ladder is a standard that lets you tell how large the DNA fragments are).

Following that first failure, I did a bunch more reading on the method I was using, talked to several colleagues, then made some modifications to the protocol and tried again, but once again, I was met with failure. By this point, I was already four days in, but things were just getting started, because this cycle continued to repeat itself over and over again. I would try a method, it would fail, I would modify the method by changing the chemicals, temperatures, times, etc., that would fail, I would modify it again (or try an entirely different method), that would fail, so on and so forth.

This is a gel from a method that actually worked. This time, you will notice that nearly every well has a band, indicating that the extraction worked (the cells without bands were negative controls). You will also notice that some of these are smeared. That is because I was testing several methods, and I will not be using the method that produced the smears.

This lasted for two months, during which I worked in the lab at least six days a week, often working over ten hours a day trying to find a reliable extraction technique, and I want to stress that word “reliable” because it’s important. There were multiple times when I found a method that “worked” in that I could get DNA from many samples, but those methods did not give me good, consistent coverage over all of the groups of organisms that I was interested in, especially when low quantities were present. In other words, I could have done my study using one of those methods, and I would have gotten what appeared to be good results that I could easily have published, but the results would have been misleading because I didn’t use a reliable method that could consistently detect all of the groups I was interested in. Therefore, I kept trying over and over again, until finally, last week, I got the image that you see on the left. This time, you will notice that there are DNA bands, and although I have not labelled the image, I got bands even from samples that only had a handful of cells. I have also replicated these results, and they are consistent across my organisms of interest. So I finally have a reliable method, but it took me two months to get here.

Note: there are actually some issues with that gel such as smears and double bands, but that is just because I was running several methods at once. So the lanes that look messed up are from the method that I will not be using, and the lanes that look good are from the method that I will be using. There are also still a few tweaks (particularly to the PCR procedure) that I will make to clean things up a bit more before extracting the DNA from my actual samples.

The big picture: what it takes to do research

Now that you understand my suffering for the past two months, I want to talk a bit more generally about what has been required for this project. First, at the very outset I had to read a tremendous amount of scientific literature. I needed to understand the topic that I was going to study, I needed to know what other people had already found, and I needed to identify the current gaps in our knowledge. This type of background work is absolutely essential in science, and it can be quite time consuming. In many cases, this requires reading hundreds of papers.

Following that, I had to actually design a study, which means that I needed to know enough about experimental design and statistics to design a project that would let me reliably answer the questions that I was interested in. So once again, a lot of background knowledge and prior training was necessary.

Simply designing a project clearly isn’t enough, however. I actually needed to do it, and for that I needed money. So, I spent several weeks writing grants to lots of different funding agencies to try to get money for this project. Several of them rejected me, but several others accepted my proposal so I got the funding. During that same time period, I also applied for research permits and filled out ethics applications so that I could actually do the project. Fortunately, all of that came together, so once I had money, permits, and ethical clearance, I immediately moved on to the next phase and actually went out into the field and collected my samples (that was also quite a challenge because it required hiking up ridiculous rainforest streams, climbing cliffs and waterfalls, etc.).

After getting the samples, I started on the lab work that I have been talking about thus far, and now that I have a working protocol, I can move on to the next stage: actually getting DNA from my real samples. Once I have the DNA from those samples, I will amplify and sequence them, and that will finally give me the actual data which I will then do a series of statistical analyses on. Following those analyses, I will write a paper on the results and submit to a journal (it will probably be another 4–6 months before I have something submitted, maybe longer).

If we put all of that together, you should start to be able to see just how much effort and how many skills are required for a project like this. I needed to have an enormous amount of background knowledge on the topic and on experimental design just to plan the study, I needed writing skills to get the grants/permits/ethics clearance, I needed field skills to get the samples, I needed lab skills and a knowledge of chemistry to get the extractions working, once I finish the lab work I will need statistical skills to analyze the data, and I will need writing skills again to actually craft a paper. Perhaps most importantly though, I really need to understand the ecology of my species in order to make sense of the data that I am collecting. In other words, it’s not enough just to know a bit about the background and get the raw data, I need to actually understand things well enough that I can make sense of my results and see how they fit into the big picture. So a tremendous amount of work goes into a project like this, and a lot of knowledge and skill is required.

At this point, I again want to clarify that I’m not trying to convince you that I’m a brilliant, naturally gifted researcher, because I’m not. What I am, however, is highly trained. This brings me to perhaps my most important point in this post: science is hard, and it takes lots of training and work. I couldn’t have done a project like this when I started my scientific training almost a decade ago, and, honestly, I couldn’t even have done it a few years ago. It takes years of extremely hard work to become an academic because there is so much that you need to know. There is an overwhelming amount of knowledge that needs to be acquired and skills that need to be mastered in order to do research like this, and it takes a long time to slowly build up that knowledge and those skills. As a result, I get very upset when people who have never taken an advanced science course, never set foot in a lab, never learned statistics, etc. pretend to know more than scientists and claim that scientists are wrong on vaccines, climate change, GMOs, evolution, etc. To be clear, I’m not being elitist here. I’m not suggesting that scientists are better or smarter than everyone else, but they are highly trained and experienced, and that matters. The idea that you will understand a topic better than scientists simply by reading some websites and books is insane and extremely arrogant. Scientists are carefully trained on how to design experiments and analyze data, they spend an inordinate amount of time reading every available paper on the topics that they study, and they spend years actually doing research. That type of training, knowledge, and experience isn’t going to be toppled by a few YouTube videos.

Note: I want to be clear that I am not trying to be dismissive of citizen science projects, because I actually think that those are fantastic, largely because they help to train the public to think like scientists. Similarly, I am not trying to disparage anyone who is trying to learn about science. As I said, being a scientist is as much about training and diligence as anything else, and anyone can learn how to be a scientist. My point is simply that you shouldn’t place too much weight on your degree from Google University.

It’s a group effort

I’ve been writing this post in the first person, but I want to be totally clear that I have not been doing this alone. I have had a tremendous amount of help and support at every step of this. I have gotten help and advice from my fellow graduate students who are in the lab working on their own projects, I have gotten advice from post-doctoral researchers who recently finished their PhDs, and I have gotten advice and help from my advisers, each of whom is an experienced researcher who spends much of their time passing on their knowledge to students. Further, this is the norm in science and has been the norm for me at every level of my training. I have had great mentors from day one and have learned a tremendous amount from working with them. Once again, that’s a type of practical learning that just can’t be matched by surfing the web. Also, you should realize that science is extremely collaborative, and most projects get input and advice from a whole host of people who don’t make it onto the actual author list. This is, in my opinion, one of the greatest strengths of science, because everyone has a different set of skills and a different knowledge base, so when those skills and knowledge bases are combined, you end up with an extraordinary and diverse amount of knowledge and ability being funneled into one project.

Most scientists aren’t incompetent or unethical

The final thing that I want to deal with in this post is the notion that scientists are either hopelessly incompetent or horribly unethical. I hear these claims all of the time, and they are ludicrous. For example, when it comes to climate change, I frequently hear people say things like, “the answer is obvious, it’s the sun, but scientists just plow forward with their beliefs instead of looking at the obvious answer” or “but the climate has changed repeatedly in the past, so it’s silly to think that the current change isn’t natural.” Claims like that are absurd. Scientists aren’t incompetent, and they have, in fact, carefully tested the natural drivers of climate change (including the sun) and consistently found that natural causes of climate change cannot explain the current warming, but our greenhouse gases can (more details and sources here). Scientists spend a great deal of time testing every reasonable possibility that they can think of, as well as systematically refining their methods and analyses to ensure that they get accurate results. That is one of the things that I hope you will take away from my two months of trying to get a reliable extraction method working. There were lots of points at which I could have stopped and used a method that I knew wasn’t accurate or reliable, but I didn’t do that because I wanted to make sure that my final results will be correct.

Once again, this type of situation is the norm for scientists. Talk to any researcher, and they will tell you very similar stories of weeks, months, or even years spent trying to get a method to work reliably, trying to refine an analysis to make it more accurate, etc. We all spend an inordinate amount of time trying to make sure that we are doing things correctly and are accounting for all reasonable possibilities. To be clear, scientists aren’t perfect, and mistakes certainly get made, but most of us care tremendously about producing high quality research because we are curious people who want answers. That’s why we went into science (it certainly wasn’t for the money, because there’s not much of that). I chose and designed my current project because it is a topic that I care about. I want my results to be correct and accurate because I want to know what the answer is. That type of curiosity is a huge driving force in science, and it forces us to be diligent.

This brings me, finally, to the topic of widespread corruption. Many people are under the delusion that scientists are just in it for money, and we go around falsifying data whenever it disagrees with us. That idea is, however, total nonsense. If I didn’t actually care about the results, if I was willing to falsify data, then why on earth would I spend two months working 60+ hours a week trying to get a reliable extraction method working? If I was just going to make up the results, then I could have just gone ahead with one of the crappy, unreliable methods. For that matter, I could have just skipped the lab work altogether and sat down at my computer and generated fake results. Similarly, the idea that I would get to the end of this project only to have someone pay me to keep the results quiet or to change the results is nuts. Look, by the time that it is all said and done, I will have invested close to a year of my life into this project. I have poured everything into it, and there is no way on earth that I am going to get to the end and either not publish or change my results just because someone offers me a few thousand dollars. I am not so mercenary, and neither are the majority of scientists. Again, we don’t do this for the money. If money was what we cared about, we would have chosen a different field. We do this because we want to know how things work. We do this because we care about expanding mankind’s knowledge and making the world a better place, and the vast majority of us would not be willing to bury research for a simple bribe. To be clear, there are bad apples in every group, but again, what I am describing here is not out of the ordinary. Go talk to any scientist anywhere in the world, and they will have similar stories, because this is the norm for science.
