What does it mean to be a skeptic?

It is good to be skeptical about everything that you hear and read. In fact, skepticism is one of the defining characteristics of a scientist. Nevertheless, terms like “skeptic” and “open-minded” are often misappropriated by people in the anti-science movement, and many of the most biased people on the planet are under the delusion that they are skeptical. Climate change deniers, for example, often refer to themselves as “climate change skeptics,” and it is rare to have a conversation with anti-vaccers without them referring to pro-vaccers as “sheeple.” Therefore, I want to briefly examine what it actually means to be a skeptic.

First, I want to clear up a common misconception. Many people seem to be under the impression that being a skeptic means going against the mainstream view. Thus, anti-vaccers consider themselves to be “thinking parents,” while viewing everyone else as “sheeple.” There is, however, nothing in the definition of “skeptic” that requires you to reject a scientific consensus. You are welcome to accept a consensus as long as that consensus formed as a result of strong, scientific evidence.

Having cleared up that misconception, let’s move on to the definition of a skeptic. There are basically two parts to being a skeptic. First, a true skeptic does not accept something or commit to a position unless there is sufficient evidence for that position. In other words, skeptics question what they are told and don’t accept anything until they have carefully studied the issue and examined the available evidence. Importantly, you must use good sources when fact-checking. So, for example, reading Natural News does not constitute examining the evidence. Rather, you need to look at the original, peer-reviewed research.

This first requirement of skepticism may sound simple, but it is something that most people struggle with (including people who strongly support science). It is very easy and tempting to quickly latch onto some new study that seems to support your position, but it is crucially important that you avoid this trap. You must always carefully examine the evidence regardless of whether or not it supports your position. This is, in fact, one of the most important things that students of science are taught in graduate school. The peer-review system works well, but it is far from perfect, and sometimes bad research does get published. Therefore, you can never assume that something is true, and you must rigorously and carefully examine everything before accepting or rejecting it.

The second prerequisite for skepticism is being open-minded. This simply means that you are willing to change your position if you are shown evidence to the contrary. The term “open-minded” has, however, been stolen and perverted by the anti-science movement. On numerous occasions, I have had people tell me that I need to “open my mind about alternative medicines.” The reality is that I am completely open to the possibility that alternative medicines work, but I’m not going to accept that they work until they have passed rigorous scientific testing. That’s not being close-minded; that’s being skeptical. Similarly, I have had multiple anti-vaccers tell me that my training in the sciences has made me “close-minded.” When I pressed these people for what they meant, they explained that I was being close-minded by demanding scientific studies and refusing to accept anecdotal evidence. Think about this for a second. According to them, those of us who argue in favor of science are close-minded because we demand scientific evidence in a debate about science. Further, these same people will usually proudly proclaim that nothing will ever change their minds, which is, of course, the very definition of close-minded.

Accepting something without sufficient evidence is not being open-minded, it’s being gullible. It is not, for example, open-minded to use anecdotal evidence to arrive at the conclusion that vaccines cause autism. Rather, someone who is open-minded would reject those anecdotal reports in favor of the large, carefully controlled studies that clearly show that vaccines do not cause autism. It is important to note, however, that nothing in science is ever 100% certain. Thus, being truly open-minded means that you are always willing to consider the possibility that you might be wrong, no matter how clear the data currently seem. So, for example, if in the future a large, well designed, carefully controlled study is published showing that vaccines do cause autism, and the results of that study are replicated by other researchers, I promise you that skeptics around the world (myself included) will write and prominently display posts admitting that we were wrong about the relationship between vaccines and autism. That is what it truly means to be open-minded. It means that you are willing to change your view when presented with good evidence. It does not mean that you are willing to blindly accept something despite a lack of evidence.

In summary, a skeptic is simply someone who demands good evidence before accepting something and is willing to change their view when it conflicts with the evidence. These requirements are easy to say, but often hard to follow. Nevertheless, everyone should strive to be a skeptic. You should use good sources, question your assumptions, demand evidence, beware of cognitive biases, and above all else, never hold any position so dearly that you are not willing to challenge it.


Evolution is blind

The notion that evolution is blind has nothing to do with Darwin’s eyesight. I just thought this was an amusing image.

One of the central tenets of evolutionary biology is the concept that evolution is blind. In other words, it has no foresight or goal. This principle is extremely important for understanding how evolution works, but it’s a concept that is often misunderstood, even among people who accept evolution. Further, this lack of understanding often leads to a number of erroneous creationist arguments, such as the claim that whales defy evolutionary theory. Similarly, the popular irreducible complexity argument is easily defeated once we understand that evolution is blind.

Note: in this post, I am going to use “evolution” to refer specifically to evolution by natural selection. Remember that evolution is simply a change in allele frequencies over time, and natural selection is one among several mechanisms that causes allele frequencies to change (others include things like gene flow and genetic drift). The concept of blind evolution is, however, most germane to evolution by natural selection; therefore, that is what I will be describing in this post.

Before I can explain what is meant by “evolution is blind,” I have to give a brief primer on how evolution by natural selection works. Natural selection requires two things: heritable variation for a trait and selection for that trait. Both of these are nearly always met in real populations. In other words, in any population there is variation for a trait (e.g., not all individuals are the same height) and that variation is nearly always heritable (e.g., tall individuals tend to produce tall offspring). Further, that variation affects fitness (e.g., tall individuals may be able to get more food, which gives them more energy, which lets them produce more offspring), so certain traits get “selected” by virtue of the fact that the individuals with those traits produce more offspring than individuals without those traits. As a result, the genes for the beneficial trait will be more common in the next generation. Thus, the population will evolve because its allele frequencies will change.

There are several important clarifications to be made here. First, everyone accepts natural selection. It’s a simple mathematical certainty (even young earth creationist organizations accept it; they just place arbitrary and logically invalid limits on it). Second, evolution only acts on populations, not individuals. Individuals simply cannot evolve. Third, in evolutionary terms, “fitness” refers to the number of genes that you get into the next generation, not physical strength. Generally speaking, being physically fit does give you a higher evolutionary fitness, but not always. Thus, the phrase “survival of the fittest” is something of a misnomer. Survival is important only in that it gives you more time to produce offspring, and there are plenty of short-lived species that have a high evolutionary fitness. For example, think about species like many octopuses where the females die after laying their first and only clutch of eggs. They have a low survivorship, but a high fitness (fun fact: “octopuses” is the standard English plural; “octopi” comes from mistakenly treating this Greek-derived word as if it were Latin). Thus, natural selection only acts on traits that affect your reproductive potential. These may be traits that directly influence your survivorship, such as antipredatory behaviors, but they can also be traits like foraging ability (more food = more energy = more offspring) and the ability to attract a mate (i.e., sexual selection).

It may seem like I have digressed from my thesis, but this is all important groundwork for understanding blind evolution. The key here is that evolution has no foresight or direction. In other words, it has no goal or endpoint in mind. In each generation, it simply adapts populations to their current environment, but if that environment changes, then an adaptation that has been useful for thousands of generations can suddenly be detrimental. Let’s say, for example, that we have a group of birds who eat very small seeds that are held in little folds of the plants. Thus, they need fairly small, skinny beaks to get to the seeds. Therefore, in each generation, the birds with the beaks that are best suited to reaching into the folds get the most food and produce the most offspring. So for many generations, evolution has been shaping the birds’ beaks to fit into these folds. However, one year there is a massive drought, and all of the plants that the birds usually get seeds from die, but another species with large, thick seeds survives. Now, the small, petite beaks that have been so useful are suddenly detrimental, and large, thick beaks are useful. This means that the birds who previously would have had a very high fitness are now going to have a very low fitness, and the thick-billed birds that would previously have had a low fitness are suddenly going to have a high fitness. This is what we mean by “evolution is blind.” It cannot anticipate the future needs of an organism. All it does is adapt a population to its present environment.
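The bird-beak scenario can be sketched as a toy selection model. The code below is purely illustrative (the alleles and fitness values are invented for this example, not taken from any real study): it tracks the frequency of a hypothetical “thin beak” allele in a haploid population, which rises while thin beaks are favored and then falls once the drought reverses the selection pressure.

```python
# Toy haploid selection model (illustrative only; fitness values are invented).
# Each generation, the frequency p of the "thin beak" allele updates as
#   p' = p * w_thin / (p * w_thin + (1 - p) * w_thick)
# where w_thin and w_thick are the relative fitnesses of the two beak types.

def next_gen(p, w_thin, w_thick):
    """Return the thin-beak allele frequency after one generation of selection."""
    mean_fitness = p * w_thin + (1 - p) * w_thick
    return p * w_thin / mean_fitness

p = 0.5  # start with both beak types equally common

# Wet years: thin beaks reach the seeds in the plant folds, so they are favored.
for _ in range(20):
    p = next_gen(p, w_thin=1.0, w_thick=0.9)
print(f"after 20 wet-year generations: p(thin) = {p:.2f}")

# Drought: only the thick-seeded plant survives, and selection reverses.
# The very same allele that was beneficial is now detrimental.
for _ in range(20):
    p = next_gen(p, w_thin=0.8, w_thick=1.0)
print(f"after 20 drought generations:  p(thin) = {p:.2f}")
```

Nothing in the model “knows” that a drought is coming; each generation responds only to the current fitness values, which is exactly the sense in which evolution is blind.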

Hopefully, at this point, the problem with creationists’ whale argument is clear. For those who aren’t familiar with this argument, creationists often claim that whales are a problem for evolution because evolutionary history tells us that all land organisms evolved from a marine ancestor, but whales would have had to evolve from a terrestrial ancestor. Thus, creationists claim that whales had to evolve “backwards” or “de-evolve” because they went back to water. Similar arguments are made about species like flightless birds.

The problem with these arguments is simply that they ignore this concept of evolution being blind. There is no “forwards” or “backwards.” At one point in time, for a certain population of marine organisms, it was beneficial to be able to come out on land. Therefore, traits that allowed individuals to come out on land were selected for. Then, millions of years later, for a certain population of land-dwelling mammals, it was beneficial to be able to go into the water. Therefore, nature selected the traits that allowed individuals to enter the water. This is in no way, shape, or form a problem for evolution because in both cases, populations were evolving to match their current environment.

It’s also worth noting that there is really no such thing as being “more evolved” because evolution has no direction. Chimps are not, for example, more evolved than a single-celled bacterium living in a hot spring. Chimps are certainly more complex, and they have certainly accumulated more genetic changes, but they are not “more evolved” because that suggests that evolution is directional. Both chimps and bacteria are well adapted to their current environments, and that is all that evolution does: it adapts populations to their present environments. Think about it this way: a chimp would die in the hot springs where many bacteria thrive, and the specialized bacteria would die in the chimp’s rainforest. They are both highly adapted to their environments, but neither one is more evolved than the other. To put this another way, you can say that chimps are more evolved for a life in the forest, but you cannot make a broad comparison between them and bacteria because it is equally fair to say that certain bacteria are more evolved for a life in the hot springs.

It’s also important to realize that because evolution is blind, we don’t expect it to produce perfect organisms. Rather, we expect organisms to be a hodgepodge of former traits. In other words, we expect them to have a large number of evolutionary leftovers. We call these leftovers vestigial traits. I plan on devoting an entire post to them in the future because they provide extremely strong evidence for evolution, but to describe them briefly, these are traits that have no function or a very limited function in the current organism, but they would have been fully functional in that organism’s ancestors. Blind cave fish are the classic example of vestigial structures. These are fish that have eyes, but the eyes are no longer functional and often have a layer of skin growing over them because the fish spends its whole life inside a pitch black cave. So, at one point in time there was a population of fish living outside of a cave that had functional eyes. Then, for one reason or another, the fish ended up inside of the cave where the eyes were no longer useful. Thus, nature stopped selecting for vision, and the eyes slowly accumulated mutations to the point that they are now useless. Animals are full of examples of these structures. For example, baleen whales and certain snakes retain non-functional pelvic bones. Humans also have many vestigials. Our tail bones, goose bumps, wisdom teeth, and multiple other features are all vestiges of our evolutionary history. They are evolutionary leftovers that were beneficial in previous environments and situations but are no longer beneficial today.

Finally, I want to briefly discredit irreducible complexity. I addressed this topic in detail here, but to put it in its simplest terms, irreducible complexity states that certain biological systems are so complex that removing any one part prevents the system from functioning. For example, proponents claim that the bacterial flagellum couldn’t have evolved because it requires 42 proteins to function (in most species), and if any one protein is removed, it no longer functions as a flagellum. Thus, the argument is that it couldn’t have evolved because no one protein would be useful unless all of the other proteins were already in place. The problem is that this argument sets up the flagellum as some ultimate endpoint that evolution is working towards, but as you now know, that’s not the way that evolution works. Each protein doesn’t need to function as a flagellum; it just has to function. In other words, if the protein does anything useful, it will be selected for. You see, the function of a trait can change in response to new environments or as a result of new mutations. In fact, we know that all of the proteins in flagella are used for other things in the cell, and often several of them work together to perform a function. It is not hard to imagine a series of mutations that brings these functions together. In fact, we have a hypothetical pathway that would allow a flagellum to evolve step by step, with each step being useful. Only the final step functions as a flagellum, but that doesn’t matter because each step still functions, and that is all that evolution needs because it is a completely blind process that acts without any forethought or anticipation of future needs.

Summary
Evolution by natural selection simply adapts populations to their current environments. It cannot anticipate future environments or needs. As a result, a trait may be selected for in one generation, and selected against in a later generation after the environment changes. Therefore, it is incorrect to describe evolution as having a “direction” because it is simply responding to the current conditions.



7 easy ways to lose a debate

One of the saddest statistics about my life is the amount of time that I spend pointlessly debating anti-scientists. Having devoted so much time to this endeavor has, however, allowed me to observe certain patterns and trends in their debate tactics. Specifically, there are several flawed debate strategies that anti-scientists frequently employ. In fact, I have rarely been in a debate where my opponent did not at some point commit one of these. So in this post, I am going to describe seven fundamentally flawed, yet extremely common debate tactics that you should watch out for and avoid at all cost. If you commit one of these, then you have automatically lost the debate.

When I say that you lost, I am obviously being somewhat facetious because the fact that you used a bad argument doesn’t inherently mean that your position itself is flawed (that would be a fallacy fallacy). Nevertheless, these strategies generally arise as a result of a fundamental weakness in the position that is being defended. As such, their use does generally indicate a critical problem that must be taken seriously. Finally, even if the position you are arguing for is actually correct, these strategies are so flawed that I would still contend that using them automatically costs you the debate itself (it is technically possible to lose a debate, but still be right).


#1. Accusing your opponent of being a shill
This tactic, often called the shill gambit, is perhaps the most common flawed strategy. Many other commentators on science have explained the problems with it, so I will try not to belabor the point too much. To describe it briefly, this occurs whenever you simply accuse your opponent of being paid off rather than actually engaging with their arguments. So, for example, when I explain the science of vaccines to anti-vaccers, they frequently respond by saying, “you’re just supporting vaccines because you’re a shill for Big Pharma.”

The problems with this response are numerous. First, it’s nothing more than an ad hominem fallacy. It attacks your opponent rather than attacking their position. Second, it’s an ad hoc fallacy and places the burden of proof on you. In other words, you cannot accuse someone of being a shill unless you have actual evidence that they are in fact being paid off. The fact that someone would dare to disagree with you does not automatically mean that they have a financial motive for doing so. Finally, it is often a red herring fallacy because it conveniently dodges any contrary evidence that has been presented against your position (more on that later).


#2. Accusing your opponent’s sources of being shills
This tactic fails for the same basic reasons that the shill gambit fails, but its structure is somewhat different, so I want to talk about it separately. In this case, you don’t accuse the actual person that you are dealing with of being paid off. Rather, you blindly reject all of their sources by accusing the authors of being shills. For example, I frequently present anti-vaccers with papers that discredit their arguments, and when I ask them why they refuse to accept those papers, they almost invariably respond that, “the study was funded by Big Pharma” or “those authors were paid off” or “they are just in it for the money.” As with the normal shill gambit, this is nothing more than an ad hominem assault. It conveniently ignores the results of the paper by attacking its authors. Also, scientific papers always include funding sources and author affiliations, so you can check whether or not the authors had a financial conflict of interest. When I point this out to anti-vaccers, they usually claim that the money is being distributed under the table and isn’t officially reported, but at that point, we are back to logically invalid ad hoc reasoning, and, once again, the burden of proof is on them.


#3. Inventing a conspiracy
This is closely related to the first two flawed tactics, but it has a much wider scope. So wide, in fact, that it utterly astounds me that anyone would ever use it. For this flawed strategy, you don’t simply accuse your immediate opponent of being a shill, nor do you accuse a handful of scientists of being corrupt. No, for this logical blunder, you accuse the entire scientific (and/or medical) community of being involved in some insanely large conspiracy. The exact nature of the supposed conspiracy varies widely, but the core reason for proposing it is always the same: when faced with the fact that nearly all experts disagree with them, people try to discredit all of those experts in one fell swoop by accusing all of them of being in a massive conspiracy. This approach suffers all of the problems of the shill gambits, but those problems are magnified a thousand times. Now, the burden of proof doesn’t simply require you to prove that one or two people are corrupt; it requires you to prove that hundreds of thousands of people are corrupt! If you have to invent an extensive global conspiracy of astronomical proportions to defend your position, then your position is so fundamentally weak that it is beyond saving.


#4. Shifting the goal posts
This problem arises from inconsistencies in your arguments and views. Basically what happens is that when you are faced with contrary evidence, you simply make an ad hoc modification to your view rather than rejecting it. This most clearly occurs when you have stated a clear challenge or dogmatic thesis. For example, many anti-vaccers are adamant that thimerosal in vaccines is causing autism. One of the key problems with this claim is the fact that thimerosal hasn’t been in childhood vaccines for over a decade, yet autism rates haven’t dropped. When presented with this fact, anti-vaccers generally change their thesis and begin to adamantly proclaim that it is the aluminum in vaccines that is causing autism. At its core, the problem here is that this line of reasoning is ad hoc. Anti-vaccers have decided beforehand that vaccines cause autism, so when shown evidence to the contrary, they simply modify their explanation of how vaccines are causing autism. The difficulty with this is that nothing will ever convince them that vaccines don’t cause autism. No matter what data they are presented, they will keep shifting their goal posts so that the conclusion is always that vaccines cause autism (this often involves question begging fallacies).

Global warming deniers not only commit this blunder, but they generally do so in a predictable pattern. When I make the mistake of engaging them, they generally start with the position that the planet isn’t even warming. With no small amount of effort, I can usually at least convince them that the planet is in fact warming, at which point they simply change their tune and claim that it’s just natural warming and not our fault. Using satellite data (Harries et al. 2001; Griggs and Harries 2004; Chen et al. 2007) and isotope ratios (Bohm et al. 2002; Ghosh and Brand 2003; Wei et al. 2009), I can sometimes persuade them that we are actually the cause, but if I am successful in that endeavor, they inevitably retort that it won’t actually be that bad, so we don’t need to do anything about it. Sadly, when shown evidence to the contrary, they usually either shift back to one of their previous positions or go full conspiracy theorist. The problem is, again, that they have already decided what is correct, so nothing will ever make them reject their view. Rather, they will simply continue to twist, contort, and modify it indefinitely.


#5. Playing red herring
The red herring fallacy occurs when you simply ignore one of your opponent’s arguments. This can be very obvious, such as playing the shill card and constructing a conspiracy theory, but it is often more subtle. People often commit this fallacy by responding in a way that gives the illusion that they answered the criticism when in fact they addressed a totally separate issue (politicians are masters at this, watch for it in the next political debate).

I want to focus on the more blunt version of this because it is the one that I see most frequently in internet debates. One of my favorite debate tactics is either to present a logical proof for my position or ask a question that reveals a fundamental inconsistency in my opponent’s reasoning. For example, I might present a logical proof for anthropogenic climate change, or I might ask young earth creationists why they use science to interpret Joshua but not Genesis. Both of these illuminate fundamental problems with my opponents’ positions that must be addressed in order for those positions to be valid. When faced with these problems, my opponents nearly always play red herring. They simply ignore my challenge and move on to some other point. This is not a logically valid strategy. You must deal with serious criticisms of your position, and you cannot simply ignore any arguments that discredit your view. I generally refuse to let people get away with this and continually come back to my challenge until they finally deal with it (or usually just commit one of the other blunders that I have already discussed).


#6. Citing Natural News, Mercola, Whale.to, etc.
The internet is full of terrible, misleading, and downright dishonest sources, and if your argument is based on those sources, then you have a very serious problem. Similarly, when faced with information that opposes your position, you cannot cite these illegitimate sources. You must defend your position using valid sources, and in science, that means using the peer-reviewed literature. If you cannot find support for your position there, then support for your position doesn’t currently exist.


#7. Stating that nothing will change your mind
This should be an obvious problem, but for some reason I frequently encounter people who proclaim this with pride, as if the strength of their resolve correlates with the strength of their position. I wrote about this problem in detail here, but to briefly summarize, you must always be willing to challenge your views. If you start off by ardently proclaiming that you are correct, then you will always find a way to twist the facts so that they seem to fit your preconceived views. The fundamental problem is that at this point it is impossible to disprove your position no matter how clearly flawed it is. In other words, even if your position is the stupidest thing that anyone has ever come up with, nothing will ever convince you of that because you have already decided that you are right and, therefore, you won’t even look at the evidence to the contrary. This is an extremely serious problem, and it is a cognitive trap that you should avoid at all cost. Never hold any position so dear that you are not willing to admit the possibility that you might be wrong.


2 biggest lies of the anti-vaccine movement


In 2013, 145,700 people died from measles. Given numbers like that, it is downright dishonest to characterize measles as a mild illness.

It’s no great secret that the anti-vaccine movement is rife with scientific inaccuracies and logical fallacies, but a few of their claims are so extraordinarily erroneous and demonstrably false that I have difficulty calling them anything other than lies. There are several of these falsehoods that I considered writing about, such as the blatantly false claim that vaccines contain mercury and aborted fetal cells, but I ultimately decided to focus on just two dangerous and pervasive assertions: the claim that the safety of vaccines hasn’t been tested, and the claim that measles isn’t a dangerous disease.


The safety of vaccines hasn’t been tested
This claim is something of a rallying cry for the anti-vaccine movement, and I repeatedly see it on anti-vaccer blogs and memes. If it were true, it would indeed be a serious problem, but it is demonstrably false. There are literally hundreds of papers on the safety of vaccines, and they are easy to find. Get on Google Scholar or PubMed and search for papers on the safety of vaccines. In mere seconds, you will be rewarded with numerous safety trials. That is why I consider this claim to be a lie. It’s not a debatable claim about the toxicity of a chemical, experimental results, etc. Rather, it is a simple factual statement. Either these papers exist or they don’t, and they clearly and undeniably exist. Therefore, anyone who tells you that the safety of vaccines hasn’t been tested is either lying or willfully ignorant. Either way, they clearly aren’t a trustworthy source of information.

Now, some people will object that I am being too general. They’ll say that although a few safety trials have been done, these studies are limited and don’t address all possibilities. This claim is usually followed by an example of a specific type of study which they believe has not been conducted. I clearly can’t make a blanket statement that none of these claims are true, but the vast majority aren’t, and it takes only a few seconds to discredit them. So, again, the people making these claims are either dishonest or willfully ignorant.

Here are just a few examples of studies that supposedly don’t exist.

Claim: There are no double blind, placebo controlled studies on the safety of vaccines
Reality: Yes there are. I entered “vaccine safety double blind placebo controlled” on Google Scholar and got 62,000 results. Granted, not all of those are going to actually be relevant, but many of them are. For example, Zhu et al. 2010 and Cutts et al. 2005 were among the first three results. Again, the claim that the anti-vaccers are making is absurdly easy to test. It takes mere seconds. So the fact that they don’t bother to test it should be extremely disturbing to people.

Claim: There are no studies comparing the health of the vaccinated and unvaccinated
Reality: Yes there are.

Claim: There are no studies on the effects of the large number of antigens in vaccines/the effects of multiple vaccines
Reality: Yes there are.

I could keep going, but hopefully by now you get the point: scientists have very carefully studied the safety of vaccines from multiple angles and the result has consistently been that they are safe. Anyone who tries to tell you otherwise is either deceptive or simply doesn’t know what they are talking about. Either way, you should not be getting medical advice from this person.


Measles isn’t a dangerous disease
This claim has been around for a long time, but it recently gained popularity after the Disneyland outbreak. In response to the accusations that they were responsible for the spread of the disease, many anti-vaccers argued that measles is a “harmless childhood disease” and isn’t really that serious. I have seen this claim on innumerable anti-vax blogs and memes, and there is even a truly horrifying children’s book based around this notion. The reality is completely different. During the supposedly harmless Disneyland outbreak, for example, 20% of patients were hospitalized. Anything that requires hospitalization deserves to be called “serious.” You don’t take people to the hospital for harmless childhood diseases. You take them to the hospital for serious medical conditions that require expert intervention. I have never once seen a parent stand beside their child’s hospital bed, shrug their shoulders, and say, “it’s just a normal childhood illness.” Further, just because these children recovered doesn’t mean that the illness wasn’t serious. Many people recover from heart attacks, but that doesn’t make a heart attack a “minor illness.”

Now, let’s look at some general data rather than data from a specific case. On average, 1 out of every 20 people who get measles will develop pneumonia, which can be deadly in young children. In fact, 1 out of every 1,000 measles-infected children will die as a result. Something that kills children clearly should be taken seriously and is not harmless. Further, even if your child doesn’t die from the measles, there is a 1 in 1,000 chance that he/she will develop encephalitis, which is a swelling of the brain that can cause fun side effects such as seizures, permanent deafness, and mental disabilities. Again, those are not the side effects of a harmless disease.
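To put those rates in perspective, here is a minimal Python sketch that applies the figures cited above (1 in 20 pneumonia, 1 in 1,000 death, 1 in 1,000 encephalitis) to a hypothetical outbreak. The outbreak size of 10,000 cases is an assumption chosen purely for illustration, not a figure from any real outbreak:

```python
# Measles complication rates cited above
PNEUMONIA_RATE = 1 / 20       # ~1 in 20 patients develop pneumonia
DEATH_RATE = 1 / 1000         # ~1 in 1,000 infected children die
ENCEPHALITIS_RATE = 1 / 1000  # ~1 in 1,000 develop encephalitis

def expected_complications(cases: int) -> dict:
    """Expected complication counts for a given number of measles cases."""
    return {
        "pneumonia": cases * PNEUMONIA_RATE,
        "deaths": cases * DEATH_RATE,
        "encephalitis": cases * ENCEPHALITIS_RATE,
    }

# A hypothetical outbreak of 10,000 cases:
print(expected_complications(10_000))
# → {'pneumonia': 500.0, 'deaths': 10.0, 'encephalitis': 10.0}
```

Even at these “small-sounding” per-case rates, a moderately sized outbreak is expected to produce hundreds of pneumonia cases and several deaths, which is exactly why these numbers matter.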

At this point, you may be thinking, “fine, measles can be dangerous, but for most children it’s not that big of a deal.” That’s all well and good until it’s your child lying on a hospital bed or the cloth padding of a coffin. In my opinion, 1 in 1,000 is still far too many deaths for a disease that we can eliminate. To be clear, I’m not committing an appeal to emotion fallacy here, because hospitalizations and even deaths are very real possibilities from measles infections. For example, although 1 in 1,000 may sound small, realize that thousands of children die from measles annually. In 2013, for example, 145,700 children died from measles. Harmless childhood disease? I think not.

You may be tempted to ignore numbers like 145,700 based on the fact that you live in a luxurious, developed country while most of those deaths came from third-world countries, but do you know why most of those deaths came from developing countries? It’s because they don’t have good access to vaccines! As vaccination rates increase, the death toll decreases. The only reason that we don’t have hundreds of children dying from measles in the US is because we have relatively high vaccination rates. If those rates drop, measles will come back and children will suffer and die. That’s not fear mongering, that’s a scientific fact. For example, low vaccination rates recently caused an outbreak in the Netherlands which resulted in 147 complications (14.4% of cases) including 90 cases of pneumonia and 82 hospitalizations. Fortunately, no one died during that outbreak; however, a recent outbreak in France (which also centered around unvaccinated communities) claimed 10 lives and caused almost 5,000 hospitalizations and over 1,000 cases of “severe pneumonia.” Does that honestly sound like a mild childhood illness?


This graph shows the annual number of measles deaths in the USA immediately before and after the introduction of the vaccine. Source: CDC

Finally, let’s turn the clock back to 1953, 10 years before the first measles vaccine. By any reasonable standard, America was a developed country with high health and sanitation standards in 1953. Yet in the absence of a vaccine, 462 people died from measles. Similarly high numbers continued until 1963 when the first measles vaccine was introduced. With the introduction of this vaccine, the number of deaths began to drop, and no year since then has had a death toll that was higher than the average for the 10-year period prior to the vaccine (mean = 440.3 deaths from 1953–1962). Further, in 1968, a more effective live vaccine was introduced, and 1968 set a new record low of only 24 deaths! This clearly demonstrates two things. First, the vaccine clearly works. Our sanitation practices didn’t magically improve in the late 60s. The decline was unambiguously from the vaccine. Second, measles is a serious disease. For the 10 years immediately prior to the vaccine, an average of 440 people in the US died from measles each year. There is no universe in which that is a “harmless childhood disease,” and anyone who tells you that measles isn’t serious is either dishonest or delusional.

A note on appeal to emotion fallacies: It’s worth pointing out that appeals to emotion are only fallacious when all that they are doing is appealing to emotion rather than making a rational argument, or if they make some irrelevant appeal to emotion (they are often associated with straw man fallacies). There is, however, nothing wrong with a logical, factual argument also evoking emotions. In the case of this post, all of the death rates, hospitalizations, etc. are facts, and they are facts that should evoke a strong emotional response, but simply evoking an emotional response does not automatically make them appeal to emotion fallacies.

Let me give an example of an actual appeal to emotion fallacy to show the contrast. Have you ever seen those anti-vaccine pictures where a screaming child is receiving a massive shot that is filled with a sinister-looking fluid? Well, those images are perfect appeal to emotion fallacies. First, the fact that a child cries from a shot is irrelevant because children also cry when you tell them that they can’t have ice cream for breakfast, tell them to go to bed, etc. So a child crying is clearly not a good indication of whether or not something is ultimately good for them. Second, these images also make an irrelevant and dishonest appeal to fear (this is where the straw man comes in). By showing you a horse needle filled with a colored fluid, they are deliberately playing into your emotions to make you scared of the vaccine. In reality, the tiny needles and clear fluids used in vaccines are far less frightening. Do you see the difference between images like that and simply presenting facts that happen to evoke emotions? One is deliberately misrepresenting the situation for the express purpose of playing on your emotions, whereas the other is presenting an accurate portrayal of the situation, and that situation just happens to be horrible.

Posted in Vaccines/Alternative Medicine | Comments Off on 2 biggest lies of the anti-vaccine movement

The Rules of Logic Part 6: Appealing to Authority vs. Deferring to Experts

The appeal to authority fallacy (a.k.a. argument from authority) is easily one of the most common logical fallacies. This is the fallacy that occurs when you base your claim on the people who agree with you rather than on the actual facts of the argument. This may seem fairly straightforward, but it can actually be quite confusing, and I often see people incorrectly accuse others of committing this fallacy. The problem is that there are clearly times when it is fine to defer to an expert. For example, we constantly defer to doctors, and there is nothing wrong or fallacious about trusting their diagnoses and taking the recommended treatments. My intention is, therefore, to try to clear up some of the confusion about this fallacy and explain when it is and is not appropriate to defer to experts.

There are basically four ways that this fallacy occurs and I am going to deal with each one separately:

  1. Citing an opinion as authoritative
  2. Citing people who aren’t actually experts
  3. Using authority as a logical proof
  4. Citing a small minority of experts when an opposing majority consensus exists

 

Citing an opinion as authoritative
This one is fairly trivial and easy to spot. It occurs when you quote someone famous or cite their opinion while overemphasizing the person who said it. You may, for example, see a meme on Facebook that quotes George Washington and puts his name in enormous, bold caps. Doing that is really a type of inadvertent appeal to authority fallacy because you are essentially asserting that we need to accept the argument in this quote, or at the very least take it seriously, because of the person who said it. The problem is of course that no one is infallible, and even very intelligent people make mistakes. This is especially true when it is a quote about an opinion, philosophical view, etc. To be clear, you should obviously cite the source of the quote, but whether or not the quote should be trusted or taken seriously must be determined by the actual content of the quote, not the fame and reputation of the person who said it (also, on the internet half of the quotes are fake anyway, but that’s another problem entirely).

 

Citing people who aren’t actually experts
This is probably the most common occurrence of this fallacy, and it happens any time that you cite someone who is not actually an expert on the topic at hand. In fact, this mistake is so pervasive that many people actually refer to the appeal to authority fallacy as the inappropriate appeal to authority fallacy. Some cases of this are pretty easy to spot. For example, Jenny McCarthy pretends to be an expert on vaccines, and Vani Hari (a.k.a. the Food Babe) has deluded herself into thinking that she is an expert on nutrition, but neither of them has any real qualifications in science (i.e., they have no formal training on the topics they preach about, they have never conducted original research, they have never published a scientific paper, etc.). So if you reference them as an authority on a topic you are committing an inappropriate appeal to authority fallacy. To be clear, the fact that they aren’t experts doesn’t automatically mean that they are wrong or that you can reject everything that they say out of hand. Rather, it means that there is no a priori reason to think that they are right. In other words, you cannot simply defer to them as experts, because they are not experts.

A more common and insidious form of this fallacy occurs when you try to legitimize a view by citing pseudoexperts. These are people who do have qualifications in some tangentially related field, but are not actually experts on the specific topic at hand. Dr. Sherri Tenpenny and Dr. Mercola are classic examples of this. I frequently see anti-vaccers cite them in an attempt to validate their views. They assure me that their views must be legitimate and scientific because there are people with medical degrees (such as Dr. Tenpenny and Dr. Mercola) who agree with them. The obvious problem is that Dr. Tenpenny and Dr. Mercola are both osteopaths. They have absolutely no qualifications in immunology, epidemiology, or any other field that is relevant to vaccines. So even though they have medical degrees, they are not experts on vaccines. You see, in science, being an expert requires that you do original research on the topic that you are purporting to be an expert on. Neither Dr. Tenpenny nor Dr. Mercola has ever done any original studies on vaccines, and they frequently demonstrate a fundamental lack of knowledge about how such research is actually done (watch this video for a hilarious example).

This problem is, of course, not limited to anti-vaccers. Creationists love to do this by citing scientists who are also creationists. Dr. Raymond V. Damadian is one of the more famous examples because of his role in the development of MRI technology. The fact that he made important contributions to medical science does not, however, qualify him as an expert on evolutionary theory, and his views have absolutely no bearing on the legitimacy of evolution.

Climate change deniers employ a similar strategy. The fraudulent Oregon Petition is a perfect example of this. It claimed to have accrued the signatures of over 31,000 scientists who disputed human-caused climate change, but a quick examination of the petition reveals that many signatories were not “scientists” by any reasonable definition (multiple signatures came from veterinarians, for example), and even among the actual scientists, many signatures came from totally unrelated fields. In fact, it appears that only 39 of the signatures came from actual climatologists!

The point is simple: if you’re going to cite an authority, it needs to be someone who is actually an expert on the topic that is being discussed. Otherwise, you are committing a logical fallacy.

 

Using authority as a logical proof
So far, I have only been dealing with cases where the person being cited is not actually an expert on the specific topic being debated, but what about the cases where he/she is actually credentialed in that field? In this case, you’re welcome to cite them. Anyone who has ever looked at a scientific publication has no doubt seen a lengthy works cited section referencing many other researchers. As I will explain more in the next section, you can even construct a probabilistic argument around a strong consensus of experts. A probabilistic argument is one in which the conclusion is probably true. Unlike deductive arguments, the conclusions of probabilistic arguments are not absolutes, but otherwise, they follow the same rules as deductive arguments, and it is still illogical to reject the conclusion without an evidence-based reason for doing so. The key here is that you must always ensure that you are using a consensus as support for your position, not as proof of your position.

For example, roughly 97% of climatologists agree that we are causing the climate to change. That is an exceedingly strong consensus. Nevertheless, it would be fallacious of me to say, “97% of climatologists agree that we are causing climate change, therefore we are causing climate change.” This commits a fallacy because it is always possible that the consensus is wrong. It is, however, unlikely that such an enormous majority of expert climatologists are wrong about such a well studied phenomenon. Therefore, it is not fallacious of me to say, “97% of climatologists agree that we are causing climate change, therefore we are probably causing climate change.”

I will deal with this idea of deferring to a consensus more in the next section, but for now the point is that you can cite legitimate experts as long as you aren’t using them as proof that your position is correct or valid.

 


We defer to experts all of the time, but for some reason when it comes to science and certain medical topics (like vaccines) people suddenly think that they know more than the experts.

Citing a small minority of experts when an opposing majority consensus exists
The final variation of this fallacy is another very common one (it’s actually closely related to an inflation of conflict fallacy). It occurs when you cite a few credentialed authorities as support of your position even though the vast majority of experts disagree with them (note: this strategy is generally fallacious even if you are attempting to make a probabilistic argument). It is important to realize that no matter what crackpot position you choose to believe in, you can find someone, somewhere with an advanced degree who thinks you’re right. So the fact that you found an immunologist who thinks vaccines are dangerous or a climatologist who thinks that the sun is driving climate change does not automatically mean that those positions are correct or even valid.

You should always fact check whenever possible, regardless of the person making the claim. The problem is, of course, that as a layperson, it can be difficult or even impossible to adequately investigate some claims, and all of us have to defer to experts at least some of the time. No one can be an expert on everything. The question is then, how do we do this without acting illogically? In other words, how do we defer to experts without committing an appeal to authority fallacy?

A good rule of thumb is that you don’t need to be an expert to accept a consensus, but you do need to be an expert to reject one. In other words, your default position should always be the one held by the majority of experts in that field, especially if it is a very large majority. To be clear, it is always possible that the consensus is wrong. I’m not advocating that you view a consensus as irrefutable proof of a position. Rather, what I am arguing is that you, as a non-expert, should be very, very cautious about claiming that the majority of experts are wrong. To put this another way, how likely do you actually think it is that you figured out something that the majority of experts missed?

To give a simple illustration of this, imagine that I take my car to 100 mechanics, and 97 of them say that the transmission is the problem, and three of them say that it’s my engine. I know a fair amount about cars and I do almost all of my own mechanical work, but I am still far from an expert. So, how am I as a non-expert supposed to decide which mechanics to trust? Well, common sense tells us that it is more likely that the large majority of experts are right. Further, even though I am knowledgeable about cars, it would be absurdly arrogant and presumptuous of me to proclaim that I understood what was wrong with my car better than 97% of professional mechanics.
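The intuition behind trusting the majority can be made concrete with a short simulation (this is essentially the Condorcet jury theorem: if each expert is independently more likely than not to be right, a large majority verdict is almost always right). The 80% per-mechanic accuracy below is a hypothetical number assumed purely for illustration:

```python
import random

def majority_is_correct(n_experts: int, accuracy: float, rng: random.Random) -> bool:
    """Simulate n experts who are each independently right with probability
    `accuracy`; return True if the majority verdict is the correct one."""
    correct_votes = sum(rng.random() < accuracy for _ in range(n_experts))
    return correct_votes > n_experts / 2

rng = random.Random(42)
trials = 10_000
wins = sum(majority_is_correct(100, 0.80, rng) for _ in range(trials))
print(f"Majority of 100 experts correct in {wins / trials:.1%} of trials")
```

With 100 experts who are each right only 80% of the time, the majority verdict comes out correct in essentially every trial, which is why betting against 97 out of 100 mechanics is such a bad idea.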

I could, of course, give many other examples of this. If we go to several doctors with a problem and all or most of them tell us the same thing, we usually have no trouble accepting their diagnosis, because they’re experts. We defer to expert lawyers, contractors, mechanics, etc. all the time, but for some strange reason, when it comes to science, people suddenly feel empowered to reject the expert consensus and side with some internet quackery instead. This is a very dangerous thing to do. On topics like global climate change where roughly 97% of expert climatologists agree that we are causing it, it seems rather risky to side with the 3% who disagree with the consensus. Again, to be perfectly clear, I am not advising that you should always just blindly follow the popular view. You should always make every effort to learn as much about a topic as you can, but after you have carefully reviewed all of the evidence, if you have reached a different conclusion than the vast majority of credentialed experts, you should be very cautious and tentative about that conclusion. You may be right. That possibility always exists, but you should really think long and hard about the probability that a few hours on the internet allowed you to figure out something that was missed by hundreds of experts with years of experience.

The underlying reason for accepting a consensus relates back to the burden of proof. Remember, the person making the unparsimonious claim is the one who bears the burden. In other words, if you are going to claim that 97% of climatologists are wrong about climate change, every major medical organization in the world is wrong about vaccines, virtually every biologist is wrong about evolution, etc. you need some extraordinary evidence to back up that claim. This is why I stated earlier that you really need to be an expert before you reject a consensus. For example, I am not an archaeologist, and I do not need to be an expert on archaeology to accept the consensus that ancient Egyptians built the pyramids. I do, however, need to be an expert to say that the consensus is wrong and aliens did it. It is not valid for me to make that claim unless I have spent years studying hieroglyphics, examining remains, etc. To give another example, it’s one thing for someone who has spent their entire life doing research in biology to proclaim that they have evidence that discredits evolution. It’s another thing entirely for someone to spend a few hours on the internet, then proclaim that they have found evidence that nearly every biologist in the world is either ignorant of or has chosen to ignore. That is an extraordinary claim, and it requires extraordinary evidence. Further, it’s important to realize that the research and results of these outliers who disagree with a consensus are going to be scrutinized by the rest of the scientific community, and more often than not, that scrutiny reveals fundamental problems with the research.

To recap, a consensus is certainly not infallible, and you should do your best to learn as much about a topic as you can (always by using good sources), but if your conclusion is that the consensus is wrong, you should be cautious of that conclusion and carefully reexamine your evidence. It is possible that you’re right, but if you are honest with yourself, you’d have to admit that it seems unlikely that reading a few websites has endowed you with knowledge superior to what you would get from years of careful training and actual experience. So the burden of proof is on you to prove that the consensus is wrong, and you need some extraordinary evidence. Perhaps most importantly, you need actual evidence. The fact that a few experts agree with you does not make your position valid, and it is fallacious to base your argument on the fact that a handful of experts think that you’re right.


 

Posted in Rules of Logic | 4 Comments