## The Rules of Logic Part 3: Logical Fallacies

Perhaps the most common mistake that people make in debates is the use of logical fallacies. This occurs largely because people generally are not taught logical fallacies and, therefore, don’t recognize them when they use or see them. Knowing logical fallacies and being able to recognize them is, however, extremely important, because, as previously explained, the presence of even a single fallacy will completely destroy an argument and force you to reject it. So this post is my attempt to help my readers learn how to spot logical fallacies. What follows is a list of some of the most commonly committed fallacies. For each of them, I have attempted to provide examples to illustrate how they work (or, rather, fail to work). I have grouped these fallacies by “type.” These are not officially recognized distinctions (such as formal and informal fallacies); rather, they are groupings that I find useful for keeping track of how the various fallacies work. Many fallacies are very similar to each other and create the same basic problems. In fact, in many cases, distinguishing which fallacy was committed is almost impossible, and several fallacies are often committed simultaneously. Further, many fallacies are simply a specific type of some other fallacy. So, the way that I grouped these is simply an organizational scheme to help keep things straight.

Obviously, I did not explain every logical fallacy; there are a great many that I did not address. For one of the most complete lists of fallacies that I have ever seen, visit the Internet Encyclopedia of Philosophy.

Note: the examples are pulled from many different debates. Please keep in mind that if a logical fallacy is committed, you must reject the argument, not the conclusion. The examples include a few invalid arguments that are frequently used in support of positions that I actually agree with.

“Correlation/data analysis” fallacies
Affirming the consequent
Correlation fallacy
Denying the antecedent
Hasty generalization fallacy
Post hoc ergo propter hoc
Sharpshooter fallacy
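
Two of the fallacies in this list, affirming the consequent and denying the antecedent, are formal fallacies: their invalidity follows from the shape of the argument alone, so it can be checked mechanically. A minimal sketch of that check in Python (the helper names are my own, not standard terminology):

```python
from itertools import product

def implies(p, q):
    # Material conditional: "p -> q" is false only when p is true and q is false.
    return (not p) or q

def valid(premises, conclusion):
    # An argument form is valid iff no truth assignment makes every
    # premise true while the conclusion is false.
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # found a counterexample
    return True

# Modus ponens: P -> Q, P, therefore Q  (valid)
print(valid([implies, lambda p, q: p], lambda p, q: q))          # True

# Affirming the consequent: P -> Q, Q, therefore P  (invalid)
print(valid([implies, lambda p, q: q], lambda p, q: p))          # False

# Denying the antecedent: P -> Q, not P, therefore not Q  (invalid)
print(valid([implies, lambda p, q: not p], lambda p, q: not q))  # False
```

The counterexample in both invalid forms is the same: P false, Q true, which makes every premise true while the conclusion fails.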

ASSOCIATION FALLACIES

The fallacies that I am calling “association fallacies” occur whenever you argue that something is good/bad or true/false because of its relationship to something else, rather than its own merits.

Ad hominem fallacy

This occurs when someone attacks the people who hold a view, rather than the view itself. It is a very common fallacy in politics.

Example:

1. In the 70’s, scientists said we were causing global cooling
2. Today those same scientists say that the planet is warming
3. Since they were wrong in the 70’s, we shouldn’t trust them now
4. Therefore global warming isn’t happening

Even if the accusation were true (which it isn’t; see Peterson et al. 2008), it would in no way discredit global warming, because whether or not global warming is true is based on the facts about global warming, not the facts about the people who support global warming (note: this example is an ad hominem fallacy because it proposes that the same set of scientists were wrong in the 70’s; if instead the argument were worded such that scientists in general have been wrong before, then it would become a guilt by association fallacy).

Other common uses of this fallacy simply involve name calling. A great example of this is the “shill gambit.” This occurs in debates about vaccines and especially GMOs, when the anti-scientist attempts to dismiss their opponent by baselessly accusing them of being paid by pharmaceutical companies or Monsanto.

Example:

• Pro-GMO advocate: “There is no scientific evidence that GMOs are harmful. Here are a bunch of papers that show that they are safe.”
• Anti-GMO advocate: “Well you just think that because you are a shill for Monsanto.”

This response does nothing to discredit the science of GMOs. It simply attacks the person supporting them. Also, the shill response is nearly always an ad hoc fallacy because there is generally no logical reason to actually think that the pro-GMO advocate is being paid off.

Appeal to authority fallacy

This occurs when you argue that a position is true or valid because of the people who support it, rather than the facts of the position itself. It is essentially the opposite of a guilt by association fallacy. It can be either explicit or implicit. For example, when you see a quote on Facebook followed by the name of a well respected person and the name is in a huge font (e.g. “…” –BENJAMIN FRANKLIN) that is an implicit appeal to authority. It is implying that the quote should be taken seriously because it was Benjamin Franklin who said it. In reality, you have to judge the truth of any statement or claim based entirely on its own merits, not the merits of someone who supports it.

Explicit appeals to authority are more common in debates, and are usually used to justify a position that is held by the minority of professionals. For example, in debates with those who oppose vaccines, I frequently get comments like, “there are lots of doctors and scientists who agree that vaccines don’t work/aren’t safe, so my position is just as valid as yours.” The problem should be obvious. The truth of a claim is determined by the facts of the claim itself, not the credentials of the people who claim it. Further, no matter what crackpot position you hold, you can find someone with an advanced degree who agrees with you (Note: this example also commits an inflation of conflict fallacy).

To be clear, citing an authority is not always a fallacy, in fact, in many cases it’s a good thing, but you must ensure that the argument is focused on what someone found/argues, not the person who found/argued it. For example, when I say, “thousands of scientists from all over the world have tested vaccines and found that they work, therefore we have good reason to think that they work,” I am not committing a fallacy, because my focus is on the fact that they have been tested. In other words, I am not saying, “thousands of scientists say they work, so they work.” Rather, I am saying, “they have been tested thousands of times, and shown to work, therefore they work.” Referencing the fact that thousands of people have done these tests merely serves to demonstrate that the results are not the outcome of a few biased individuals.

Appeal to emotion fallacy

This fallacy can be subtle and hard to spot. It operates by evoking an emotional response rather than giving a logical reason for believing something. To be clear, a logically valid argument can evoke an emotional response, it’s only a fallacy when you are relying on the emotions rather than logic.

Example:
(anti-vaccine parent) “We aren’t going to let the government or Big Pharma experiment on our children. Parents like us, not the government, know what’s best for our children.”

There is no logic (or truth) to any of that, but it evokes a powerful emotional response among the weak-minded. People want to think that they know what is best for their kids, and they tend to mistrust the government. So this argument plays into their emotions, fears, and desires rather than presenting logic and facts.

This fallacy is often put on full display by politicians who use patriotic catchphrases to sway voters rather than actually giving a concrete reason why they are the best candidates for the job. Watch for it during the next election; I’m betting you’ll see it a lot.

Another common form of this fallacy is a scare tactic used by anti-scientists. For example, there was a big fuss a few years ago when some concentrated fluoride spilled, damaged some concrete, and required a hazmat team. Anti-scientists went crazy and made all sorts of allegations like, “if it’s corrosive enough to eat concrete it can’t possibly be safe!” Think carefully about this claim for a second. At a quick glance, it seems valid enough, but upon deeper reflection, it’s obviously flawed. What spilled was concentrated fluoride. What is in drinking water is extremely diluted fluoride. Clearly the fact that the concentrated stuff is dangerous does not mean that the diluted stuff is. Almost anything is dangerous when concentrated. So why did this seem valid at first? Quite simply, it evoked an emotional fright response, and that emotional response clouded a rational examination of the situation.
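
A back-of-the-envelope calculation makes the dilution point concrete. The numbers below are illustrative assumptions (not figures from the incident): fluoridation additives are commonly shipped at roughly 20–25% strength, while drinking water is typically fluoridated at well under one part per million.

```python
# Illustrative numbers (assumptions, not from the original post):
concentrate_ppm = 230_000   # a ~23% solution, expressed in parts per million
tap_water_ppm = 0.7         # a typical target level for fluoridated drinking water

dilution_factor = concentrate_ppm / tap_water_ppm
print(f"Tap water is roughly {dilution_factor:,.0f}x more dilute than the spill")

# The hazard of the concentrate tells you almost nothing about a
# solution several hundred thousand times weaker.
```

Under these assumptions, the spilled material is over 300,000 times more concentrated than tap water, which is why its corrosiveness says nothing about water fluoridation.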

To give one more example from the fluoride incident, another common claim was that the cleanup crew was wearing “the very same type of hazmat suits being worn by workers at the Fukushima Dai-Ichi nuclear plant in Japan.” This is a fascinating ploy. People generally have an irrational fear of nuclear energy, so by bringing up Fukushima, people automatically make the jump in their minds, and, once again, emotion clouds logic. This fluoride spill was in no way, shape, or form as dangerous as an issue at a nuclear plant, and bringing up Fukushima adds nothing to the argument from a logical standpoint; all it does is induce fear. Therefore, it is an appeal to emotion fallacy.

Appeal to nature fallacy

This occurs whenever you argue that something is good/better than something else because it is natural or that it is bad/worse than something else because it is not natural.

Example:

1. Herbal remedies are natural
2. Therefore they are better than modern medicine

Example:

1. Vaccines are unnatural
2. Natural immunity is natural
3. Therefore natural immunity is better than vaccines

Example:

1. Genetically modified foods are unnatural
2. Therefore genetically modified foods are bad

The problem is that simply being natural (which is somewhat arbitrarily defined) does not automatically make something good or better than something else. For example,

1. Death from cancer is natural
2. Radiation and chemotherapy are unnatural
3. Therefore we should all accept death from cancer

The problem is pretty obvious: there are tons of natural things that clearly aren’t good. Arsenic, cyanide, hurricanes, earthquakes, the plague, polio, etc. are all natural, but they are also obviously bad. You must establish that something is good/better than something else on merits other than the fact that it is natural (the inverse is also true: you cannot argue that something is bad/worse than something else simply because it is unnatural).

Appeal to popularity, tradition, novelty, force, poverty, etc.

I have elaborated on three of the “appeal to…” fallacies, but there are many others that all take the same basic form of arguing that something is good/bad or right/wrong because of its association with something else rather than its own merits. For example, appeal to popularity (also known as the bandwagon fallacy) asserts that something is right/good because it is popular or wrong/bad because it is unpopular. The appeal to novelty fallacy is precisely the opposite. It argues that something is good because it is novel, unique, or the underdog position (anti-vaccers commit this frequently). Appeal to tradition argues that something is right/good because it is tradition (e.g., “Japan should be allowed to continue their barbaric whaling practices because it’s their tradition”). To see the problems with the appeal to tradition fallacy, I recommend that you listen to Weird Al’s “Weasel Stomping Day”; it illustrates them rather nicely. I won’t take the time to go through all of the other “appeal to…” fallacies, but you should look them up on your own.

Guilt by association fallacy

This is really a special type of ad hominem fallacy. This fallacy argues that something is wrong because it is associated with something that is morally wrong. It usually takes the following form:

1. A is believed by B
2. B is morally wrong
3. Therefore A is morally wrong.

Example:

1. Hitler believed in eugenics, gun control, evolution, etc.
2. Hitler was morally wrong
3. Therefore eugenics, gun control, evolution, etc. are morally wrong

The fact that someone immoral believed something does not automatically mean that the belief itself is immoral. For example, Hitler also believed that we breathe oxygen, but neither a belief in oxygen nor the act of breathing oxygen is immoral.

This argument can also be given as:

1. People who believe A also believe B
2. B is morally wrong
3. Therefore A is morally wrong.

Example:

1. Hitler believed in eugenics and mass extermination
2. Mass extermination is morally wrong
3. Therefore eugenics is morally wrong

A final version of this fallacy occurs when you simply associate something with someone/something that you don’t like, rather than someone/something immoral. For example, I once heard a conservative state, “There is only one piece of evidence I need for knowing whether or not global warming is true. Al Gore believes it, so it can’t be true” (I have heard many other conservatives use various forms of this argument, all with the same faulty conclusion). This argument is clearly absurd. Regardless of what you might think of Gore’s political views, you can’t assume that he is wrong about scientific issues. If that argument worked, then the following argument would also work, “There is only one piece of evidence I need for knowing whether or not gravity is true. Al Gore believes it, so it can’t be true.” Clearly the argument is flawed. Also, note that even within politics, you can’t assume that a position is wrong just because of the people who hold it. It is an extremely common trend in politics for conservatives to automatically assume that all liberal positions are wrong simply because they are liberal positions, and for liberals to automatically assume that all conservative positions are wrong simply because they are conservative positions. This type of thinking is illogical and harmful to everyone.

“MAKING CRAP UP” FALLACIES

My name for this group pretty much says everything that you need to know about these fallacies: they all make stuff up that you wouldn’t accept unless you were already convinced of the conclusion that they are arguing for (creationists commit these fallacies ad nauseam).

Ad hoc fallacy

Ad hoc fallacies can be very tricky to spot because they are not arguments in and of themselves. Rather, they are faulty responses to an argument. They occur when someone is faced with an argument that discredits their position, and they respond by making something up that serves no purpose except to patch the hole in their view.

Example: (this is a conversation between a supposed psychic who claims to be able to read minds and a skeptic)

Skeptic: “If you’re psychic then tell me what number I am thinking of.”
Psychic: “My powers don’t work in the presence of skeptics.”

In this example, the fallacy is pretty obvious. The response that their powers don’t work around skeptics is clearly a ridiculous explanation, and it’s an explanation that I would never accept unless I was already convinced that the person was a psychic. Further, it makes it impossible to discredit them no matter how fraudulent they actually are (a lack of falsifiability is a hallmark of ad hoc fallacies).

These fallacies are not, however, always that clear cut. So, here is a simple way to tell if an ad hoc fallacy has been committed. Ask yourself the following three questions.

1. Did they just make something up?
2. Is their claim based on evidence/is there a good reason to accept this claim other than that it solves the problem in their argument?
3. Would someone who wasn’t already convinced of their view accept that claim?

If the answer to #1 is “yes” then an ad hoc fallacy has pretty obviously been committed. This is, in fact, the most common way that ad hoc fallacies occur. In the heat of a debate, one person makes some crap up off the top of their head to try to save their position. #2 is a little bit trickier. If there is no evidence or logical reasoning for a claim that is used in a debate, then it is an ad hoc fallacy, but just having some evidence in support of it doesn’t automatically mean that it isn’t a fallacy, which is where #3 comes in. This is a simple thought experiment, but it works pretty well. If you have to believe in the position that you are trying to save before you would believe the claim that is supposed to save it, then an ad hoc fallacy has definitely taken place.
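
The three-question checklist above amounts to a simple decision procedure, which can be sketched as follows (a toy encoding; the parameter names are hypothetical):

```python
def looks_ad_hoc(made_up: bool,
                 evidence_based: bool,
                 neutral_party_would_accept: bool) -> bool:
    """Encode the three-question checklist from the text.

    A claim is flagged as an ad hoc fallacy if it was invented on the
    spot, or if it lacks independent evidence, or if someone not already
    committed to the conclusion would reject it.
    """
    if made_up:
        return True                        # question 1
    if not evidence_based:
        return True                        # question 2
    return not neutral_party_would_accept  # question 3

# "My powers don't work around skeptics" fails all three questions:
print(looks_ad_hoc(made_up=True, evidence_based=False,
                   neutral_party_would_accept=False))  # True
```

Note that the third check is the decisive one: a claim with some supporting evidence can still be ad hoc if only a true believer would accept it.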

Example:

A young earth creationist is faced with evidence for an old earth and responds, “The speed of light, radiometric decay, the formation of varves, the formation of ice cores, and coral growth rates were all faster in the past, thus, they do not prove that the earth is old.”

Every last one of these claims is an ad hoc fallacy. There is no actual evidence to support those claims, and I would never accept them unless I was already convinced that the earth was young. (On a side note, creationism relies on every single one of those claims, so by proving that they are ad hoc fallacies, I have just proved that creationism isn’t logically valid.)

In order to illustrate this fallacy in its fullest form, let me give one final, specific example from a debate that I had with a representative from Answers in Genesis. I presented the creationist with the following argument:

1. Varves form in lakes as alternating layers of dark sediment from the spring and light colored sediment from the winter
2. We know from observing modern lakes that varves in the center of lakes only form one set of layers per year
3. We have fossilized varves from the centers of lakes that contain hundreds of thousands of layers
4. We can independently confirm that the layers represent different seasons because the dark layers are full of diatoms which only bloom in the spring/summer, and the light layers do not contain diatoms
5. We can tell that these layers were formed by the lakes because they are not found on the land surrounding the lakes
6. Therefore, these lakes must be several hundred thousand years old

The AiG representative responded with, and I quote, “There must have been some unknown mechanism during the flood that was able to sort the layers by particle type and algae.”

Let’s run that response through the three checks for an ad hoc fallacy.

1. Did he make something up?

Yes, he invented a magical and totally unknown mechanism that defies all logic and common sense. It requires a flood to sort both diatoms and sediment into alternating layers at a rate of a new layer every few seconds, and this magical mechanism only occurred directly over the lake beds. It’s an absurd stretch of the imagination to say the least.

2. Is there any evidence to support his claim?

No, there is no evidence anywhere to suggest that such a mechanism could exist.

3. If we presented my argument to someone who knew nothing about the age of the earth, would they accept the creationist response?

No, they would never accept that explanation. They would universally agree that the lakes must be old.

Note: There is a lot of confusion about ad hoc fallacies. Many websites make statements like, “ad hoc fallacies are not actually fallacies in the strictest sense.” This has caused many to think that they aren’t actually fallacies at all, which isn’t the case. What those statements refer to is a technicality within the structure of logical fallacies. Ad hoc fallacies are not among the fallacies known as “formal fallacies,” but that doesn’t make them any less wrong. Any philosopher will tell you that all fallacies are equally bad. Terms like “formal” and “informal” are just used by logic nuts like me who like to categorize things.

The second point of confusion comes from the fact that ad hoc reasoning is not necessarily an ad hoc fallacy. Many valid scientific questions begin with ad hoc reasoning. For example, suppose that little was known about the growth rates of corals, and we were having a discussion about whether or not they proved that the earth was old, and you said, “Perhaps they were capable of growing faster in the past. I’m going to conduct a study on their maximum growth rates to see if that is a viable explanation.” You would not have committed a fallacy. It only becomes a fallacy when it is used in a debate as if it’s already been tested. So, if you had said, “Well, they must have grown faster in the past, so they don’t prove that the earth is old,” that would have been an ad hoc fallacy, because you used an accelerated growth rate as a fact without having any evidence to support the notion that corals can have prolonged, accelerated growth rates. Using that argument today (as creationists do) is even more erroneous because we know from lab experiments that even at their fastest possible growth rates, our coral reefs could not have formed in the past 4,000 years.

Circular logic fallacy

This is very similar to, and often confused with, a question begging fallacy. The difference is that in question begging, one of the premises relies on the conclusion, whereas in circular logic, one of the premises IS the conclusion. Question begging is actually much more common than true circular logic, but most people only know the term “circular logic” and use it even when it’s actually a question begging fallacy.

Generally speaking, in circular logic the conclusion and the premise are worded differently to hide the circularity.
Example:

1. The Bible is infallible and totally accurate
2. The Bible says that it is true
3. Therefore, the Bible is true

It is truly remarkable that one could think that this is a good argument, yet Christians use it all the time. The fact that the Bible says that it is true is meaningless as evidence for its truth because you wouldn’t believe it when it said that it was true unless you already thought that it was true.

Fallacy of the false dilemma

This occurs when you misuse the following syllogism:

1. Either A or B
2. Not A
3. Therefore B

or

1. Either A or B
2. A
3. Therefore not B

This syllogism is fine, in fact it can be quite powerful, IF and only IF you can verify premise 1. It becomes a false dilemma when there are actually more options than just A or B. In other words, you create a dilemma that doesn’t actually exist.
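
The difference between a legitimate disjunctive syllogism and a false dilemma can be sketched in a few lines (the helper function is hypothetical): the inference “not A, therefore B” only goes through when A and B really exhaust the possibilities.

```python
def infer_remaining(possibilities, ruled_out):
    """Given the set of live possibilities and the ones that have been
    ruled out, return what must be true, or None if the inference fails."""
    remaining = possibilities - ruled_out
    return remaining.pop() if len(remaining) == 1 else None

# Premise 1 verified: the options really are just A and B.
print(infer_remaining({"A", "B"}, {"A"}))        # "B" -- the syllogism works

# False dilemma: a third option C was quietly ignored.
print(infer_remaining({"A", "B", "C"}, {"A"}))   # None -- "not A" no longer forces B
```

The fallacy, in other words, lives entirely in premise 1: the inference rule itself is sound, but it is being fed an incomplete list of options.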

Example:

1. Either the Bible is true or evolution is true
2. The Bible is true
3. Therefore evolution is not true

This common argument completely ignores the possibility that they are both true and it is only the young earth interpretation of the Bible that is at fault.

Example:

1. You’re either a liberal or a conservative
2. You’re not a conservative
3. Therefore you are a liberal

This once again ignores the possibility of a middle ground. There are plenty of people who think the liberals are right on some things and the conservatives are right on others; as such, they are neither liberal nor conservative.

Inflation of conflict fallacy

This is an oddly specific, yet extremely common fallacy in which you attempt to show that there is a debate about an issue for which there is no formal debate. Unsurprisingly, anti-vaccers, creationists, and global warming deniers all do this frequently. The prevalent claim that, “there are plenty of climatologists who don’t think that anthropogenic global warming is happening, therefore we don’t need to trust the supposed scientific consensus” is an excellent example of this fallacy. The reality is that roughly 97% of climatologists accept anthropogenic global warming. There is no serious debate among scientists about it. To put 97% a different way, if we filled a conference room with 1,000 climatologists, only 30 of them would deny anthropogenic global warming. Do you have any idea how few topics there are on which 970 out of 1,000 scientists agree? It is an absurdly strong consensus.

Similarly, anti-vaccers rant endlessly about the doctors and scientists who agree with them, and I frequently hear them claim that there is significant debate among scientists. Once again, the debate only exists in their minds.

To make this fallacy more clear, think about the following: no matter what crackpot notion you believe, you will be able to find someone, somewhere, with an advanced degree who thinks you’re right. There are plenty of Ph.D.s and M.D.s who think that smoking is safe, gravity isn’t true, heliocentrism isn’t true, HIV doesn’t cause AIDS, etc. In all of these cases, it is obvious that the fact that there are a few dissenting voices does not mean that there is significant debate on the topic. There will ALWAYS be someone who disagrees with the mainstream view; that doesn’t mean that there isn’t a consensus.

To be clear, I am not saying that a consensus means that something is automatically true. I am merely pointing out that it is a fallacy to claim that there is significant debate about topics which are not actually seriously debated (honestly it’s just downright lying to claim that these debates exist).

Note: the astute reader will notice that most instances of the inflation of conflict fallacy are accompanied by appeal to authority fallacies.

No true Scotsman fallacy

This is really a special case of the question begging fallacy. It takes the following form,

1. (debater 1) “No true A does B”
2. (debater 2) “C is a true A and does B”
3. (debater 1’s response) “C cannot be a true A because he does B”

(this is also used in the inverse, i.e., “all A do B,” “C is A and does not do B,” “C cannot be A because he doesn’t do B”)

That probably seemed confusing, so let me give a silly example:

1. (gamer 1) “All true gamers drink Mountain Dew”
2. (gamer 2) “I’m a true gamer and I don’t drink Mountain Dew”
3. (gamer 1’s response) “You can’t be a true gamer because you don’t drink Mountain Dew”

Basically what this fallacy does is set up a claim that is impossible to refute, regardless of whether the claim is true. You see, it is impossible to refute gamer 1’s claim because any time that you cite a gamer who doesn’t drink Mountain Dew (if any actually exist), gamer 1 will simply claim that the person in question is not a true gamer because they don’t drink Mountain Dew. To give a few real-world examples of this fallacy:

1. (debater 1) “No true Christian scientists support evolution”
2. (debater 2) “What about all these Christian scientists who support evolution (insert list of theistic evolutionists)?”
3. (debater 1’s response) “They are not true Christians because they support evolution.” (yes, it pains me to say that I have actually had people give me this argument, as well as an identical argument for global warming)

It should be noted that this fallacy does not always use people. For example:

1. (anti-vaccer) “There have been no good, independent studies that have tested vaccine safety”
2. (pro-vaccer) “Here are several hundred good, independent studies that tested vaccine safety”
3. (anti-vaccer’s response) “Those studies are wrong/shouldn’t be trusted/were funded by ‘Big Pharma'”

Notice, this isn’t as obviously a no true Scotsman fallacy, but a close look at its structure reveals its true nature. The anti-vaccer’s response does not explicitly cite claim 1, but unless they give supporting evidence that ALL of those studies shouldn’t be trusted (which they never do), then it is obvious that they have decided that claim 1 is true, and nothing will persuade them otherwise. This example is also an ad hoc fallacy.
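
The unfalsifiable structure of this fallacy can be sketched in code (a toy model using the Mountain Dew example from above): counterexamples are filtered out by the very definition they are supposed to test, so the claim can never fail.

```python
def no_true_scotsman(all_gamers):
    """'All true gamers drink Mountain Dew.'  Gamers who don't are
    reclassified as not-true-gamers, so the claim is checked only
    against the people who already satisfy it."""
    true_gamers = [g for g in all_gamers if g["drinks_mountain_dew"]]
    # Every remaining "true gamer" drinks Mountain Dew -- by construction.
    return all(g["drinks_mountain_dew"] for g in true_gamers)

# No matter what data we feed it, the claim "survives":
print(no_true_scotsman([{"drinks_mountain_dew": False}]))  # True
print(no_true_scotsman([{"drinks_mountain_dew": True},
                        {"drinks_mountain_dew": False}]))  # True
```

Because the filter and the claim are the same condition, no possible input can return False, which is exactly the lack of falsifiability described above.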

To conclude the section on this fallacy, I want to point out that this logical form is not a fallacy if you can support premise 1. For example,

1. (me) “‘Creation researchers’ are not scientists because they do not follow the standards of science.”
2. (creationist) “Person X is a creation researcher and a scientist.”
3. (my response) “Person X is not a scientist because they do not follow the standards of science.”

This is not a fallacy because I did not make up premise 1. The standard definition of a scientist specifies that to be a scientist you must follow the scientific method (most germane to this example, you must go from evidence to a conclusion and you cannot invoke a supernatural explanation). Creation researchers do not follow the scientific method and are, therefore, not scientists.

On a side note, a variation of the no true Scotsman fallacy was responsible for a lot of bad science in the early days before science had obtained the more formalized structure that it has today.

Reductio ad absurdum fallacy

This fallacy occurs when you take an argument and stretch it to an absurd conclusion in order to show that the argument itself is flawed. It can also be used such that the argument itself becomes absurd without precisely stating an absurd conclusion. Perhaps the most famous use of this fallacy occurs when a child wants to do something because “all my friends are doing it” and the parent responds with, “If all your friends were jumping off a cliff, would you?” Clearly, wanting to go to a movie, wear a certain type of clothing, etc. is not analogous to jumping off a cliff. The conclusion that the parent reached (i.e., that logical consistency in wanting to be like one’s friends requires the desire to jump off a cliff if your friends are doing it) is an absurd stretch of the argument. There is no logical inconsistency between wanting to be like your friends when they are doing something fun and not wanting to be like them when they are committing suicide (note: the kid’s argument is actually an appeal to popularity fallacy, so both sides are logically flawed in this example).

More serious forms of this fallacy often occur in debates and are frequently associated with straw man fallacies. For example, I recently watched a creationist video in which they stated the following (mostly paraphrasing): “if I said that a princess kissed a frog and it turned into a prince, it would be a fairytale, but that is ‘exactly what evolution teaches'” (yes, those were their exact words). “Evolution teaches that we all started out as an amoeba, and that amoeba eventually evolved into a frog, and that frog eventually evolved into us. The magic here is billions and billions of years.” This is obviously ridiculous. Evolution teaches nothing of the kind. The video took the theory, simplified it, and stretched it to an absurd degree in order to present it as being the same as believing that kissing a frog will turn it into a prince. This combined both a straw man fallacy and a reductio ad absurdum fallacy.

The catch with this fallacy is that reductio ad absurdum logic is not always fallacious. In fact, it is one of the oldest and most powerful types of argument, and I use it frequently. For example, anti-vaccers, creationists, and global warming deniers all frequently like to state that scientists have been wrong in the past, therefore we shouldn’t trust them now. The problem (other than the ad hominem fallacy) is that if we follow this argument through to its logical conclusion, then we should never trust scientists about ANYTHING, including gravity, the germ theory of disease, that we breathe oxygen, etc. This is not a fallacy because it actually is the necessary outcome of the argument. In other words, I did not stretch the argument to an absurd conclusion; I showed that the argument itself actually does result in an absurd conclusion. It’s only a fallacy when the proposed outcome of the argument does not follow necessarily from the argument. If you look through my other examples in this post, you should be able to spot additional instances of non-fallacious use of reductio ad absurdum logic (e.g., my response to the argument that global warming isn’t true simply because Al Gore believes it).

Slippery slope fallacy

In many ways, this is a special type of reductio ad absurdum fallacy. This fallacy occurs when you claim that event A will inevitably lead down a slippery path to B, and B is bad, therefore you should never do A. Note: this is only a fallacy when the causal relationship is in question, and you argue either that A will always cause B, or that A should always be avoided because it might cause B.

Example:

• You shouldn’t smoke marijuana because it is a gateway drug and if you smoke it, you will start doing “hard drugs.”

Yes, many people start with marijuana, but there are plenty of people who never switch to “hard drugs.” Further, it is quite impossible to prove that smoking marijuana causes people to do other drugs. This is also a correlation fallacy (correlation fallacies and slippery slope fallacies are often connected). People who smoke marijuana are probably people who are more likely to do other drugs anyway, so the fact that they started with marijuana does not mean that they wouldn’t have eventually used other drugs even if they never used marijuana.

Example:

• You shouldn’t play violent video games because playing violent video games will make you more violent in real life.

Once again, plenty of people play violent video games and aren’t violent in real life. Also, as with the marijuana example, showing a correlation between violent video games and actual violence does not demonstrate that the video games were the cause of the violence. Violent people will naturally be more attracted to violent video games, so there may be a correlation without a causal relationship.

Example:

• Evolution teaches that man is just an animal and, as such, has no moral obligations. Therefore, teaching evolution will result in more violence, rape, murder, theft, etc.

There are numerous things wrong with this argument (such as a straw man fallacy), but my point right now is simply that there is no obvious causal relationship between teaching evolution and immorality (last I checked, humans were deplorable creatures long before evolution was proposed).

Other examples frequently involve worst case scenarios that probably won’t play out. For example, the arguments that instituting health care in the US will destroy the entire economy and sink the whole world into a great depression are usually slippery slope fallacies. Could it make the economy worse? Yeah, it’s possible. Will it completely obliterate what’s left of our economy? Probably not.

The media (and occasionally even scientists) unfortunately commit this fallacy with regard to global warming. Most strikingly, I once heard a reporter argue that the food shortage will become so bad that we will resort to cannibalism. Will global warming kill many people and make life on this planet harder? Almost definitely; we have tons of evidence to support that. Will it make me eat my neighbor? Probably not.

Note: the key here is not that the conclusion is impossible, but simply that the conclusion cannot be proved or strongly supported.

Straw man fallacy

This is one of the most common fallacies there is. It can be deliberate, but it is often unintentional and results from ignorance on a given topic. It occurs when you present a weak or misrepresented version of an opponent’s argument, then discredit that misrepresentation and claim to have discredited your opponent’s view. In other words, someone believes A. You discredit A’, then claim to have discredited A. This fallacy is everywhere. Political debates are rife with straw men. Similarly, creationists use this fallacy constantly. It appears in arguments about global warming, vaccines, atheism, theism, etc. Almost every time that I have ever looked into any debated topic, I have found people on one or both sides committing straw man fallacies. Here is the interesting thing though: it’s really easy to avoid using this fallacy in your arguments. All you have to do is properly inform yourself about the other position before you attack it, and make sure that your attacks are targeted at the actual position, not a misrepresentation of it. Anytime someone commits this fallacy, they have just demonstrated either that they don’t know what they are talking about, because they can’t even state their opponent’s position accurately, or that they are dishonest and are deliberately presenting an incorrect form of the argument.

Example:

1. Radiometric dating relies on uniformitarianism.
2. Uniformitarianism says that everything in nature is constant.
3. Radiometric dating claims that the earth has been here for billions of years.
4. If the earth has been here for billions of years, and uniformitarianism is true, then the oceans would be much saltier than they are because of the rate at which salt is currently entering the ocean.
5. Therefore either uniformitarianism is not true, or the earth has not been around for billions of years.

This is one of many very common straw man arguments that I hear creationists use. Uniformitarianism does not state that everything is constant. It states that the processes that shape the earth today are the same processes that shaped it in the past, and that some things are constant (such as the speed of light). No scientist would ever claim that the rate of salt entering the ocean is constant. That is an absurd assumption and is not required by uniformitarianism. Notice, in this example (and many others) the straw man fallacy makes one of the premises untrue, thus breaking two of the requirements for a good logical argument (note: this particular example also commits a reductio ad absurdum fallacy).

Question begging fallacy

This fallacy occurs when one or more of the premises of an argument relies on the conclusion such that you would not accept the premise unless you had already accepted the conclusion. This fallacy is very often accompanied by an ad hoc fallacy.

Example:

1. The Bible says that God exists
2. The Bible is true
3. Therefore God exists

I hear Christians use this argument disturbingly frequently, but the fallacy should be obvious. No one would accept premise 2 unless they had already accepted the conclusion. Good luck convincing an atheist that premise 2 is accurate. This is an extremely clear cut question begging fallacy. Note: if you compare this example with my example in the circular logic section, the difference between the two becomes apparent. In circular logic the conclusion actually is one of the premises, whereas in question begging one of the premises merely relies on the conclusion.

Note: one of my pet peeves is people saying, “it begs the question” when they mean, “it raises the question.” Only say that something has “begged the question” if it has committed a question begging fallacy.

CORRELATION/DATA ANALYSIS FALLACIES

All of these are essentially cases of confusing correlation and causation, where you establish a relationship between two statements, then make a faulty claim about that relationship.

Affirming the consequent

This is a very common fallacy that is similar to a post hoc ergo propter hoc fallacy. It takes the following form:

1. If A then B
2. B
3. Therefore A

The problem is that just because B will always occur when A occurs does not mean that A will occur whenever B occurs.

Example:

1. If it is snowing then it is cold outside
2. It is cold outside
3. Therefore it is snowing

See how this works? For it to snow, it must be cold, but the fact that it’s cold doesn’t mean that it is snowing.

Example: (this is a creationist argument often used by Ken Ham)

1. If there was a world-wide flood, we would expect to see “billions of dead things, buried in rock layers, laid down by water, all over the earth”
2. We see “billions of dead things…”
3. Therefore, there was a world-wide flood

Do you see the problem? If there was a flood, then we would see “billions of dead things…” but just because we see “billions of dead things…” does not mean that there was a flood. In fact, evolution also predicts that we will see “billions of dead things…” so the fact that we see them doesn’t help either case. Note: I have often explained that the strength of a scientific theory is measured by its ability to make predictions, but it should be noted that these need to be exclusive predictions. In other words, they need to be predictions of something that would only come true if that theory were right. That is, they take the form:

1. If and only if A then B
2. B
3. Therefore A

This is not a fallacy because B can only occur if A occurs, therefore the fact that B occurred means that A must also have occurred.
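Because argument forms like these are just propositional logic, you can check their validity mechanically by brute-forcing the truth table. Here is a minimal sketch in Python (the helper names `implies` and `entails` are my own, not from any library):

```python
from itertools import product

def implies(a, b):
    # Material implication: "if a then b" is false only when a is true and b is false.
    return (not a) or b

def entails(premises, conclusion):
    # An argument form is valid iff the conclusion holds in every
    # truth assignment that makes all of the premises true.
    for a, b in product([True, False], repeat=2):
        if all(p(a, b) for p in premises) and not conclusion(a, b):
            return False  # counterexample found: premises true, conclusion false
    return True

# Affirming the consequent: "if A then B; B; therefore A" -- invalid
print(entails([lambda a, b: implies(a, b), lambda a, b: b],
              lambda a, b: a))  # False

# The "if and only if" version: "A iff B; B; therefore A" -- valid
print(entails([lambda a, b: a == b, lambda a, b: b],
              lambda a, b: a))  # True
```

The counterexample the script finds (A false, B true) is exactly the snow example: it is cold outside, but not snowing.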

Correlation fallacy

The existence of this fallacy confuses me because everyone knows about it, yet people continue to commit it. This fallacy occurs whenever you say, “A and B are correlated, therefore A causes B.” If asked, almost everyone would affirm that correlation does not necessarily mean causation, so why do people still commit this fallacy?

Perhaps the most common use of this fallacy is,

1. The increase in autism rates is correlated with the increase in vaccinations
2. Therefore, vaccinations cause autism

The people who make this claim tend to ignore arguments such as,

1. The increase in autism rates is correlated with the growth of the organic food industry
2. Therefore, organic food causes autism

Obviously, both arguments are absurd. The correlations (which actually don’t exist at all when you take into account the changes in the diagnosis of autism) do not mean or even suggest that vaccines or organic food are causing autism.

Note: There are cases where correlation can be used to demonstrate causation, but these only occur when you have either demonstrated that nothing else could be the cause, or provided additional evidence that proves that A causes B. In other words, the correlation itself is never enough to prove causation, you must supply additional evidence. For example, I once had someone accuse me of committing this fallacy when I used a correlation between education status and average pay rate to demonstrate that having more education usually results in higher incomes. As I explained to my accuser, this is not a fallacy because if you look at the jobs, the ones on the low pay end (for the most part) had no education requirements, but as you moved up the pay grade, the education requirements became higher. In other words, the fact that higher education caused the higher pay rates was proved by the fact that the highest paying jobs required the highest education levels.
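The point that two rising trends will correlate strongly even with no causal link between them is easy to demonstrate with a quick simulation. This sketch (in Python, with made-up numbers chosen purely for illustration) generates two independent upward trends and computes their correlation:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# Two quantities that simply trend upward over 20 "years" -- say, a
# hypothetical diagnosis rate and hypothetical organic food sales.
# Neither causes the other; they share only the passage of time.
years = range(20)
series_a = [5 + 2 * t + random.gauss(0, 2) for t in years]
series_b = [100 + 30 * t + random.gauss(0, 20) for t in years]

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The correlation is close to 1.0 even though there is no causal link.
print(round(pearson(series_a, series_b), 2))
```

Any two series that mostly just increase over time will behave this way, which is why a bare correlation between two growing trends tells you essentially nothing about causation.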

Denying the antecedent

This is in many ways the inverse of affirming the consequent. This fallacy takes the following form:

1. If A then B
2. Not A
3. Therefore not B

Example:

1. If it is snowing, then it is cold outside
2. It is not snowing
3. Therefore it is not cold outside

This seems simple enough, but many people get confused because the following very similar syllogism is valid:

1. If A then B
2. Not B
3. Therefore not A

Example:

1. If it is snowing, then it is cold outside
2. It is not cold outside
3. Therefore it is not snowing

The thing to keep in mind here is that premise 1 establishes that B will always occur when A occurs. So, if B does not occur, then A cannot have occurred; but if A does not occur, that does not automatically mean that B does not occur, because there may be more than one cause of B. Note that, as with the affirming the consequent fallacy, a fallacy does not occur if the syllogism takes an “if and only if” structure. In other words, the following would be logically valid:

1.  If and only if A then B
2. Not A
3. Therefore not B

This is valid because B can only occur if A occurs, therefore the fact that A did not occur means that B also did not occur.
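As with affirming the consequent, you can verify which of these forms are valid by exhaustively checking every truth assignment. A small Python sketch (the `valid` helper name is my own):

```python
from itertools import product

def valid(premises, conclusion):
    # Valid iff the conclusion is true in every truth assignment
    # to (A, B) where all of the premises are true.
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

if_then = lambda a, b: (not a) or b  # material "if A then B"

# Denying the antecedent: "if A then B; not A; therefore not B" -- invalid
print(valid([if_then, lambda a, b: not a], lambda a, b: not b))  # False

# Modus tollens: "if A then B; not B; therefore not A" -- valid
print(valid([if_then, lambda a, b: not b], lambda a, b: not a))  # True
```

The failing case for denying the antecedent is A false, B true: it is not snowing, yet it is still cold, because cold has causes other than snow.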

Hasty generalization fallacy

This occurs when you take a limited number of observations and generalize to a larger group. Cherry picking data and stereotyping are often hasty generalization fallacies, and this fallacy is very often associated with an ad hominem attack and can occur simultaneously with a sharp-shooter fallacy and a guilt by association fallacy.

Example:

1. Richard Dawkins is an evolutionary biologist
2. Richard Dawkins is an atheist
3. Therefore all evolutionary biologists are atheists

or

1. Richard Dawkins is an evolutionary biologist
2. Richard Dawkins is an atheist
3. Therefore evolution is inherently atheistic

Example:

1. Al Gore is a liberal
2. Al Gore accepts global warming
3. Therefore global warming is a liberal position

These examples may seem silly, yet there are many people who argue that evolution is inherently anti-God because many of its outspoken proponents are atheists, or that global warming is a liberal conspiracy because people like Al Gore say that it is happening. These particular examples also commit a guilt by association fallacy.

This fallacy also frequently occurs in debates about scientific matters (usually it occurs in science as a form of cherry-picking data).

Example:

1. An outbreak of measles occurred in a vaccinated population
2. Therefore vaccines are not effective.

Obviously the fact that a few outbreaks occur among the vaccinated does not mean that vaccines do not work. You have to look at the big picture and compare disease rates among the vaccinated and unvaccinated. This particular example is also a sharp-shooter fallacy.

Post hoc ergo propter hoc fallacy

The English translation is, “after this, therefore because of this.” This fallacy is essentially a faulty assumption of causality, but differs slightly from a strict correlation fallacy. It generally takes the following form:

1. A occurred then B occurred
2. Therefore A caused B

The problem should be obvious, just because one event followed another does not mean that they are causally related. This differs from the correlation fallacy in that the correlation fallacy states that two things occur together (are correlated) whereas post hoc ergo propter hoc specifies that one preceded the other. In other words, correlation fallacies deal with trends, and post hoc ergo propter hoc fallacies deal with isolated events.

Example:

1. I used the bathroom this morning
2. Immediately afterwards, it began raining
3. Therefore, my urination caused the rain

That example may seem silly, and indeed it is, but people commit this fallacy all the time (especially when trying to use anecdotal evidence in scientific debates), and it is not always that easy to see that the logic is flawed.

Example:

1. I filled my gas tank this morning
2. Half an hour later, my truck broke down
3. Therefore, the gas was bad and caused my truck to break down

Unlike my previous example, the conclusion may actually be true. It is possible that the gas was bad, and that’s what caused my truck to break down, but even if the conclusion is true, the argument is not logically valid (remember that an argument can have a correct conclusion and still be a bad argument). There are many possible reasons why my truck broke down. Maybe my alternator died, or my fuel pump died of old age, or a belt broke, or my engine threw a rod, etc. The point is that bad gas is only one among many possibilities, and the argument does not present enough evidence to conclude that bad gas was the problem. Now, if I also include the information that my fuel filter had been changed a month ago, and I checked it after my truck broke down and it was completely clogged with crap, and the fuel filter was the only thing wrong with my truck, and multiple other people who filled up at that station that morning had the same exact problem, then I can conclude that bad gas was most likely the cause. In this particular example, it is still not possible to be 100% certain, but by presenting those pieces of additional evidence, I can make a very strong argument that does not commit a fallacy (note: this is what is called a probabilistic argument; it is meant to show that something is probably true, not that it is definitely true).

I frequently run into this fallacy in debates with anti-vaccine advocates.

Example:

1. I got vaccinated
2. Shortly afterwards, I developed the flu, autism, a cold, etc.
3. Therefore, the vaccine caused my illness

This syllogism is no more correct or logically valid than my bathroom example. They are both equally flawed. There are many causes for all of the maladies that vaccines get accused of causing, and it is not possible to tell whether or not vaccines were the cause in any particular case. To be clear, that is not the same as saying that we can’t tell whether or not vaccines are capable of causing those illnesses. We can use carefully controlled studies to see whether or not it is possible for vaccines to cause a given illness, but (with the exceptions of soreness and immediate allergic reactions) it is never possible to attribute a particular case of that illness to vaccines.

This fallacy is also very common in the alternative “medicine” movement. People take some “cure” then get better and automatically assume that the “medicine” they took cured them. This is however, logically flawed. They could have been cured by the placebo effect, or perhaps their disease had simply run its course and their body recovered all on its own. You cannot conclude that it was the alternative “medicine.”

Sharpshooter fallacy

This fallacy gets its name from an illustration that demonstrates how it works. Imagine that someone fired an arrow or bullet at the side of a barn. Then, after firing, they painted a bull’s eye around whatever spot they happened to hit and proceeded to proclaim that they were a “sharpshooter.” Obviously they weren’t a sharpshooter; they simply created the illusion of accuracy by painting the target after firing the shot (I personally think that a better analogy than the traditional one that I just gave would be someone painting a bull’s eye on a barn, shooting a hundred arrows at the barn, then quickly removing all of the arrows except the one that hit the bull’s eye). In actual debates, this fallacy typically occurs as a form of cherry-picking data, where you present an isolated result or relationship and proclaim that it is what would be expected if you were right, when, in fact, there are other results that discredit your position (another version of this fallacy can occur in statistical analysis if you don’t properly control your family-wise type one error rate, but that is beyond the scope of this post).

One of the most common occurrences of this fallacy takes place when anti-vaccers point out individual instances of vaccines failing. For example,

1. A measles outbreak occurred in community X even though everyone was vaccinated
2. This is what we would expect if vaccines didn’t work
3. Therefore vaccines don’t work

The problem here is premise 2. Vaccines don’t work 100% of the time, this is a fact that no one denies. So we would expect rare outbreaks in vaccinated communities even if vaccines work (this example is also an affirming the consequent fallacy). What this argument ignores is how rare these outbreaks are, especially compared to the frequency of outbreaks among unvaccinated communities. This argument only presents a small subset of the data while ignoring all of the contrary evidence.

Another way to think about this fallacy is that if you look for something isolated and specific in a huge data set, you are likely to find it just by chance. Take, for example, creationists’ claims that the Bible predicted various aspects of modern science. For example, Job 38:16 says, “Hast thou entered into the springs of the sea?” Creationists hop all over this because springs in the ocean were not known to science until many years later. So they say, “see, the Bible made scientific claims before scientists knew about them, clearly it is a reliable source of scientific information.” The problem is that nearly all of these supposedly scientific claims are written in poetic, figurative styles, and the Bible is full of poetic language. With all the thousands of poetic verses, it is hardly surprising that a handful of them were worded such that they happened to match up with modern science. More importantly, this argument ignores the countless verses that clearly contradict modern science, such as references to the “four corners of the earth” or various verses that suggest that the earth does not move. Creationists correctly claim that these verses are figurative or poetic, which, of course, makes their cherry picking obvious. You cannot claim that every verse that happens to match science is a factual claim that proves that the Bible is a reliable source of scientific information (despite the obvious poetic language) while simultaneously claiming that every verse that happens to contradict modern science is poetic. It’s a pretty clear cut sharpshooter fallacy.

One final example involves psychics. Every once in a while, I run into someone who tries to convince me that psychics are real, and as evidence, they present an example where a particular “psychic” made several consecutive and accurate predictions. The problem is that with all the thousands of self-proclaimed psychics in the world, it is inevitable that a few of them will occasionally get things right just by chance. This argument is a sharpshooter fallacy because it only presents the data that appears to support psychics while totally ignoring all of the contrary evidence (i.e., all the countless times that “psychics” have been wrong).
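The psychic example is easy to quantify. If thousands of people make predictions purely at random, simple arithmetic guarantees that some of them will compile impressive streaks. A short Python simulation (with made-up numbers chosen purely for illustration) shows this:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

# Suppose 10,000 self-proclaimed "psychics" each make 10 yes/no
# predictions by pure guessing (a coin flip each time).
n_psychics, n_predictions = 10_000, 10

# Count how many get ALL ten predictions "right" by luck alone.
perfect = sum(
    1
    for _ in range(n_psychics)
    if all(random.random() < 0.5 for _ in range(n_predictions))
)

# By pure chance we expect about 10_000 * (1/2)**10 ~= 9.8 of them
# to achieve a perfect streak.
print(perfect)
```

Pointing at those few lucky streaks while ignoring the thousands of failures is precisely the sharpshooter move: the “target” is painted around the hits after the fact.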

MISCELLANEOUS FALLACIES

These are the fallacies that didn’t really fit into any of the other categories.

Argument from ignorance

This fallacy is very common but can be very confusing. It basically argues the following:

1. There is no current explanation for A
2. Therefore an explanation for A does not exist

The problem is pretty obvious. Just because something can’t currently be explained doesn’t mean that an explanation doesn’t exist (this is, in fact, a type of false dilemma fallacy because it excludes the possibility that the explanation exists and simply hasn’t been found yet). At one point scientists couldn’t explain the movement of stars or planets, the nature of light, the structure of an atom, etc. In fact, at one point there were no scientific explanations at all. So clearly a lack of a current explanation doesn’t mean that something doesn’t have an explanation.

Another version of this fallacy takes the following form,

1. We do not know of any A
2. Therefore no A exist

Example:

1. We do not know of any extraterrestrials
2. Therefore no extraterrestrials exist

This fallacy can also occur in an inverted form where you claim the following:

1. We have not disproved A
2. Therefore A is true

Example:

1. We have not proved that extraterrestrials don’t exist
2. Therefore extraterrestrials do exist

Perhaps the most common variation of this fallacy is what has often been called, “God of the gaps arguments.” The history of creationism is littered with hundreds of these arguments, and they take the following form:

1. Scientists don’t have an explanation for A
2. Therefore God caused A

Time and time again creationists have proposed these only to be proven wrong. For example:

1. There are no fossils of pre-turtles. That is, all fossil turtles are fully formed modern turtles.
2. If evolution were true, we should see lots of intermediates between lizard-like reptiles and turtles
3. Since these intermediates have not been found, turtles cannot have evolved
4. Therefore God must have created them in their present form

2008 was a good year for the evolutionary history of turtles. Scientists found several different species of pre-turtles. That is, we found lizard-like reptiles in various stages of evolving a turtle-like shell, and they were in the locations and layers where we would have expected pre-turtles to be found. Score one for evolution. This is, of course, not an isolated event. Read some old creationist literature; they made tons of arguments from ignorance that have since been refuted. The surprising thing is that they haven’t learned from their mistakes (actually, I guess that’s not too surprising). Their modern literature is full of tons of new God of the gaps arguments.

So far, everything that I have said about this fallacy seems pretty cut and dried, but if you start to think about it for too long, an obvious problem emerges. One of the best debate tactics, and one of my personal favorites, is to ask a question that you know your opponent cannot answer; but, if it is an argument from ignorance fallacy to say that something is wrong just because an explanation is not forthcoming, then doesn’t that mean that this debate tactic is logically invalid? No, it doesn’t, but this is where things can get confusing. I need to raise an important distinction between “no current explanation” and “cannot be explained.” For example, we cannot currently explain strong nuclear forces, but there is no good reason to think that we will not someday find an explanation for them. So it would be an argument from ignorance fallacy to say that we cannot explain them, therefore God is responsible (many creationists do exactly that). In contrast, consider the following and ask yourself whether or not it’s fallacious:

1. If vaccines cause autism, vaccinated kids would have it more frequently than unvaccinated kids
2. Unvaccinated children have autism just as frequently as vaccinated kids
3. Anti-vaccers have no explanation for this
4. Therefore, vaccines do not cause autism

I use this fact frequently in debates, where I ask anti-vaccers to explain to me why the autism rates are equal between the two groups, and I argue that if vaccines cause autism, they must provide an explanation for this. So the question is, did I just commit a fallacy? No, and here is why. If something is wrong, then ipso facto there is no explanation for it. If that seemed confusing, don’t worry, it was. Let me try to illustrate:

1. If plants require CO2, then plants that are grown without CO2 will die
2. Plants that are grown without CO2 die
3. People who argue that plants do not need CO2 cannot explain this (as far as I know, no such person actually exists)
4. Therefore plants need CO2

Once again, this is not a fallacy, but why? The key here is that the lack of explanation is peripheral to the argument. In other words, if you were to pull out premise 3, the argument still works just fine. In fact, for any factual argument, you could construct a premise that says that the opposing group can’t explain something. This is what I mean when I say that if something is wrong, there is inherently a lack of explanation. The key here is that something needs to be discredited on grounds other than a current lack of explanation. There is no conceivable explanation for why plants would die without CO2 if they don’t need CO2. It’s not that there is no current explanation, it is that there cannot be an explanation. Similarly, there is no conceivable explanation for why autism rates are the same among vaccinated kids and unvaccinated kids if vaccines cause autism. So, in debates, when I ask a question that cannot be answered, it is to draw attention to the fact that it is not conceptually possible to explain the problem because of the evidence against it. It only becomes a fallacy when the argument boils down to, “we don’t know, therefore it’s wrong.”

Fallacy fallacy

I included this one mostly because it has a fun name (I actually explained it in an earlier post). It occurs whenever you reject the conclusion of a faulty argument rather than rejecting the argument itself. It is always possible to make a bad argument for something that is actually true, so if an argument is bad, you reject the argument, but you must provide additional evidence for rejecting the conclusion.

Non sequitur fallacy

This was explained in the first post of this series and occurs whenever the structure of the argument is such that the conclusion does not follow necessarily from the premises. A large portion of fallacies are actually types of non-sequitur fallacies. Consider one of my examples from the post hoc ergo propter hoc section:

1. I used the bathroom this morning
2. Immediately afterwards, it began raining
3. Therefore, my urination caused the rain

The structure of this argument is clearly post hoc ergo propter hoc, but that structure always results in a situation in which the conclusion does not follow necessarily from the premises, which means that it is also a non-sequitur fallacy. In fact, all of the “association” fallacies that I described are types of non-sequitur fallacies, and many other fallacies, such as guilt by association, are also types of non-sequitur fallacies.

Red herring fallacy

This fallacy can essentially be thought of as dodging an uncomfortable question. It usually occurs when someone on one side of a debate asks a question to their opponent, and the opponent answers a different question, or rambles around the question without ever actually addressing it. If you’re not paying attention though, it can appear that they did in fact answer the question.

Example: (this example is from a debate between Bill Nye the Science Guy and a climate change skeptic, I am paraphrasing their comments for brevity).

• Bill Nye, “Do you agree that Venus is warm because of the CO2 in its atmosphere?”
• Skeptic, “Well Venus is so far away that I don’t think that we can accurately determine how much CO2 it has had historically or how that correlated with temperature.”

See what the skeptic did? Bill asked a very simple question, but the answer to that question is extremely damaging to the anti-global warming position (it’s undeniable that Venus’s temperature is caused by CO2). So, rather than answer the actual question, the skeptic made up some crap about the history of CO2 on Venus that has nothing to do with Bill’s question. I was very pleased to see that Mr. Nye responded by pointing out his fallacy by name on public TV.

A variation of this fallacy is often used in various contexts by Christians. It’s what I am going to call the “because God” response. This occurs when, in a debate that has theological implications, someone makes an argument/asks a question that points out a flaw or discrepancy in the opponent’s view, and the opponent responds with something to the effect of, “well that seems inconsistent to us, but God is all powerful and much smarter than us and, ‘his ways are not our ways.'” It’s a cheap cop-out response that is in no way logically valid (technically, it also commits an ad hoc fallacy). You can’t just invoke God whenever your position is shown to be faulty.
Example:

• Me, “We know that the earth is old because of varves, fossil records, coral reefs, etc.”
• Creationist, “Well it may seem like the earth is old, but God is smarter than the scientists that found that evidence, so I’m just going to believe what he said.”

This argument is obviously invalid. You can’t simply dodge the fact that your entire position was just totally destroyed.

Stacking the deck

This fallacy is a debate tactic more than a flawed argument. It is basically just cherry picking. It occurs when you list multiple pieces of evidence (or supposed evidence) for your position and ignore all of the evidence to the contrary. Anti-vaccers are famous for this. They will ramble on endlessly about individual cases where a vaccine failed, or instances where someone experienced a symptom that may or may not have been induced by the vaccine, yet they totally ignore the hundreds of peer-reviewed papers that show that vaccines are safe and effective. Note again, this is a flawed debate structure more than a flawed argument, but stacking the deck generally involves creating a lengthy string of sharpshooter fallacies.

### 6 Responses to The Rules of Logic Part 3: Logical Fallacies

1. Simply because something MAY be only a correlation does not make it ONLY a correlation.
The functional and mechanistic basis of the neurological damage of ethyl mercury to growing nerve cells is extremely well known. In fact, it is better characterized than the actions of many drugs on the market today. If you look at the data, it is difficult to see how increasing the dose of ethyl mercury from 75 ug to 575 ug before the age of ten cannot cause damage. Note especially the studies that show that ethyl mercury is preferentially deposited in the brain in monkeys at a greater percentage of dose than methyl mercury. Previous studies showing that it is cleared from the blood did not examine the brain deposition. I cannot see why otherwise logical and sober scientists would ever propagate the so-called “scientific” studies done by and for the CDC, especially since Dr. William Thompson came out so blatantly re: the pressure to hide data showing association of increased autism with thimerosal-containing vaccines. Not one person I know of who is against neurotoxins in their vaccines is against vaccines. They are anti-neurotoxin, not anti-vaccine. The entire ‘anti-vaxxer’ moniker is a great example of a straw man argument. Read Kennedy’s book Thimerosal: Let the Science Speak for Itself (2014). It cites hundreds of scientific studies that clearly demonstrate neurotoxicity of thimerosal, and it correctly admonishes people who use epidemiological studies for manipulating the results until they get the result they want. This includes at least three of the most often-cited studies, one on which Thompson was lead author.
I have written a chapter on this, and your knowledge of logic should cause you grave concern when you read the emails entitled “It Just Won’t Go Away,” from a researcher to the CDC, who repeatedly analyzed the data from their study for two years until they finally managed to adjust for risk factors (low birth weight, short gestational period, low income, education level of the mother) that were all themselves likely to place people at increased risk of harm. These factors were also highly collinear, so his adjustments for each of them would have removed the same ultimate source of variation many times over – you know this would be wrong to do in a multivariate analysis. I would encourage you to start looking into these hard-to-explain behaviors, which under any other circumstances would not be tolerated.

Like

• Fallacy Man says:

I have a few questions for you.

1. Are you aware that as of 2001, thimerosal (ethyl mercury) was removed from all vaccines except for a few strains of the flu vaccine?

2. Are you aware that removing thimerosal in 2001 has had no impact whatsoever on neurological problems of any kind? If thimerosal was responsible for autism or some other health problem, then surely we would have seen a drop once it was removed.

3. Where is your evidence that it is dangerous in the doses that are currently present in flu vaccines? Carefully note the “in the current doses” part. No one denies that thimerosal is toxic in a high enough dose, but even water is toxic in a high enough dose. Where is the evidence that it is toxic in the exceedingly low doses that are present in flu vaccines?

4. Do you oppose vaccines other than the flu vaccine? If so, why?

Like

2. shane paleyan says:

explanation of the following fallacies: 1. sleight-of-hand fallacy? 2. ethical fallacy? 3. statistical fallacy? 4. miscellaneous fallacy?

Like

• Fallacy Man says:

I won’t lie to you and pretend to know more than I really do, so to be totally honest, I am unfortunately not familiar with either the “sleight-of-hand fallacy” or the “miscellaneous fallacy,” so I can’t help you with those two.

The “ethical fallacy” is a very specific fallacy in philosophy which basically occurs when you argue that free actions are limited to moral actions. It’s a bit complicated, so I’ll just direct you to this site, which does a good job explaining it. http://www.informationphilosopher.com/freedom/ethical_fallacy.html

To the best of my knowledge, there is no single “statistical fallacy.” Rather, there is a wide range of statistical fallacies, which basically occur anytime you misuse statistics. A common example is claiming that you have proved something with statistics when, in fact, statistics can only show what is most likely true, rather than what definitely is true. Confusing correlation and causation is also a type of statistical fallacy. There are many others, but most of them are specific to particular types of tests, so it would take a book to explain them all.
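As a small, hedged illustration of the correlation/causation confusion mentioned above (the variable names and numbers here are invented purely for the example), a few lines of Python can simulate two quantities that correlate strongly only because a hidden third factor drives both of them:

```python
# Illustration (invented numbers): ice cream sales and drownings both rise
# with temperature, so they correlate strongly even though neither one
# causes the other. The hidden confounder (temperature) creates the link.
import random

random.seed(42)

temperature = [random.uniform(0, 35) for _ in range(200)]  # hidden cause
ice_cream_sales = [2.0 * t + random.gauss(0, 3) for t in temperature]
drownings = [0.5 * t + random.gauss(0, 2) for t in temperature]

def pearson(x, y):
    """Plain Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream_sales, drownings)
print(round(r, 2))  # strong positive correlation, yet no causal link
```

The correlation here is real, but concluding “ice cream causes drownings” from it would be exactly the fallacy described above.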

Liked by 2 people

3. Dolloch says:

Terrific site! About the Sharpshooter Fallacy – in the classic analogy as I had heard it, it was not a single shot but a grouping, so that the cluster was in the bullseye, making the error the difference between accuracy and precision and relating it to the clustering illusion. I have also heard it called the Texan Sharpshooter Fallacy, so it could be a special case, or simply a different name in the version I originally read.

Keep up the good work!

Liked by 1 person

• Fallacy Man says:

Hi, it is called both the sharpshooter fallacy and the Texas (or Texan) sharpshooter fallacy. The terms are synonymous, so either is equally correct. You are also correct that there are several types of sharpshooter fallacies, and some of them occur basically in the way that you have just described: the shooter fires multiple shots, then paints a bull’s eye around the largest cluster. So there are multiple ways that sharpshooter fallacies can occur, but they all suffer from the same basic problem (i.e., they focus on the evidence that seems to support your position while ignoring the evidence that seems to refute it). In some cases (such as the one you described) it does become a bit more technical, because you are correct that the sharpshooter fallacy is often related to clustering illusions and can be a serious problem in statistics.
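The cluster-hunting version of the fallacy can be sketched in a few lines of Python (all numbers here are invented for illustration): scatter purely random “shots” on a wall, then search for the densest cluster after the fact. Some cluster always stands out, which is exactly why painting the bull’s eye around it afterwards proves nothing:

```python
# Sketch of the Texas sharpshooter / clustering illusion (invented numbers):
# fire random shots at a 10x10 wall, then hunt for the tightest cluster.
# The post-hoc "bull's eye" always holds far more shots than chance would
# predict for a circle chosen *before* shooting.
import random

random.seed(7)

shots = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(100)]

def shots_in_circle(center, radius, points):
    """Count how many points fall inside a circle of the given radius."""
    cx, cy = center
    return sum(1 for (x, y) in points
               if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2)

# Post-hoc search: try every shot as a candidate bull's eye center and
# keep whichever circle captures the most shots.
best_center = max(shots, key=lambda p: shots_in_circle(p, 1.0, shots))
best_count = shots_in_circle(best_center, 1.0, shots)

# Under pure randomness, a radius-1 circle picked in advance would hold
# about area_ratio * n shots:
expected = 100 * (3.14159 / 100)  # roughly 3 shots

print(best_count, round(expected, 1))
```

Because the circle was chosen after looking at the data, the “impressive” cluster is guaranteed to beat the pre-registered expectation, just as cherry-picked evidence always looks compelling.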

Like