Do we need more studies on vaccines, GMOs, climate change, etc.?

I frequently encounter people who state that, “I’m not anti-vaccine/GMO, I just think that we need more studies” or “we need more research before we take major action on climate change.” I have, however, noticed that whenever someone declares, “I’m not X,” they usually end the statement with some pathetic justification for why they are in fact X, and that is definitely the case in this situation. The cry for more studies on vaccines, GMOs, etc. is nearly always hypocritical and stems from a willful ignorance about just how many studies there actually are. The reality is that topics like vaccines have been so well studied that they have achieved the status of settled science (please actually read the linked post before berating me over that term). So, the problem isn’t that there aren’t enough studies; rather, the problem is that people refuse to read or accept the hundreds of studies that we already have. To be fair, I have occasionally encountered people who asked for more studies out of honest ignorance rather than willful ignorance, and those people quickly retracted their statements once I directed them to the veritable mountain of published literature. That type of ignorance is fine. There is nothing wrong with not knowing something, then updating your view when presented with evidence, but in my experience, those people represent a tiny minority, and most of the people who demand more studies are doing so out of willful ignorance.

Note: Before I present some examples of this flawed excuse for denialism, I want to make it absolutely clear that I am not suggesting that we should be doing less research or spending less on science. I am a scientist, and like most scientists, I spend a minimum of 60 hours a week collecting and analyzing data, writing papers, etc. So obviously I place a high value on scientific research, and I think that we need more of it. The amount that most governments invest in research is pathetic (granted, I’m clearly not objective on that issue). So, I’m not saying that we need less research, but I am saying that there are certain topics that have been so well studied that we should move on and focus our efforts on real questions.

The autism scare is the perfect embodiment of this situation. Anti-vaccine parents continue to insist that we need more research on the link between vaccines and autism, when the reality is that there is no link. Study, after study, after study, after study, after study has failed to find any link between vaccines and autism. In fact, we have dozens of these studies including a massive meta-analysis with over 1.2 million children which failed to find any significant difference in autism rates between the vaccinated and unvaccinated. That is one of the largest sample sizes that I have ever seen, and as a general rule, the larger the sample size, the more certain you can be of the result (I explained the evidence in more detail here).

Now, if you’re an anti-vaccer, you’re probably thinking, “fine, there are studies, but they were all paid for by Big Pharma,” but you’re wrong. Many of those studies (including the meta-analysis) had no conflicts of interest. Further, even a recent study that was funded by an anti-vaccine group failed to find any evidence that vaccines cause autism. Finally, you have to evaluate each paper individually and present actual evidence that they are biased/flawed. You can’t just blindly accuse all of them of being bought off.

In the spirit of openness and honesty, I will acknowledge that if you dig around, you can find a few studies which have suggested that vaccines cause autism, but these are nearly always poorly designed studies that are riddled with problems. Further, most of them are correlation studies that cannot establish causation,  and the few studies that actually made proper comparisons all had tiny sample sizes (you can find details about most of the papers here and here). Sometimes you get false results, just by chance, but the odds of that happening are much lower with larger sample sizes, and when numerous large studies all agree with each other (as is the case for vaccines) it would be absurd to reject them in favor of a handful of tiny, methodologically flawed studies.
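To put some numbers on that sample-size point, here is a quick toy simulation (my own illustration, not taken from any of the studies cited; the 1.6% event rate and the group sizes are arbitrary assumptions). Two groups are drawn from the same population, so the true rates are identical by construction, and any difference we observe is pure chance:

```python
import random
import statistics

def mean_chance_difference(n, p=0.016, trials=500, seed=42):
    """Draw two groups of size n from the SAME population (event rate p)
    and return the average absolute difference in observed rates.
    The true rates are identical, so any difference is pure chance."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(trials):
        rate_a = sum(rng.random() < p for _ in range(n)) / n
        rate_b = sum(rng.random() < p for _ in range(n)) / n
        diffs.append(abs(rate_a - rate_b))
    return statistics.mean(diffs)

# The chance difference shrinks roughly as 1/sqrt(n): small studies
# routinely produce sizable spurious "effects," large ones do not.
for n in (100, 1000, 10000):
    print(f"n = {n:>6}: average chance difference = {mean_chance_difference(n):.4f}")
```

With only 100 children per group, a couple of chance cases can easily double the apparent rate in one group; with hundreds of thousands per group (as in the 1.2 million-child meta-analysis), a chance difference of that size is essentially impossible.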

My point in all of this is simply that the supposed link between vaccines and autism has been studied so many times by so many researchers that we are extremely confident that the link is imaginary and we should move on. Nevertheless, anti-vaccers continue to insist that we need more studies; therefore, every year more studies on vaccines and autism are conducted, but that’s absurd! We know that vaccines don’t cause autism, but we don’t fully understand what actually causes it (though genetics seem to be important), nor do we know how to cure it or even effectively treat it, let alone prevent it. That is what we should be studying. We should be trying to understand its real cause, and we should be looking for ways to actually help the people that have it rather than pouring money down the toilet looking for an answer that we already have, especially when the group that is demanding the studies is never going to accept the results of those studies. Are we really naive enough to think that study number 100 will convince them when the past 99 haven’t? If a sample size of 1.2 million isn’t enough to persuade you, then nothing will ever be good enough for you. That is why the claim that we need more studies is nearly always hypocritical. You can’t sincerely claim that there aren’t enough studies while simultaneously willfully ignoring all of the studies that we actually have.

Note: I want to be clear that there are many researchers studying the real causes of autism and potential treatments, but my point is that every year money and countless man hours get spent doing yet another study on vaccines and autism, and that time and money would be better spent elsewhere.

If we expand the situation beyond autism, we find the same pattern across vaccine issues. Anti-vaccers ardently insist that there aren’t enough studies despite the fact that there are literally thousands of studies. We’ve looked for relationships between vaccines and SIDS (Hoffman et al. 1987; Griffin et al. 1988; Mitchell et al. 1995; Fleming et al. 2001; Vennemann et al. 2007a; Vennemann et al. 2007b), asthma (Kramarz et al. 2000; Offit and Hackett 2003; Grabenhenrich et al. 2014), allergies (Koppen et al. 2004), general health (Schmitz et al. 2011), etc. You name it, we’ve done it. Vaccines are probably the most well studied topic in medical history, and if you claim that they haven’t been well studied, you are simply displaying your own ignorance.

To be clear, I obviously think that any new vaccines should be rigorously tested before being released to the general public (which they are, btw), and I have no problems with doing research on any novel concerns that arise if there is good justification for them. However, it is a pointless waste of time and money to continue to study topics for which we already have very clear and well established answers. Further, you absolutely cannot justify opposing vaccines on the basis of a lack of studies because in reality, there is a plethora of studies.

Moving beyond vaccines, we find the exact same story for GMOs. In fact, I personally encounter this argument more often for GMOs than for vaccines. People tell me all the time that we need more research before eating or growing GMOs, but is it really true that they haven’t been properly studied? If you are one of the people who thinks that it is, then let me ask you this: how many studies would be enough to convince you? A few dozen? A few hundred? How about 1,700? Would that be enough? Because we have well over that. This review from 2013 examined 1,783 studies and failed to find any evidence that GMOs are dangerous for us or the environment (Nicolia et al. 2013). Say it with me: there are over 1,700 studies on the safety and environmental impacts of GMOs. Further, several hundred more studies have been conducted since that review, and they have consistently found that GMOs are no worse for us or the environment than traditional crops (in some cases they are better). So please, don’t sit there telling me that we need more studies unless you can also give me a logically and scientifically valid reason why you reject all of the 1,700+ studies that we have already done (you should also check out this massive review looking at the health of livestock before and after the introduction of GMO feed). Also, just to be clear, over half of those studies have no ties (financial or otherwise) to agricultural companies (details and sources here).

Finally, let’s look at climate change. On this topic, people are prone to claim that we don’t have enough evidence to warrant action, but that’s once again absurd. We have extremely clear evidence that we are causing it, and thousands of papers from numerous fields of study have confirmed the results. We carefully tested all of the known drivers of climate change, and no combination of natural factors can explain the current warming. The only way to explain the warming is to include our greenhouse gas emissions in the analyses (Meehl et al. 2004; Wild et al. 2007; Lockwood and Frohlich 2007, 2008; Lean and Rind 2008; Foster and Rahmstorf 2011; Imbers et al. 2014). More details and sources here, here, and here.

To be fair, it is true that we don’t know exactly how much we are causing it to change, or exactly what will happen in the future, and we should continue to study those topics; however, we have plenty of data to be extremely confident that our actions are causing the bulk of the current climate change, the changes are already having negative impacts for us, and the changes will continue unless we modify our behavior. On those key points, there is no serious debate among scientists. So the claim that we should wait for more data before we take action is misguided and dangerous. Further, in my experience, this claim is nearly always made by people who completely deny climate change and will never accept the results of any study that opposes their preconceived views. So once again, the claim is disingenuous.

But what if…

At this point, inevitably, someone is getting ready to make a comment along the lines of, “but what if there are things that we missed? What if there is a problem with GMOs, vaccines, etc. that we haven’t found yet? Science isn’t perfect and it is arrogant to think that we know everything.”

I agree that science isn’t perfect, and if scientists actually claimed to know everything, I would agree that they were being arrogant, but they don’t claim to know everything. Rather, they simply make conclusions based on all available evidence, and when that body of evidence is extensive, those conclusions can be reached with a high degree of confidence. Look, it is always possible that we missed something. This is true for absolutely any technology that has ever been tested. As a result, it would always be possible to make this argument. No matter how many studies we’ve conducted, it is still possible that we missed something. Therefore, the only rational approach is to study something up to the point that we are as confident as we ever could be in the conclusion, and for things like the safety of vaccines and genetic engineering, we’re there. In fact, we crossed that line long ago.

Let me ask you this: if around 2,000 studies on GMOs isn’t enough, then what would be? How many studies does it take? There has to come a point at which you acknowledge that we have studied a topic so thoroughly that it is exceedingly unlikely that our conclusions are wrong, and if you ask me, that line occurs long before 2,000 studies.

This is especially true for things like vaccines and GMOs where the known benefits are great. We know, for example, that vaccines save millions of lives each year. So given that known benefit, it makes absolutely no sense to oppose them out of the slight possibility that we’ve missed some unknown danger. For things like climate change, the same concept applies, but the situation is somewhat reversed. In other words, all available evidence shows that the consequences of not acting will be dire, so saying that we shouldn’t take action because of the extremely slight chance that we are wrong is incredibly foolhardy.

Conclusion

In summary, do we need more studies on vaccines, GMOs, climate change, etc.? Yes, of course we do, but we need to be studying the things that are actually unknown rather than pandering to people who will never accept any study that disagrees with their biases and preconceptions. Replication is certainly important in science, and we should try to replicate any important results, but once a result has been consistently corroborated over and over again by numerous studies, we should move on. We should be focusing on how to improve vaccines and make vaccines for more diseases rather than producing yet another study on vaccines and autism. We need more research on making GMOs that provide vitamins and economic benefits to developing countries, and less research on whether or not the fundamental technology is safe. We should continue to study the climate, but we shouldn’t wait for future studies before taking action. In short, we should be studying new and marvelous things rather than repeating something that we have already done hundreds of times in the vain hope that people are actually reasonable and will be willing to change their views when presented with one more study.

Literature Cited

  • Hviid et al. 2003. Association between thimerosal-containing vaccine and autism. JAMA 290:1763–1766.
  • Andrews et al. 2004. Thimerosal exposure in infants and developmental disorders: a retrospective cohort study in the United Kingdom does not support a causal association. Pediatrics 114:584–591.
  • Destefano et al. 2004. Age at first measles-mumps-rubella vaccination in children with autism and school-matched control subjects: a population-based study in metropolitan Atlanta. Pediatrics 113:259–266.
  • Fleming et al. 2001. The UK accelerated immunization programme and sudden unexpected death in infancy: case-control study. BMJ 322:822.
  • Foster and Rahmstorf 2011. Global temperature evolution 1979–2010. Environmental Research Letters 7:011002.
  • Gadad et al. 2015. Administration of thimerosal-containing vaccines to infant rhesus macaques does not result in autism-like behavior or neuropathology. Proceedings of the National Academy of Sciences 112:12498–12503.
  • Grabenhenrich et al. 2014. Early-life determinants of asthma from birth to age 20 years: a German birth cohort study. Journal of Allergy and Clinical Immunology 133:979–988.
  • Griffin et al. 1988. Risk of sudden infant death syndrome after immunization with the Diphtheria–Tetanus–Pertussis vaccine. New England Journal of Medicine 319:618–623.
  • Hoffman et al. 1987. Diphtheria-Tetanus-Pertussis immunization and sudden infant death: results of the national institute of child health and human development cooperative epidemiological study of sudden infant death syndrome risk factors. Pediatrics 79:598–611.
  • Imbers et al. 2014. Sensitivity of climate change detection and attribution to the characterization of internal climate variability. Journal of Climate 27:3477–3491.
  • Jain et al. 2015. Autism occurrence by MMR vaccine status among US children with older siblings with and without autism. Journal of the American Medical Association 313:1534–1540.
  • Koppen et al. 2004. No epidemiological evidence for infant vaccinations to cause allergic disease. Vaccine 22:3375–3385.
  • Kramarz et al. 2000. Does influenza vaccination exacerbate asthma? Archives of Family Medicine 9:617–623.
  • Lean and Rind. 2008. How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006. Geophysical Research Letters 35:L18701.
  • Lockwood and Frohlich. 2007. Recent oppositely directed trends in solar climate forcings and the global mean surface air temperature. Proceedings of the Royal Society A 463:2447–2460.
  • Lockwood and Frohlich. 2008. Recent oppositely directed trends in solar climate forcings and the global mean surface air temperature. II. Different reconstructions of the total solar irradiance variation and dependence on response time scale. Proceedings of the Royal Society A 464:1367–1385.
  • Madsen et al. 2002. A population-based study of measles, mumps, and rubella vaccination and autism. New England Journal of Medicine 347:1477–1482.
  • Meehl et al. 2004. Combinations of natural and anthropogenic forcings in the twentieth-century climate. Journal of Climate 17:3721–3727.
  • Mitchell et al. 1995. Immunisation and the sudden infant death syndrome. Archives of Disease in Childhood 73:498–501.
  • Nicolia et al. 2013. An overview of the last 10 years of genetically engineered crop safety research. Critical Reviews in Biotechnology 34:77–88.
  • Offit and Hackett. 2003. Addressing parents’ concerns: do vaccines cause allergic or autoimmune diseases? Pediatrics 111:653–659.
  • Patz et al. 2005. Impact of regional climate change on human health. Nature 438:310–317.
  • Schmitz et al. 2011. Vaccination status and health in children and adolescents. Deutsches Ärzteblatt International 108:99–104.
  • Taylor et al. 1999. Autism and measles, mumps, and rubella vaccine: no epidemiological evidence for a causal association. Lancet 353:2026–2029.
  • Taylor et al. 2014. Vaccines are not associated with autism: an evidence-based meta-analysis of case-control and cohort studies. Vaccine 32:3623–3629.
  • Van Eenennaam and Young. 2014. Prevalence and impacts of genetically engineered feedstuffs on livestock populations. Journal of Animal Science 92:4255–4278.
  • Vennemann et al. 2007a. Sudden infant death syndrome: No increased risk after immunisation. Vaccine 25:336–340.
  • Vennemann et al. 2007b. Do immunisations reduce the risk of SIDS? A meta-analysis. Vaccine 26:4875–4879.
  • Wild et al. 2007. Impact of global dimming and brightening on global warming. Geophysical Research Letters

What movie theories teach us about science vs. pseudoscience

Movie theories are lots of fun. I thoroughly enjoy contemplating and debating novel ideas like the notion that all Pixar movies are connected or that the Joker was actually the hero of The Dark Knight, but in addition to being fun, I think that movie theories provide an excellent illustration of the demarcation between science and pseudoscience. Therefore, I am going to use them to try to explain one of the key differences between the two, and by so doing, I will provide you with a vital tool for identifying pseudoscience as well as simultaneously illustrating why science is such a powerful method for understanding our universe. I will conclude this post by applying the lessons from movie theories to one of the prominent pseudosciences of our day: creationism (everything that I am going to talk about also applies to ghost hunters, UFO spotters, Bigfoot believers, and just about every other pseudoscience position you can think of).

The idea for using movie theories as an illustration for pseudoscience occurred to me while reading the viral theory that Jar Jar Binks was actually the ultimate villain of Star Wars episodes I-III. Therefore, I will use it as my model throughout this post. If you don’t feel like reading the entire theory, it simply argues that Jar Jar was actually a powerful force user who was only playing the part of the fool in order to execute his master plan, and he was at the very least collaborating with Palpatine, and possibly even Palpatine’s master. For this post to make sense, you will probably need to have seen the Star Wars movies, but the vast majority of people have so that shouldn’t be a problem. If you haven’t seen them, what on earth is wrong with you? Go watch them right now, it’s more important than reading this blog post.

Note: when I say “movie theory” I am referring to the alternative explanations that are proposed after a film has been released. I am not referring to guesses about what will be in a movie that has yet to come out.

Use of the word “theory”
I want to briefly point out that movie “theories” are not theories in the scientific sense of the word. In science, a theory is an explanatory framework that has been rigorously tested and has been shown to have an extremely high predictive power (I’ll elaborate on that later). Movie “theories” by contrast are just explanations. There is no testing nor do they make predictions. Therefore, although I will continue to call them “movie theories” I want to be explicitly clear that they are not actually theories in the scientific use of the word.

Movie theories explain facts
The core distinction between science and the type of pseudoscience that is displayed in movie theories is the order in which knowledge is acquired and dealt with. Movie theories are inherently retrospective. People make them after a movie has been released and after all of the data are available. In other words, all that they do is explain the existing facts. In contrast, real science uses the existing data to make predictions about future data (more on that later).

At a quick glance, the ability to explain facts may seem like a good quality, but in isolation it is actually extremely problematic because there are often multiple ways to explain the same facts. For example, the Jar Jar theory explains Jar Jar’s jumping abilities by claiming that he is a force user, but when I watch him do a massive somersault, I explain that fact as simply being part of Gungans’ natural athletic abilities. After all, Gungans seem more closely related to amphibians than anything else, and amphibians are known for jumping abilities. Additionally, there are other non-Jedi species (like Wookies) who have extreme physical abilities.

Similarly, the Jar Jar theory proposes that Jar Jar’s seemingly clumsy moves are actually a form of martial art known as Zui Quan (aka Drunken Fist Wushu); whereas, I think that Lucas simply made a bad call and wrote an awkward, annoying character. The Jar Jar theory can support its position by trying to retroactively match Jar Jar’s moves with specific moves from Zui Quan, and I can support my position by citing numerous other movies that have had a bumbling idiot who accidentally saves the day. I can also bring up Lucas’s other recent blunders (dare I mention Kingdom of the Crystal Skull?).

So, which view is actually correct? I don’t know and neither do you, that’s the problem. Both views can retroactively bolster their position and try to make the available data fit their model. Both positions can offer an explanation for the data, but neither position can demonstrate with a high level of confidence that the other explanation is wrong.

Retroactively explaining data in this fashion also has an additional problem. Namely, in movie theories and pseudoscience, evidence and explanations often become intertwined and confused. Let’s think about the possibility of Jar Jar using Zui Quan again for a moment. Is that evidence for the overarching theory that Jar Jar was actually a force user, or is the theory that he was a force user the explanation for his clumsy behavior? There’s no clear answer to that question, and that is a huge problem, because if your evidence is also your explanation, then you are running a massive risk of a circular reasoning fallacy. In other words, the view is self-reinforcing, but for a good theory, you really want external validation rather than internal support.

There is another problem that is closely related to the last point. Whenever you are retroactively applying an explanation, it is always tempting to stitch together seemingly arbitrary or disconnected facts in order to make them fit your view (conspiracy theories are excellent at this). For example, the Jar Jar theory makes a big deal out of the fact that Palpatine and Jar Jar are both from the same planet and therefore (according to the theory) likely knew each other. That explanation, however, seems like quite a stretch given how large a planet is and the fact that Gungans and humans clearly did not get along or interact with each other. In other words, the fact that they are both from the same planet is a rather minor point which gets conflated into a major topic in order to make it fit with/support the overarching theory.

In the process of overemphasizing minor points, movie theories also have a tendency to break Occam’s razor. For example, at one point, the side of the bridge from which Jar Jar is falling switches, and the theory proposes that he force jumped. The more parsimonious explanation, however, is that it was simply an editing mistake. The Star Wars movies are full of editing mistakes. Therefore, it seems odd to latch onto this one mistake and elevate it as evidence of the Jar Jar theory, but that is exactly what happens when you rely on an overarching explanation rather than falsifiable predictions. This way of thinking causes you to view everything as evidence for your position, which is why it is so dangerous.

Finally, because movie theories are inherently retroactive explanations, it is always possible to explain any evidence that anyone else cites. For example, I could argue that this theory doesn’t work because Jar Jar never would have gotten involved were it not for his chance encounter with Qui-Gon Jinn, but someone who subscribes to this theory could then propose that Jar Jar was a powerful enough force user that he could actually see into the future and not only knew that Qui-Gon would hide on a ship, but also knew which ship he would be on. You could even go as far as citing other instances where Jedi saw future events, as well as Obi-Wan’s comment that, “in my experience there is no such thing as luck.” Technically speaking, that explanation would be bordering on an ad hoc fallacy, but movie theories are by their very nature already ad hoc. In other words, because they are made after all of the data have been collected, they are deliberately designed to account for all of the data. This makes them inherently impossible to defeat because it is always possible to continue making more retrospective explanations in order to justify your original theory. To be clear, those explanations may not be logically valid, but it is still always possible to make them, which actually makes it impossible to disprove the theory (i.e., you can demonstrate that the view is logically invalid, but that doesn’t prove that it’s wrong [that would be a fallacy fallacy]).

In short, movie theories are problematic because all that they do is explain existing facts. When you read them, it is always tempting to say, “this theory is great because it explains everything,” but as I’ll elaborate on in a minute, explanatory power can actually be an extraordinary weakness, rather than a strength.

Science predicts future data
Up until now, I have only been talking about movie theories, but all of the problems that I raised exist within actual views about the real world, and these are problems which scientists and philosophers have wrestled with for a long time. This is, in fact, the very issue that Karl Popper dealt with in his seminal work Science as Falsification. While examining the various “scientific” views of his day, he realized that some of them were extremely problematic. For example, two dominant views in the field of psychology were those of Freud and Adler. Both of them were massive explanations, and both of them conflicted with each other. What Popper astutely realized, however, was that there was no real way to tell which view was correct. No matter what patient came into a psychologist’s office, both schools of thought could give a plausible explanation for the patient’s behavior.

This led Popper to conclude that explanatory power was not enough to make something a valid scientific view, and in isolation, explanatory power was actually problematic because a view which was designed to be able to explain everything would be inherently untestable. Therefore, Popper proposed that real science should be falsifiable. In other words, real science should make predictions about future data which, if they don’t come true, will falsify the view. Thus, a scientific theory is judged based on its predictive power. In other words, a good theory should make numerous falsifiable predictions, and all of them should come true. The theory of gravity is a good example of this. It predicts that anytime that you drop an object, it should fall. This is a falsifiable prediction because, if you dropped a pen and it floated in midair, the theory of gravity would be falsified, and we would reject it; however, the fact that objects consistently fall means that gravity’s predictions consistently come true, which means that we can be very confident that it is correct.

Falsifiability stands in stark contrast to the logic of movie theories. They generally do not make predictions because they are only designed to explain facts. Further, although they do sometimes make predictions about sequels, they are generally not falsifiable predictions. This is a crucial point: the prediction has to run the risk of falsifying the view. For example, the Jar Jar theory predicts that Jar Jar will appear in Episode VII (perhaps even as Supreme Leader Snoke), but does that make the view falsifiable? No, it doesn’t, because if Jar Jar isn’t there, it could simply be that Abrams decided to go another direction, or that Jar Jar has died in the intervening years (after all, we don’t know the normal lifespan of a Gungan). In other words, you could easily explain his absence without rejecting the theory. Therefore, the prediction is not falsifiable and does not provide an adequate test of the theory.

Applying falsifiability to creationism
In the final section of this post, I want to apply everything that I have been talking about to one of the most blatant and prevalent forms of pseudoscience: creationism. If you read the creationists’ literature, they actually fully acknowledge that they are explaining rather than predicting, but they don’t see it as a problem, and they incorrectly think that real scientists are doing it as well. They seem to be stuck in a pre-Popper era in which science is judged by explanatory power rather than predictive power. They frequently insist that scientists and creationists have the same evidence, but they are just interpreting it differently. Does that sound familiar? In their flawed view, creationism is one explanation and evolution is another, and both camps “interpret” the data to fit their explanation, but neither one can really be demonstrated to be better than the other (just as it is impossible to actually determine which theory of a movie is correct). The reality is, however, quite a bit different.

Evolution is falsifiable, whereas creationism is not (with possible exceptions concerning the flood). You see, evolution does not simply retroactively “interpret” the data; rather, it predicts the data beforehand. As I have previously explained, evolution predicted that we should find intermediate fossils, and today we have hundreds of them that are exactly like what evolution predicted decades earlier. To be clear, that was a falsifiable prediction. Darwin himself even said that if we never found any intermediates, evolution would be discredited, but we did find them, and that is why evolution is so powerful. A theory which can predict the existence of organisms that we have yet to discover is utterly incredible. Creationism, in contrast, can’t do that. It very clearly predicted that intermediates shouldn’t exist, so, every time that we find them, it simply changes its tune and claims that “those aren’t actually intermediates, God just created them to look exactly like what we would expect intermediates to look like.” Do you see the problem? Creationism isn’t falsifiable because you can always fall back on the “God did it” argument. To be clear, that response isn’t logically valid (it’s an ad hoc fallacy), but it is technically possible. Thus, creationism can’t be falsified.

Further, the predictive power of evolution goes far beyond intermediates. For example, it also predicted that the fossil record would show a clear and orderly progression, it predicted that genetics would agree with the fossil record, and it predicted that biogeographic patterns would match the patterns seen in genetics and the fossil record. All of these are falsifiable predictions, and all of them are predictions that really should only come true if evolution is actually true. If the fossil record turned out to be jumbled, and modern mammals, dinosaurs, and Precambrian invertebrates were all found in the same layers, that would completely shatter evolution. Similarly, if the fossils said that birds and mammals evolved from reptiles, reptiles from amphibians, and amphibians from fish, but the genetics said that birds were most closely related to fish, and mammals were most closely related to amphibians, that would have been devastating to the theory of evolution. The fact that evolution got all of those predictions right, however, allows us to be extraordinarily confident in it. In contrast, creationists are left shrugging their shoulders and saying, “God did it that way.”

Think about the difference between those two for a minute. Evolution made a series of extraordinary and extremely risky predictions in totally different fields, and if any one of those predictions had failed, evolution would have been falsified. In contrast, creationism either made no predictions, or got its predictions wrong, but none of those predictions were falsifiable, so it simply changed its interpretation. Which one of those sounds like a robust and reliable way to understand our universe?

Finally, just like our movie theories, creationism has a long history of latching onto minor points and exalting them as proof of their position. For example, creationists are fond of claiming that dragons were actually dinosaurs and all of the legends of dragons are actually evidence that humans and dinosaurs lived together. Now, to anyone who wasn’t already convinced that creationism was true, that idea sounds laughably ridiculous. Ancient cultures are full of all sorts of legends that we don’t take literally, so why should we do so with dragons? Once again, this is the problem with applying a pre-existing explanation rather than making falsifiable predictions. If you have an explanation already in place, then you will view all of the evidence in a way that supports that explanation, even if that means making some truly imaginative leaps (technically, this argument is a question begging fallacy).

Conclusion
In summary, real science makes testable, falsifiable predictions. It does not simply retroactively apply a pre-existing explanation to the data. Rather, it predicts what the data should be before those data are collected. In contrast, pseudoscience simply “interprets” data to fit its preconceived views, which often results in logical fallacies.


Posted in Nature of Science | Tagged , , , | 7 Comments

12 bad reasons for rejecting scientific studies

A few days ago, I posted what I thought was a fairly innocuous image (right) onto my blog’s Facebook page. I was, however, sadly mistaken. My page was quickly flooded with comments by people who arrogantly insisted that there was nothing wrong with blindly rejecting all of the thousands of studies showing that vaccines are safe. I probably shouldn’t have been surprised by this, but still, I was astounded by the level of hubris and willful ignorance that was being so proudly displayed. What didn’t surprise me, however, were the attempts at justifying such a baffling position. They included all of the usual tropes about conspiracies, scientists being paid off, government corruption, etc. (I have included screenshots of some of the responses to the meme throughout this post). Most of these responses suffered the same fundamental problem. Namely, they assumed that there was something wrong with the studies rather than actually providing evidence that they were flawed. This is a very common mistake. When faced with a study that disagrees with their preconceptions, people often blindly assert that the authors were paid off, the data were manipulated, there’s a conspiracy afoot, etc., but unless they can actually prove that such unethical behavior occurred, that response is logically illegitimate and is no different from simply saying, “it’s wrong because I say it’s wrong” (in technical terms, it’s an ad hoc fallacy). You cannot assume that a study is flawed just because you disagree with it.

To be clear, I am not suggesting that you blindly accept every scientific paper. Unfortunately, not all papers are of a high quality, and bad, biased research does get published. So you should carefully examine all scientific studies, but you cannot blindly reject them just because they discredit your preconceived views. This is especially true for topics like vaccines for which we have thousands of studies that all agree with each other. It’s one thing to say that one paper is biased or one research group was paid off, but it is something else entirely to assert that essentially every paper on a topic is wrong and every researcher is corrupt.

In short, it is important to carefully examine scientific studies rather than blindly accepting them, but the inverse is also true (i.e., you have to carefully examine the study before rejecting it). I have previously written about good criteria for rejecting a paper; therefore, in this post I want to flip things and instead describe 12 bad arguments for rejecting a paper.


Bad reason #1: Galileo/Columbus

When faced with results that they don’t like, many people will invoke Galileo or Columbus and claim that they defied the mainstream view and people thought that they were crazy, but they turned out to be right. I explained this one in detail here, but to be brief, no one thought that Galileo was crazy. He presented facts and careful observations, not conspiracies and conjecture. He did not blindly reject the science of his day; rather, he made meticulous observations and presented data that discredited the common views. That is not in any way, shape, or form the same as arrogantly and ignorantly rejecting a paper just because you disagree with it.

Moving on to Columbus, the debate in his day was about the size of the earth, not its shape, and Columbus was dead wrong. In fact, his stubborn ignorance would probably have killed him and his crews if it hadn’t been for the fortunate “discovery” of the New World. You see, Columbus was the ideological equivalent of a modern day anti-vaccer. He “did his own research” and pompously declared that all of the experts were wrong about the size of the earth, and he thought that the well-accepted calculations were flawed. In reality, the accepted calculations of his day were very close to correct, while Columbus’s numbers were way off.


Bad reason #2: science has been wrong in the past

Moving beyond the specific examples of Galileo and Columbus, other people often make the broad claim that science shouldn’t be trusted because it has been wrong before. This is another one that I have dealt with at length elsewhere, so I’ll be brief here. First, it is true that science has been wrong, but it has always been other scientists who have figured out that it was wrong. Further, it is logically invalid to blindly assume that it is wrong just because it has been wrong before.

Additionally, although there have been plenty of minor hypotheses which have been discredited, there have been very few core ideas that have been rejected in the past century. In other words, ideas which are supported by thousands of studies have rarely been rejected, and very few central ideas have been overthrown in recent decades. The closest example that you can find is probably Einstein’s theory of relativity replacing Newton’s law of gravity, but even in that example, Newton’s work wasn’t really wrong, it was just incomplete. Einstein didn’t completely throw Newton out the window; rather, he simply showed that Newton’s law doesn’t always work and doesn’t give us a complete picture. He built on what Newton had found.

Finally, attacking science by asserting that it has been wrong before is utterly absurd because science is inherently a process of modifying our understanding of the world. In other words, science is self-correcting. This is one of its greatest strengths. To publish in science, you have to pass a rigorous peer-review process, which weeds out a ton of junk science. So, most of what gets published is of high quality. Further, when a bad paper gets published, it quickly comes under scrutiny by the rest of the scientific community, who will then point out errors in it (if they exist) and often try to replicate the results. As a consequence, it can be hard to get away with fraudulent science because if someone else tries to replicate your work, they are going to figure out that something was very wrong with your research (this is exactly what happened with Wakefield’s fraudulent paper suggesting that vaccines cause autism). Thus, science is self-correcting and constantly replaces erroneous ideas as new evidence comes to light (the same can’t be said for anti-science views, which rigidly cling to their positions no matter how much evidence opposes them). Therefore, the fact that science has been wrong is actually a good thing, because if there were no instances where we had discovered that a previous idea was wrong, that would mean that science hadn’t advanced.


Bad reason #3: it’s all about the money

This is probably the most common response to papers on climate change, vaccines, GMOs, etc., and it’s often simply untrue. The scientific community is massive, and there are thousands of independent scientists doing research. Further, all scientific publications require authors to declare any conflicts of interest, so you can actually check and see if a paper was paid for by a major company, and if you did that, you would find that many of the papers supporting GMOs, vaccines, etc. have no conflicts of interest. Anti-scientists, of course, have no interest in actually looking at the paper. They would rather just assume that it was paid off because that fits with their world-view. Further, even if 90% of the papers on a topic like vaccines had been paid off, that would still leave us with hundreds of papers showing that they are safe and effective and essentially no papers saying that they are dangerous (you can find more details on the finances of vaccines, GMOs, and climate change here).


Based on this person’s follow-up comments, everything except for the first sentence was sarcastic (i.e., they think that science does actually blindly reject answers we don’t like, etc.)

Finally, even if a paper does have a conflict of interest, that doesn’t give you carte blanche to ignore it. The fact that someone works for a pharmaceutical company, for example, does not automatically mean that they biased or falsified their data. If a paper has a conflict of interest, then you should certainly give it extra scrutiny, and you should be suspicious if it disagrees with other papers or has questionable statistics, but you cannot automatically assume that it is flawed.


Bad reason #4: there are other results that I disagree with

This is one of my favorites. Someone will say, “I reject the science of X because science also says Y and I disagree with Y.” We can rephrase this as, “I reject science because I reject science.” I would not, for example, accept water fluoridation as evidence that it’s ok to reject the science of vaccines unless I had already rejected the science of fluoridation. In other words, you have to justify your rejection of the science of Y before you can use it as evidence that we shouldn’t trust the science of X. Further, even if you could demonstrate that the science of Y (in this example fluoridation) was wrong, that still would not in any way, shape, or form prove that the science of X (in this example vaccines) is wrong. In fact, this entire line of reasoning is just a special case of the logical fallacy known as guilt by association. If you are going to say that a scientific result is incorrect, you have to provide actual evidence that the specific result that you are talking about is incorrect.


I’m not sure which part of this post grieves me the most: their shoddy logic, their rejection of science, their bizarre capitalization, or the fact that someone “liked” it.


Bad reason #5: gut feelings/parental instincts

I encounter this one frequently, and it irritates me to no end. I will, for example, show someone the scientific evidence for vaccines, and they respond with, “well as a parent only I know what is best for my child.” Similarly, when I show people the evidence for GMOs, they often respond with something like, “well I just have a gut feeling that manipulating genes is bad.” I do not give a flying crap about your instincts or gut feelings. The entire reason that we do science is because instincts and feelings are unreliable. When someone presents you with a carefully conducted, properly controlled study, you absolutely cannot reject it just because you have a gut feeling that it’s wrong. Doing that makes no sense whatsoever. It is the most blatant form of willful ignorance imaginable. Don’t get me wrong, intuition is a good thing, and gut feelings can certainly help you in many situations, but they are not an accurate way to determine scientific facts.

Just to demonstrate the true absurdity of this response, let’s imagine for a minute that you went into the ER, and the doctor there said, “according to scientific studies, I should only give you X amount of morphine, but my gut tells me that I should actually give you five times that amount, so that’s what I’m going to do.” I’m pretty sure that you would immediately demand a different doctor. Similarly, imagine someone saying, “science says that smoking causes cancer, but my gut tells me that it’s fine.” Do you see the problem? Gut feelings simply aren’t reliable. That’s why we do science.


Bad reason #6: I’m entitled to my opinion/belief

This is another very common response, and it is very similar to #5. Science deals with facts, not opinions or beliefs. When multiple scientific studies all agree that X is correct, it is no longer a matter of opinion. If you think that X is incorrect, that’s not your opinion, you’re just wrong. Think about the relationship between smoking and lung cancer again. What if someone said, “well everyone is entitled to their opinion, and my opinion is that it’s safe.” Do you see the problem? Scientists don’t have an opinion or belief that smoking is dangerous; rather, it is a scientific fact that it is dangerous, and if you think that it is safe, you are simply in denial. Similarly, you don’t get to have an “opinion” that the earth is young, or vaccines don’t work, or climate change isn’t true, or GMOs are dangerous, etc. All of those topics have been rigorously tested and the tests have yielded consistent results. It is a fact that we are changing the climate, a fact that vaccines work, a fact that the earth is old, etc. If you reject those, you are expressing willful ignorance, not an opinion or belief.

Bad reason #7: I’ve done my research/an expert agrees with me

I’ll make this one simple: if your “research” disagrees with properly conducted, carefully controlled studies, then your research is wrong (or at the very least, must be rejected pending future data). There, it’s that simple. The only exception would be if your research is actually a large set of properly controlled studies which have directly refuted the study in question (e.g., if you have a meta-analysis vs. a single study, then, all else being equal, go with the meta-analysis). It’s also worth pointing out that having a few people with advanced degrees on your side does not justify your position (that’s a logical fallacy known as an appeal to authority). No matter what crackpot position you believe, you can find someone somewhere with an advanced degree who thinks you’re right.


Bad reason #8: scientific dogma

This response basically states that all scientists are forced to follow the “dogma” of their fields, and anyone who dares to question that dogma is quickly ridiculed and silenced. I’ve written about this before, so I’ll be brief here. In short, that’s simply not how science works. Nothing makes a scientist happier than discovering that something that we thought was true is actually false. In fact, that is how you make a name for yourself in science. No one was ever considered a great scientist for simply agreeing with everything that we already knew. Rather, the great scientists are the ones who have shown that our current understanding is wrong and a different paradigm provides a better understanding of the universe. To be clear, if you are going to defeat a well-established idea, you are going to have to have some very strong evidence. After all, “extraordinary claims require extraordinary evidence,” but if you have that extraordinary evidence, then you absolutely can publish it. If, for example, I actually had powerful evidence that discredited the theory of evolution, not only could I publish it in the journal of my choosing, but I would have just guaranteed myself the Nobel Prize. As a biologist, nothing could possibly be better for my career than discrediting Darwin. So why then aren’t biologists rushing to publish that evidence? Quite simply, because it doesn’t exist. Similarly, you don’t see many publications against anthropogenic climate change, vaccines, etc. because the data for those positions just don’t exist (fun fact: “data” is plural, so “the data don’t” is actually grammatically correct).


Bad reason #9: distrust of governments/media

I often find that people reject science because of a distrust of governments or the media. For example, anti-vaccers often blindly reject all CDC statistics showing the benefits of vaccines (amusingly, they readily accept the reported side-effects; inconsistent reasoning, anyone?). Many people, however, take it even a step further. On numerous occasions, I have shown someone a study which was not in any way affiliated with a government agency, yet they still responded with a lengthy rant about corrupt governments or the media. The basic idea of their argument seems to boil down to, “the government/media agree with these results, therefore they must be false.” This line of reasoning is, however, clearly fallacious (in fact, it is a logical fallacy known as guilt by association). Governments and the media will lie to push their own agendas, I’m certainly not denying that, but that fact does not automatically mean that everything that they say is a lie. For example, the CDC and other government agencies say that smoking is dangerous, does that mean that it’s safe? Obviously not. Similarly, if a news reporter said that you shouldn’t drink lava, would that mean that you should? It’s fine to be skeptical of what you are told by the government/media. In fact, it is a good thing, but when you are presented with scientific evidence, then it’s not a matter of trusting the government/media. Rather, it is a matter of whether or not you accept science. In other words, I don’t need to trust the government or media in order to accept the results of a carefully controlled study.


This comment was a bizarre combination of distrust of the government, doing one’s own research, following a personal belief, and confusingly trying to equate science with government decisions in order to assert that mistakes have been made in the past (or at least that’s my understanding of this ramble).


Bad reason #10: it’s a conspiracy

This one is very closely related to #8 and 9, but it takes things a step further. It proposes that there is a massive conspiracy and scientists are being paid by governments/big companies to falsify results. Just take a quick look at the anti-vaccine movement or the anti-GMO movement, and you will quickly find that pro-vaccine/pro-GMO scientists are vilified and receive constant accusations of being “shills.” Similarly, there are many people who think that all climate scientists have been bought off by governments. I’ve explained the problems with this line of reasoning in more detail here and here, so I’ll just talk about the biggest problem. Namely, the scope of this conspiracy would be impossibly huge. The scientific community consists of millions of people from all over the world working out of thousands of universities, institutes, non-profits, corporations, agencies, etc. It includes people from countless religions, cultures, political ideologies, etc. There is no way that you could possibly get that many people to agree on a massive deception like this. Just think about what is being proposed here. Do you honestly think that nearly all of the world’s climate scientists have been bought off? We are talking about thousands of people from all over the world. Similarly, there are numerous corporations, universities, non-profits, etc. involved in the research and production of vaccines and GMOs. Do you honestly think that all of those different organizations (many of whom compete with each other and have different goals and purposes) have all managed to come together to make one unified conspiracy? That’s just nuts. The same problems exist for governments. Topics like vaccines, GMOs, and the dangers of climate change are agreed upon by numerous governments and scientific organizations from all over the world. Honestly ask yourself the following question: which is more plausible, that countless governments, companies, non-profits, etc. have all come together to create the world’s largest conspiracy and buy off virtually every scientist on the planet, or that the thousands of independent scientists who have devoted their lives to science are actually doing real research?


Bad reason #11: anecdotes

This list certainly wouldn’t be complete without talking about personal anecdotes. I can’t even begin to tell you how many times I have presented someone with scientific studies showing that vaccines are safe only to have them respond with, “but I know someone who developed autism after getting a vaccine” or “what about this case where someone became sick after a vaccination.” Anecdotes do not matter in science, because anecdotes don’t allow us to establish causation. Let me give an example. Suppose that someone takes treatment X and has a heart attack 5 minutes later. Can we conclude from that anecdote that treatment X causes heart attacks? NO! It is entirely possible that the heart attack was totally unrelated to the treatment and they just happened to coincide with one another. Indeed, I once heard a doctor describe a time when he was preparing to vaccinate a child, and while preparing the vaccine, the child began having a seizure (to be clear, he hadn’t vaccinated the child yet). He realized that if he had given the vaccine just 60 seconds earlier, it would have looked for all the world like the vaccine had caused the seizure when in fact the kid just happened to have a seizure at the same time that a vaccine was being administered.

From those two examples, it should be clear that anecdotes are worthless because they cannot establish causal relationships (in technical terms, using them to establish causation is a logical fallacy known as post hoc ergo propter hoc [i.e., A happened before B, therefore A caused B]). Properly controlled studies, however, do allow us to establish causation. If we took a large group of individuals of the same age, ethnicity, medical history, etc., divided them randomly into two groups, and gave half of them treatment X and half of them a placebo, then and only then would we be able to look for causal relationships. In other words, if the treatment group has heart attacks significantly more frequently than the control group, then we could conclude that treatment X most likely causes heart attacks (science never proves anything with 100% certainty). Nothing else will let us make that claim. Even if you collected a whole series of anecdotes in which people had heart attacks following treatment X, it wouldn’t matter because there wouldn’t be any controls. In other words, I could respond to your anecdotes with anecdotes of people who received treatment X and didn’t have heart attacks as well as anecdotes of people who didn’t have treatment X, but still had heart attacks. Properly controlled studies are the only way to establish that one thing causes another. That goes for side effects of vaccines, alternative “medicines,” fad diets, etc.
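To make that logic concrete, here is a minimal, purely illustrative simulation (all numbers are made up for the sake of the example): the “treatment” has no real effect at all, yet anecdotes of heart attacks following treatment still pile up. Only the comparison against a control group reveals that nothing is going on.

```python
import random

random.seed(42)

# Hypothetical, illustrative numbers: a 2% baseline heart-attack rate
# that is IDENTICAL in both groups, i.e. the treatment does nothing.
BASELINE_RATE = 0.02
N_PER_GROUP = 10_000

treatment = [random.random() < BASELINE_RATE for _ in range(N_PER_GROUP)]
placebo = [random.random() < BASELINE_RATE for _ in range(N_PER_GROUP)]

# Anecdotes exist either way: some people in the treatment group
# really did have heart attacks after receiving the treatment...
print("heart attacks after treatment:", sum(treatment))
# ...but so did people who never received it, which is exactly why
# a control group is essential.
print("heart attacks in placebo group:", sum(placebo))

# Only the between-group comparison speaks to causation; here the
# rates come out essentially identical.
rate_t = sum(treatment) / N_PER_GROUP
rate_p = sum(placebo) / N_PER_GROUP
print(f"treatment rate: {rate_t:.3f}, placebo rate: {rate_p:.3f}")
```

Someone armed only with anecdotes would have roughly 200 “vaccine injury”-style stories to tell, even though the simulated treatment is completely inert.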


Bad reason #12: a scientific study found that most scientific studies are wrong

This argument is fascinatingly ironic because it uses a scientific paper to say that we shouldn’t trust scientific papers, but let’s look closer because this argument actually has some merit. The paper being referenced is “Why most published research findings are false” by John Ioannidis, and it is actually a very useful and informative work, but it often gets misused. The paper describes several reasons why published papers are often wrong, and I will go over just a few highlights. First, we have the problem of small sample sizes. As I have previously explained, small sample sizes are unreliable, and you really need a large sample size to be confident in your results, yet many studies get published with small sample sizes, and you should be hesitant to place a lot of confidence in a result that didn’t come from a good sample size.
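The unreliability of small samples is easy to demonstrate with a quick simulation (again, the numbers here are purely illustrative): run a hypothetical side-effect study many times over with a small sample and with a large one, and watch how much more the small-sample estimates scatter around the true rate.

```python
import random
import statistics

random.seed(7)

TRUE_RATE = 0.10  # hypothetical true rate of some side effect

def estimate(sample_size: int) -> float:
    """Estimate the side-effect rate from one simulated study."""
    return sum(random.random() < TRUE_RATE for _ in range(sample_size)) / sample_size

small = [estimate(20) for _ in range(1000)]    # 1000 tiny studies (n = 20)
large = [estimate(2000) for _ in range(1000)]  # 1000 big studies (n = 2000)

# The small studies scatter roughly ten times more widely around the
# true value of 0.10 than the large ones do.
print("spread (st. dev.) with n = 20:  ", round(statistics.stdev(small), 3))
print("spread (st. dev.) with n = 2000:", round(statistics.stdev(large), 3))
```

Any single tiny study can easily report double the true rate, or zero, purely by chance, which is why a lone small-sample result deserves so little confidence.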

Second, we have publication bias. This can be a bias because of funding sources or preconceived ideas, but often it is a bias that is inherent in the publication system. In science, it is (unfortunately) often hard to publish a “negative” result. For example, if you do a drug trial and you find that it doesn’t work, you may have trouble publishing that result, whereas if you got a “positive” result (i.e., it does work) you could easily publish it. The problem is that statistical significance relies on probabilities, and some papers will get a false positive just by chance (this is called a type I error, and I explained it in more detail here). So, when journals only publish positive results, you end up with a lot of false positives which aren’t balanced out by the negatives, because the negatives don’t get published. In other words, the type I error rate among published papers is much higher than the rate among all studies, because negative studies often don’t get published.
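Here is a rough sketch of how that filter distorts the literature (hypothetical numbers, assuming a drug with zero real effect and journals that only accept estimates crossing a conventional significance threshold):

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.0   # the drug actually does nothing
N_STUDIES = 2000
NOISE_SD = 1.0      # sampling noise in each study's estimated effect

# Each study's estimated effect is the true effect plus random noise.
estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STUDIES)]

# Suppose journals only publish estimates that clear a "significance"
# threshold of roughly two standard errors.
published = [e for e in estimates if e > 1.96 * NOISE_SD]

print(f"mean effect across ALL studies:       {statistics.mean(estimates):+.2f}")
print(f"mean effect across PUBLISHED studies: {statistics.mean(published):+.2f}")
print(f"fraction of studies published: {len(published) / N_STUDIES:.1%}")
```

The full set of studies correctly averages out to no effect, but the published slice shows a large average “effect” even though the true effect is exactly zero. That is Ioannidis’s point about single, unreplicated findings.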

Now, all of that may sound very bleak, but it should not make you lose all confidence in the scientific process because of a very important component of scientific inquiry: replication. Ioannidis’s work applies mostly to single, unreplicated studies. In other words, when only one study has ever looked at drug X, there is a high chance that the results are actually wrong, but when multiple studies have tested drug X and all found that it works, then you can be fairly certain that it is actually effective. So, the arguments set forth by Ioannidis don’t apply to topics like vaccines, GMOs, and other areas of “settled science,” because they have been examined by thousands of studies. When numerous studies all agree, then you can have very high confidence in the results (this is why meta-analyses and systematic reviews are so useful). So, this paper shouldn’t make you question the safety of vaccines, the effects we are having on the climate, etc. It should, however, make you skeptical of the one or two anti-vaccine papers that you occasionally see, or the one paper supporting some “miracle cure,” or the occasional paper on homeopathy, acupuncture, etc. Those studies almost always have tiny sample sizes, and countless other studies have failed to replicate their results. This is why it is so important to look at the entire body of literature, not just a single study.
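As a back-of-the-envelope illustration of why replication matters so much (this assumes the studies are fully independent and share the conventional 5% false-positive rate, and it ignores systematic biases, so treat it only as a sketch):

```python
# If a single study has a 5% chance of producing a false positive,
# the odds that several INDEPENDENT studies all hit a false positive
# on the same question shrink multiplicatively.
ALPHA = 0.05  # illustrative per-study false-positive rate

for k in (1, 2, 5, 10):
    print(f"{k:2d} independent studies all wrong by chance: {ALPHA ** k:.1e}")
```

One study wrong by chance: 1 in 20. Ten independent studies all wrong by chance: vanishingly unlikely. Real literatures are messier than this (studies share methods and biases), but the basic intuition is why a consensus built from thousands of studies is so much stronger than any single paper.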


Conclusion

In summary, properly conducted, carefully controlled studies are the only way to reliably understand our universe, and you cannot reject them without good justification. Look around you. All of the modern marvels that you see today were brought to you courtesy of science. Further, if I asked you, “How many of your siblings died of a terrible childhood disease?” I’m guessing that the answer would be “none.” If I had asked that question a few decades ago, however, most of you would have lost at least one sibling to diseases which are now almost unheard of. Even if you want to erroneously attribute the decline of those diseases to increased sanitation rather than vaccines and modern medicine, it is still science which is responsible for our increased hygiene and access to clean water. So no matter how you cut it, many of you wouldn’t be alive today if it wasn’t for science. Science clearly works, and you need an extremely strong justification for rejecting scientific results.

To be fair, some scientists are corrupt and bad science does occasionally get published, but bad research tends to be identified and discredited by other researchers. In other words, there may be a high probability of a single paper being wrong, but when lots of different studies have all arrived at the same conclusion, you can be very confident in that conclusion. Perhaps most importantly, you cannot simply assume that a paper is bad just because you disagree with its results. You need to present actual evidence that it is flawed or biased before you can reject it.

Posted in Nature of Science | Tagged , , , , , , | 20 Comments

Stop accusing me of ad hominem fallacies you stupid idiots

Ad hominem fallacies are among the most common logical fallacies, but they are also among the most misunderstood. Indeed, I often see people falsely accusing their opponent of committing an ad hominem fallacy. Therefore, I am going to explain how this fallacy actually works and give you some basic tools to identify it. There are two fundamental points that you need to understand, and I will elaborate on them throughout this post. First, in order for an argument to be ad hominem, it has to actually attack its opponent. Second, not all ad hominem attacks are ad hominem fallacies.

Defining “ad hominem”

“Ad hominem” is Latin for “to the man” or “to the person,” and it simply occurs anytime that you attack your opponent. This is very, very important. In order for an argument to be ad hominem it has to verbally assault its opponent. So, attacks against a person’s intelligence, race, gender, appearance, morality, etc. all count as ad hominem, but attacks against a person’s argument are not ad hominem (even if they are written in a hostile style). This seems very simple, and indeed it is, but somehow, people continuously mess this up and take criticisms of their arguments personally. For example, in a recent debate with someone who opposed GMOs, I asked them to show me their sources, and they responded with a link to an opinion piece written for an online news outlet. When I explained to them that their link was not a valid source of scientific information and was irrelevant to the debate, they responded by accusing me of committing an ad hominem fallacy.

Indeed, this is a frequent occurrence. I try really hard to avoid ad hominem attacks on this blog, and you will rarely see me call someone an “idiot,” “moron,” etc.  Nevertheless, I constantly get accused of ad hominem fallacies, and I see this occurring on other pro-science pages as well. So I want to be explicitly clear about this: pointing out a problem in an opponent’s argument, asking for their sources, criticizing their sources, etc. does not count as ad hominem. You cannot accuse someone of an ad hominem fallacy just because they disagreed with you. Unless they actually attacked you or the authors of your sources, they did not make an ad hominem argument.

While we are going down this road, it’s worth mentioning that the same rules apply to “bullying,” “being rude,” etc. I cannot even begin to tell you how many times I have been accused of “bullying” just because I repeatedly asked someone for their sources and told them that I didn’t care about their opinions. Asking someone to back up their claims with facts and logic is neither rude nor offensive. So please stop being so thin-skinned. Further, you do not have the right to write/say anything without being subject to criticism or ridicule. Yes, you are entitled to an opinion, and yes, we should all be tolerant of other people’s beliefs, but when you are making a factual claim, you are expressing neither an opinion nor a belief, and you should be held accountable for the accuracy of that claim. Imagine, for example, how ridiculous it would be if someone who didn’t accept gravity became upset when people ridiculed their absurd views. Similarly, when you make factually incorrect statements about vaccines, evolution, climate change, etc. you aren’t expressing an opinion, you’re just wrong, and no one should be tolerant of your nonsense.

Not all ad hominem attacks are fallacies

Having now established what it means for something to be ad hominem, it is important to discuss what makes an ad hominem attack a fallacy. There are three basic uses of ad hominem assaults, only one of which is fallacious. The first is simply name calling for the sake of name calling, and the title to this post was intended to be a sarcastic example of this. It is ad hominem, and it’s certainly in bad taste (at least it would be if it wasn’t sarcasm), but it’s not actually a fallacy because it’s not being used as an argument. In other words, simply insulting someone isn’t enough to make something a fallacy. In order to be an ad hominem fallacy, you have to use an attack on a person as a means of attacking their argument. In the case of my sarcastic title, I got your attention by using hostile language, then proceeded to actually explain the logic of how these fallacies work; therefore, I did not commit an ad hominem fallacy (i.e., my insult was just an insult, not an argument).

The second use of ad hominem arguments is the really problematic one. It occurs when you are presented with an argument, and you respond by criticizing the person making the argument rather than the argument itself. For example, if someone is presented with an argument and simply responds with, “you’d have to be an utter moron to believe that,” then an ad hominem fallacy has been committed, because they simply attacked the people who accept the argument without ever addressing the argument itself. If, however, they said, “you’d have to be an utter moron to believe that because it commits the following logical fallacies (insert names of fallacies) and has been discredited by the following studies (insert citations)” then they would not have committed a fallacy. In other words, their comment is ad hominem, and it’s in bad taste, but it’s not a fallacy because they did not use the insult as their argument. Rather, they made a logical argument and explained the problems with their opponent’s view, and then they slapped an insult on there for no good reason. To be a fallacy, the insult has to actually be part of your argument.

Amusingly, anti-scientists are generally the ones who I see committing ad hominem fallacies, even though they are also generally the ones who I see falsely accusing others of committing them. For example, the classic “shill gambit” is nearly always an ad hominem fallacy. I get accused of being a shill for Big Pharma or Big Ag almost daily, when in reality, I receive absolutely no money from them because I support science, not big industry (entertainingly, no one seems bothered by the fact that I strongly oppose “Big Oil”). Nevertheless, people constantly respond to my posts with comments like, “what a shill” or “how much did Big Pharma pay you to write this?” These responses are ad hominem fallacies, because the people making them generally don’t follow up with logical criticisms of my arguments. Rather, they simply accuse me of being a shill, then march off to declare victory to their fellow anti-scientists. In other words, their entire argument can be rephrased as, “you are wrong because you are a shill.” However, unless they can actually provide evidence that I am being paid off (which they can’t, since I’m not), this “argument” is fallacious because it attacks me, not my arguments.

This brings me to my final point and the third usage of ad hominem. There are situations in which you can attack the person instead of their argument without it being a fallacy. For example, let’s imagine a courtroom scenario where a key witness has identified the murderer, and the defense responds by providing evidence that the witness is a pathological liar. The defense’s argument is ad hominem because the attack is against the person, not the person’s argument, but the attack is not fallacious because there is a serious question about this witness’s credibility. If the witness is truly a pathological liar, then they should not be trusted, and their testimony should be viewed as irrelevant. To be clear, the defense has to actually provide compelling evidence that the witness is a pathological liar in order for this argument to be valid. If they cannot back up that claim, then this argument is both an ad hominem fallacy and an ad hoc fallacy (as is the shill gambit).

Similarly, arguing that people like Vani Hari (aka the “Food Babe”), Sherri Tenpenny, Mercola, etc. shouldn’t be trusted because of the truly ludicrous claims that they have made is ad hominem, but it’s not fallacious because it raises serious and completely valid doubts about their credibility. For example, Vani Hari once argued that water crystallizes when you repeatedly say the words “Satan” or “Hitler” around it, and she was concerned that airplanes don’t contain 100% oxygen (the air you breathe is mostly nitrogen, btw). By making these claims, she has demonstrated a terrifying level of scientific ignorance and illiteracy, and she has made it completely clear that she doesn’t know what she is talking about. So, when someone says that you should not trust her because she has frequently been exceedingly wrong, they are making an ad hominem assault, but not an ad hominem fallacy, because her credibility truly is in question. To be clear, however, you do have to be careful when making this type of argument. The fact that her arguments have repeatedly been comically erroneous means that she shouldn’t be trusted, but it does not automatically mean that she is wrong. In other words, if you say, “she is wrong about X because she has repeatedly been wrong in the past,” then you have constructed a logically invalid argument because it is always possible (however unlikely) that she will eventually be right about something. You can, however, say, “she should not be trusted about X and cannot be used as a source because she has repeatedly been wrong in the past,” and there is nothing fallacious about that.

Conclusion

In summary, an argument is only ad hominem when it actually attacks someone. Simply explaining the problems with an argument or asking someone to provide sources does not count as ad hominem, even if the explanation is given in hostile language. Further, even when an argument is ad hominem, it is only a fallacy if it is attacking the person instead of the person’s argument and if it is not merely pointing out a legitimate, relevant concern about someone’s credibility, morality, etc. So, to test whether or not an argument is an ad hominem fallacy, simply follow the guide to the right.

Note: originally, this article did not contain the flowchart and instead asked the following three questions:

  1. Is the argument attacking someone?
  2. If #1 is “yes,” is the attack being used as the argument?
  3. If #2 is “yes,” does the attack raise a legitimate, relevant concern about the person’s credibility, morality, etc.?

However, that left out fallacies that are committed by using attacks as proofs, and I think that the actual diagram is easier to follow.
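For readers who think in code, those three questions amount to a simple decision procedure, and they can be sketched as a small function. This is purely an illustration of the questions above, not the fuller flowchart; the function name and its boolean inputs are my own invention.

```python
def classify_argument(attacks_person: bool,
                      attack_is_the_argument: bool,
                      raises_valid_concern: bool) -> str:
    """Classify an argument using the three ad hominem questions.

    attacks_person: does it attack a person rather than an argument?
    attack_is_the_argument: is the attack itself being used as the argument?
    raises_valid_concern: does it raise a legitimate, relevant concern
        about the person's credibility, morality, etc.?
    """
    if not attacks_person:
        # Criticizing an argument or its sources is never ad hominem.
        return "not ad hominem"
    if not attack_is_the_argument:
        # A bare insult is in bad taste, but it isn't an argument.
        return "ad hominem, but not a fallacy"
    if raises_valid_concern:
        # e.g., the pathological-liar witness: a relevant credibility attack.
        return "ad hominem, but not a fallacy"
    # e.g., the unevidenced shill gambit.
    return "ad hominem fallacy"
```

For example, `classify_argument(True, True, False)` (the shill gambit with no evidence behind it) returns `"ad hominem fallacy"`, while `classify_argument(False, True, True)` returns `"not ad hominem"`.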

Other posts on logical fallacies

Posted in Rules of Logic | 8 Comments

The real Frankenfoods

Anti-GMO activists are excellent at stirring up emotions and creating fear. They are better at frightening gullible people than just about any group that I can think of (though anti-vaccers give them a run for their money). Their posts are full of images of grotesque mutations, giant needles sticking out of vegetables, and bizarre genetically hybridized organisms. The real question, however, is whether or not those fears are justified (spoiler alert: they aren’t). You see, it’s fine to present a fact that also evokes an emotion, but when you are exaggerating or ignoring the truth in order to scare someone, then you are committing a logical fallacy known as an appeal to emotion, and that is exactly what anti-GMO activists are doing.

The term “Frankenfood” is perhaps the greatest embodiment of this fallacious line of emotional manipulation, and honestly, it’s brilliant propaganda. It is simultaneously evocative and memorable. It’s a buzzword that instantly conjures images of mad scientists and dangerous, unethical experiments. It also deliberately misrepresents GMOs. The reality is that GMOs are extremely precise, tightly regulated, and carefully manufactured. As I will demonstrate, there is nothing about them that is deserving of the title “Frankenfood.” However, to understand GMOs, we first need some context. In other words, we need to understand how our traditional foods are made before we can really understand the significance of how GMOs are made. Therefore, I am going to start by explaining how our traditional crops are manufactured, then I will briefly explain how GMOs are produced. All that I ask of you as you read through this is that you set aside any biases that you might have and briefly ignore all of the anti-GMO fear-mongering.

Note: I am including organic under the umbrella of “traditional crops.”

This is what a wild banana looks like. Clearly, we have in fact improved on nature.

Artificial selection
GMOs are often presented as being unnatural and therefore dangerous, whereas traditional crops (especially organics) are presented as natural and good. Beyond the blatant appeal to nature fallacy, this dichotomy is demonstrably false. As I have previously argued, essentially none of our food is “natural,” and all of it has been genetically modified by thousands of years of careful breeding. You know that nice, delicious, essentially seedless banana that you get from your grocery store? It doesn’t exist in nature. Similarly, those large, juicy strawberries from your farmers market don’t grow in the wild. In fact, virtually none of our crops can be found in nature. So how did we get them?

We got our crops from artificial selection. At some point in history, someone started cultivating things like bananas, and each generation, they selected the plants with the biggest, most delicious fruit, and bred them. Thus, over thousands of generations we went from wild bananas to the far more edible fruit that we have in stores today.

There are several important points here. First, this is a type of genetic manipulation. Anytime that two individuals mate, novel combinations of genetic material are being formed. You see, for organisms like most plants and animals, each individual has two copies of each  gene (one from mom and one from dad). We refer to these copies as alleles, and within a population, there are usually many different alleles for a given trait (for an explanation of the difference between alleles and genes go here). Further, most traits are polygenic, meaning that they are influenced by multiple genes, each of which has multiple alleles. This provides extraordinary potential for genetic variation in the offspring. Indeed, any time that two individuals mate, 50% of the offspring’s genes will be from mom, and 50% will be from dad, resulting in novel combinations of genetic material.

Artificial selection is, however, a means of eliminating that variation. For example, when we cross two individuals who both produced large fruit, we expect them to pass on the alleles for large fruit, which should result in offspring with large fruit. Thus, we are deliberately modifying the genetic makeup of the next generation by determining which alleles it is going to get, and when we do this over thousands of generations, we end up with varieties which simply do not exist in nature. In other words, we create distinct and novel genomes.

Now, here is the really important part: this process is extremely messy and unpredictable. Remember, each individual is a combination of 50% of the genetic material from two different individuals, and the results are often unexpected. We might be interested only in the alleles that make big fruit, but artificial selection doesn’t let us exchange only those alleles. Rather, alleles for all of the different genes get exchanged as well. So although we can predict that two individuals with large fruit will produce offspring with large fruit, we cannot predict what the consequences will be of making new combinations of alleles for all of the other thousands of genes. Anti-GMO activists are technically correct that anytime that we make a new combination of alleles we can produce novel and unexpected proteins, allergens, “toxins,” etc., but they ignore the fact that this is also possible with traditional breeding methods. In fact, it is far more likely with them because the entire genome is getting modified, rather than just the trait that we are interested in.

Domestic dogs provide a good illustration of this. We select them for traits like head shape, coat color, size, etc., but selecting for those traits often has unintended consequences for other traits. For example, bulldogs often have to be artificially inseminated in order to breed, pugs are prone to heart problems, German shepherds often have hip dysplasia, etc. All of these are a result of the unrefined and unpredictable nature of artificial selection. Selecting for one allele can inadvertently affect another trait or cause some harmful combination of alleles to rise to prominence. It is a very crude and imprecise process.
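For those who like to tinker, the genome-wide shuffling described above can be illustrated with a toy simulation. Everything here is hypothetical (the gene count, the per-gene coin flip); the point is only that a single cross rearranges far more than the one trait being selected.

```python
import random

random.seed(0)  # fixed seed so the toy example is reproducible

GENOME_SIZE = 1000  # hypothetical gene count, not a real figure


def cross(mom, dad):
    """Toy model of a cross: each offspring allele is drawn at random
    from one of the two parents."""
    return [random.choice(pair) for pair in zip(mom, dad)]


# Two parents whose alleles we can tell apart at every single gene.
mom = ["m"] * GENOME_SIZE
dad = ["d"] * GENOME_SIZE

child = cross(mom, dad)
from_mom = sum(allele == "m" for allele in child)

# A breeder may only be watching the one fruit-size gene, but all
# 1,000 genes were reshuffled in the same cross, with no control
# over which combinations were produced.
print(f"{from_mom} of {GENOME_SIZE} alleles in this offspring came from mom")
```

Run it a few times with different seeds and you get a different genome-wide mixture every time, which is exactly the unpredictability being described.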

Hybrids
Just in case you were unimpressed with artificial selection, let’s talk about hybrids. All of the same concepts and problems apply, but with one important difference: in typical artificial selection you are combining 50% of the genetic material from two individuals of the same species. They may be from different populations and different parts of the world, but they are still the same species. Hybrids, however, are crosses between two different species. Think about this for a minute. We are making entirely new organisms that contain half the genes from one species and half the genes from another species, yet no one refers to plumcots, tangelos, pluots, etc. as “Frankenfoods.”

Mutation breeding
The fact that anti-GMO activists don’t attack mutation breeding is one of the most astounding inconsistencies imaginable. What if I told you that many of the crops that you eat were created by exposing them to radiation or chemicals in order to induce mutations? Would you be surprised? I was. I knew that it occurred, but I didn’t realize its true extent. The reality is that thousands of different varieties of crops have been produced this way, and the production process is a scene straight from a comic book. We literally expose plants to gamma radiation in order to induce mutations (that’s the same stuff that produced the Hulk), then we select and breed the ones with beneficial mutations. Mutations are, however, totally random. So a crop may have one beneficial mutation, but several harmful mutations. There is simply no way for us to know what we are going to get out of this process. We just induce mutations and hope for the best. To be clear, breeders try to cross the mutated strains such that only the beneficial mutations survive, but ensuring that nothing detrimental anywhere in the genome gets passed on is nearly impossible. It would be entirely possible, for example, for a mutated plant to be selected because it had a mutation for larger fruit while, unbeknownst to us, it also had a mutation that would cause a severe allergic reaction in many people. Nevertheless, no one seems to freak out about this, and even organic companies will often sell crops that were produced by mutation breeding.

GMOs
Before I explain how GMOs actually work, I want to briefly recap. Traditional breeding practices include: altering a breed’s entire genome by selecting for a particular trait (often with unintended consequences on the rest of the genome), creating entirely new organisms that are 50% one species and 50% another species, and using radiation and chemicals to induce completely random, uncontrolled, and unpredictable mutations. In contrast, GMOs are made by carefully and precisely modifying or inserting a handful of genes. Think about that for a minute. Let it really sink in. Anti-GMO activists freak out over “Frankenfoods” and the potential of unintended allergens and toxins from modifying or inserting one or two genes, yet traditional crops are made by modifying the entire genome! How can anyone possibly think that deliberately and precisely changing a very specific set of genes is dangerous, but randomly and unpredictably mutating the genome is just fine? Why should we call something a “Frankenfood” for having one or two genes from another species when hybrids are universally acceptable even though half of their genes came from a different species?
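As a rough way to visualize that recap, here is a back-of-the-envelope comparison in code. Every number is a made-up illustration (real crop genomes contain tens of thousands of genes, and mutation counts vary widely); the point is only the relative scale of change under each method.

```python
GENOME_SIZE = 30_000  # hypothetical gene count for a crop

# Rough, illustrative scale of genetic change per breeding method:
genes_affected = {
    "artificial selection": GENOME_SIZE // 2,  # a fresh genome-wide shuffle each cross
    "hybridization": GENOME_SIZE // 2,         # half the genome from another species
    "mutation breeding": 20,                   # random, unscreened hits (made-up count)
    "genetic engineering": 2,                  # one or two chosen, characterized genes
}

for method, n in sorted(genes_affected.items(), key=lambda kv: kv[1]):
    print(f"{method:>21}: ~{n:,} genes affected")
```

Whatever numbers you plug in, genetic engineering sits at the bottom of the list: it is the only method where the affected genes are chosen in advance rather than shuffled or mutated at random.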

I want to be very clear here: I’m not saying that traditional breeding methods are dangerous. Rather, I am saying that all of the potential problems with GMOs are also potential problems with traditional crops, but GMOs involve fewer genetic changes, and the changes are very carefully controlled. Yes, there is always the potential of unintended consequences when you modify a genome, but the odds of an unintended effect are much lower when you deliberately and carefully change one very specific part than they are when you change massive sections with essentially no control over what alleles are being swapped or modified. Indeed, that is exactly what a study comparing GMO rice with mutation breeding rice found (i.e., there were more unintended effects in the mutation breeding rice; Batista et al. 2008). That result really should make intuitive sense. Genetic engineering is done very precisely to modify a specific gene in a specific way, but mutations modify random genes in completely random ways! It makes absolutely no sense to oppose GMOs but readily consume mutation crops, hybrids, etc.

Conclusion
Essentially all of our foods were created by manipulating an organism’s DNA, but genetic engineering is unique among our cultivation methods in that it is extremely precise and only changes small, carefully selected parts of the genome. Other breeding methods result in extremely large and unpredictable exchanges of DNA or even the random creation of entirely new and unpredictable genetic traits. Therefore, there is no reason to think that GMOs themselves are going to be more dangerous than traditional crops or that they will have more unintended consequences. So if you want to describe something as a “Frankenfood” you should be talking about mutation crops or hybrids, not GMOs.

Note: for more technical info on the similarities and differences between GMOs and traditional crops, as well as unintended consequences from the various methods, I recommend Cellini et al. 2004.

Note to commenters: this post is about GMOs themselves. In keeping with the Comment Rules, please stay on topic. Issues such as pesticide use, Monsanto’s business practices, and GMO labeling will be dealt with in future posts. If you want to discuss those topics, please wait for those posts.

 

Posted in GMO | 4 Comments