Don’t tell people to “Google it.” That’s your job, not theirs

I spend a lot of time debating people on the internet, and, unsurprisingly, I frequently encounter people who make outlandish claims without providing any evidence to support those claims. In such situations, I typically ask them to present their sources, at which point they usually respond indignantly with something along the lines of, “just Google it,” “look it up yourself,” “do your own research,” or “so you expect me to do your work for you? I don’t think so.” However, as I will explain in this post, these responses are fundamentally flawed because they fallaciously shift the burden of proof and ignore the rules of logic.

The first major problem is simply that these responses misuse the burden of proof. The person making the claim always bears the burden and is responsible for providing supporting evidence, especially when their claim goes against the generally accepted wisdom. In other words, if I say, “aliens caused 9/11” then the burden is entirely on me to provide reliable sources to back up my assertion. You, in contrast, would be under absolutely no obligation to refute my claim or even take it seriously until I provided that evidence. So I could not tell you to “just Google it” because that would be placing the responsibility on you to debunk my claim, when in reality, the responsibility is on me to support my claim. To put it simply, you do not have to take a claim seriously until your opponent provides reputable sources to back it up. It is 100% their responsibility. So when people say things like, “so you expect me to do your work for you?” they have things totally backwards.

A useful way to think about this is to apply it to courtroom scenarios. The prosecution is the one making the claim or assertion (i.e., “person X is guilty”), so it is their responsibility to provide the evidence. The defense only has a responsibility after the evidence has been presented. In other words, the defendant does not need to prove their position (i.e., they don’t need to prove that they are innocent), rather they simply need to show that the prosecution’s evidence does not prove guilt. So the defense only bears a burden to defeat the prosecution’s evidence, rather than bearing a burden to provide evidence supporting their innocence. To be clear, there is certainly nothing to stop them from providing that evidence, but they aren’t required to do so. In other words, the prosecution must provide evidence of guilt, whereas the defense does not have to provide evidence of innocence.

The exact same rules apply in logical debates. Any time that you make a claim like, “X causes Y,” “Z is dangerous,” “studies have found X,” etc., you have just placed yourself into the role of the prosecution. At that point, it is your responsibility to provide evidence for your claim, whereas your opponent does not have to provide evidence refuting your claim. They are certainly welcome to do so, but they are under no obligation to prove you wrong until you provide evidence.

When you think about this in the context of a debate about science, the reason that the burden of proof works this way should become obvious. Imagine, for example, that I said, “studies have found X,” but I refused to actually provide you with such studies. How would you prove that I was wrong? Quite frankly, you couldn’t. You could show me that you failed to find such papers in any major database, but that wouldn’t prove that the studies don’t exist because they could simply have been published in a minor journal that isn’t well archived. See the problem? Proving a negative claim (e.g., “papers that found X do not exist”) is nearly impossible. It’s like trying to prove that Bigfoot doesn’t exist. It can’t be done because no matter how many trail cameras we put out, it is always technically possible that Bigfoot does exist and has simply managed to elude all of the cameras. That is why the burden is always on the person making the claim. I would be responsible for providing you with the papers that found X, just as Bigfoot hunters, UFO spotters, etc. are all responsible for providing evidence for their position. In other words, if one person says, “prove to me that Nessie, yetis, etc. exist” and the other says, “prove to me that they don’t,” the second person is being irrational because they are trying to shift the burden of proof. They are required to provide evidence for their position, not the other way around (note: you may notice that these arguments also contain an argument from ignorance fallacy; that is because shifting the burden of proof is simply a special case of that fallacy).

Adam Savage on the minefield of internet information, via MythBusters Episode 187 “Bubble Pack Plunge”

Getting back to the original topic of simply telling people to Google something rather than actually providing them with the source, there is another serious problem there. Namely, Google is a mess. You can find websites supporting pretty much any quack position out there. So for any topic, you will be able to find “sources” supporting a given view, but those sources are often utter crap. As a result, simply telling your opponent to Google something is a terrible idea because they have no way of knowing specifically which sources you read. Imagine, for example, that you tell me that X causes Y and I should just Google it, so I do, but the first 20 or so results that I find are all from unreliable, quack websites. At that point, most people would probably stop looking, but the fact that the most popular hits are junk obviously doesn’t mean that there isn’t a good source out there somewhere. This is once again the problem of trying to prove a negative (i.e., trying to prove that a given source doesn’t exist). It is unreasonable to expect your opponent to spend hours looking for information to support your position. That is your job, not theirs.

Finally, knowing exactly which sources your opponent is basing their arguments on is extremely important, because if they are bad sources, then you can discredit their argument right there and then. To be clear, using bad sources doesn’t mean that the conclusion of the argument is wrong (that would be a fallacy fallacy), but it does mean that the argument itself must be abandoned until such time as reliable sources can be presented. Once again, the person making the claim must provide the evidence to support it, and until they do that, you are under no obligation to entertain their fanciful notions.

At this point, you might be thinking that it is entirely unfair to expect people to have sources to back up all of their claims, and if you are thinking that, then that’s really too bad, because this is simply how logical debates work. If you don’t have the sources to back up your claims, then you shouldn’t be debating people, because no one has to take you seriously until you provide your sources. It’s really that simple. Personally, I make lists and databases of sources on various topics, and every time that I read a useful new article, I archive it into my lists so that I always have the sources ready to back up my positions. Microsoft Word or Excel is just fine if you want to make lists of URLs, but given that peer-reviewed publications are the relevant sources for most scientific debates, I recommend using a PDF organizer (I personally like Mendeley, which is free unless you need to store over 2 GB of PDFs).
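If you are comfortable with a little scripting, the same idea works as a plain CSV file that you can search instantly. Here is a minimal sketch in Python (the file name and column layout are my own illustrative choices, not a prescription):

```python
import csv
from pathlib import Path

SOURCE_FILE = Path("sources.csv")  # hypothetical file with columns: topic, citation, url

def add_source(topic, citation, url):
    """Append a new source to the running list."""
    is_new = not SOURCE_FILE.exists()
    with SOURCE_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["topic", "citation", "url"])  # write the header on first use
        writer.writerow([topic, citation, url])

def find_sources(keyword):
    """Return every stored source whose topic contains the keyword."""
    with SOURCE_FILE.open(newline="") as f:
        return [row for row in csv.DictReader(f)
                if keyword.lower() in row["topic"].lower()]

add_source("measles vaccine efficacy", "King et al. 1991", "https://example.org/king1991")
print(find_sources("measles"))
```

The particular tool doesn’t matter; the point is that your sources are organized and retrievable the moment someone asks for them.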

The point in all of this is really quite simple: if you are claiming that something is true, then it is your responsibility to provide high quality sources to back up your position, and your opponent is under no obligation to refute your claims unless you provide those sources. Indeed, this entire post was summed up nicely by Christopher Hitchens in what has come to be known as Hitchens’ Razor, “What can be asserted without evidence can be dismissed without evidence” (or if you prefer the original Latin version, “Quod gratis asseritur, gratis negatur”).

Suggested further reading:
The Logic of Science — The Rules of Logic Part 5: Occam’s Razor and the Burden of Proof
RationalWiki — Burden of Proof
Science or Not — The reversed responsibility response – switching the burden of proof
Philosophy of Religion — The Burden of Proof
The Free Dictionary — Burden of Proof


Debunking creationism: a visual comparison of “micro” and “macroevolution”

Creationists often like to claim that there are two fundamentally different types of evolution: microevolution and macroevolution. They argue that microevolution does actually occur, but only produces small changes within a species or “kind” of animal. For example, most creationists are generally okay with the concept that all finches evolved from a common ancestor, all crows evolved from a common ancestor, all ducks evolved from a common ancestor, etc. However, they draw the line roughly at the taxonomic level of family (e.g., ducks are in the family Anatidae), and they argue that evolution beyond that level (what they call macroevolution) is impossible and has never happened and can never happen. Thus, they dismiss the notion that finches, crows, and ducks all share a common ancestor. I have written about this before and explained that this distinction is completely arbitrary and meaningless because the exact same evolutionary mechanisms that caused the evolution of finch species could (and indeed did) cause the evolution of all birds. In other words, macroevolution is simply the accumulation of microevolutionary steps, and one inherently leads to the other.

Since I have written a detailed explanation of the problems with creationists’ reasoning, I decided to take a different approach for this post and provide a visual explanation. The image below shows a hypothetical pathway through which turtles could have evolved from their lizard-like ancestors (several of these images are renderings of actual fossils: B6 = Milleretta, A15 = Eunotosaurus, C22 = Odontochelys, B30 = Proganochelys, D37 = Chelydra [modern turtles]; these are just screen shots from Dr. Tyler Lyson’s excellent video). This full progression is, of course, what creationists would consider to be macroevolution, and creationists are adamant that today’s turtle families were uniquely created and did not evolve from a lizard-like ancestor. However, because they accept microevolution, most creationists would have no problem with any particular pair of images, and they would accept that A1 could evolve into B1, B1 could evolve into C1, etc. In other words, each pair of images shows “microevolution” (which creationists almost universally accept), but when we string all of those steps together, we get “macroevolution” (which creationists say is impossible).

You can probably see where I am going with this, but just to be sure, I will state it explicitly. If you are going to say that macroevolution is impossible and turtles could not have evolved from lizard-like ancestors, then which step do you think is impossible? Please show me which step could not have occurred, and justify that claim. Additionally, please explain the obvious transitional fossils. Remember, B6, A15, C22, B30, and D37 are actual fossils, and they perfectly match the expectations for what a transitional fossil should look like (details here). So, if turtles and their lizard-like ancestors were uniquely created kinds, then at what point in this progression do lizard-like reptiles end and turtles begin?

These images are simply screen shots from Dr. Tyler Lyson’s video

At this point, some people will likely be inclined to ignore my questions and harp instead on the fact that this pathway is hypothetical, but that argument completely misses the point in several ways. First, this pathway is only partially hypothetical because B6, A15, C22, B30, and D37 are actual fossils that we have found. Additionally, of course the pathway is partially hypothetical. We will never find every single one of these steps, and we don’t need to in order to defeat creationism. Evolution is very much like the visible light spectrum. Each color gradually fades into the next color without a clear breaking point. In other words, there is a point along the spectrum that is clearly red and there is a point that is clearly blue and there is a point that is clearly violet, but there is a spectrum of change in between those points, and it is not possible to pick an exact point where the blue ends and violet begins, just as you cannot pinpoint the exact step at which the reptile becomes a turtle as we know it. The problem is that the fossil record is extremely incomplete. The conditions that are required for a fossil to form very rarely occur, and the vast majority of dead animals are eaten rather than fossilized. So only a remarkably small subset of animals ever become fossilized. Additionally, even if a fossil is formed, it has to survive for millions of years without being eroded away, and we ultimately have to find it. As a result, the fossil record will always be incomplete, and it is totally unreasonable to expect that we would find every single one of the steps illustrated above. However, what we have been able to find is plenty to refute creationism, because creationism claims that animals were created as distinct kinds. In other words, it does not predict a spectrum. Rather, it predicts that there should be blues and there should be violets, but there should not be intermediate steps. What we actually find in the fossil record is, of course, an incomplete spectrum with lots of intermediates, which is exactly what we expect from evolution. Look at A15 (Eunotosaurus), for example. It’s certainly not a modern turtle, but it’s not a lizard either. It has a mix of traits from both groups. It is precisely the type of intermediate that creationism says shouldn’t exist.

Evolution is like the visible light spectrum (A), with each color (species) gradually changing into the next without a clear point at which one ends and the other begins. Creationists predict something like B, where each “kind” is unique and distinct from the other kinds. What we actually find in the fossil record is C. There are clearly transitionals that creationists say shouldn’t exist, but because the fossil record is incomplete, we will never have every single step in the spectrum.

Finally, the argument that this pathway is meaningless because it is partially hypothetical misses the point because it is absolutely fine to use hypotheticals to defeat absolute claims. Creationists claim that macroevolution cannot happen, and this pathway shows that it can happen. In other words, to defeat the claim that macroevolution is impossible, I don’t need to prove that this pathway actually occurred; rather, I simply have to show that a pathway is possible, which it clearly is. We can of course do this for tons of examples of macroevolution. For example, scientists have known the steps involved in the evolution of an eye for a very long time, and a close examination of the structure of bacterial flagella has shown that it is entirely possible for flagella to have evolved gradually.

In short, if you are going to insist that macroevolution is impossible, then I want you to look at the evolution of the turtle and tell me which step is impossible (and justify that claim). I also want you to explain the existence of the known intermediate fossils (without committing an ad hoc fallacy), and I want you to tell me the exact point at which modern turtles first appear. If you are going to comment in defense of creationism, then I expect an actual answer to those questions.

Note: Please read this post before bringing up the fundamentally flawed “irreducible complexity argument” (spoiler alert: it ignores the fact that evolution is blind and each step simply needs to be useful for something, rather than being useful for a particular end product).

Note: Before anyone responds by saying that “Darwin himself said that the notion that an eye could evolve was ‘absurd,’” please realize that this argument misquotes Darwin. What he actually said was that saying that an eye could form naturally seems absurd…until you understand evolution. He went on to explain how the eye could have evolved via natural selection.



Who reviews scientific papers and how do reviews work?

I spent my afternoon reviewing a paper for a scientific journal and making a recommendation about whether or not the paper should be published. As a scientist, this is not an uncommon task for me, but it is a process that is largely foreign to the general public. Indeed, the peer-review system often seems to be a mystery to those who don’t participate in it, and, as a result, it is a frequent topic for this blog. For example, I have previously written about what it takes to publish a paper. However, I have not previously written a post specifically about what it is like to be a reviewer or even who reviewers are. So, I thought I would take this opportunity to explain the process from a reviewer’s point of view and offer you a window into the system that determines which papers get published.

 

Who are reviewers and how are they chosen?
In short, reviewers are scientists. The peer-review system is rather interesting because essentially everyone involved with it acts as both an author and a reviewer. In other words, reviewers are themselves scientists who also submit papers for review. This is a good system, because it means that scientific papers are being reviewed by other scientists, not by politicians, corporations, etc.

Nevertheless, reviewers obviously aren’t chosen at random from within the scientific community. Rather, journal editors choose them based on relevant experience and expertise. This can happen in several ways. Often, journals require you to recommend reviewers when you submit your paper. In other words, as part of the submission process, you have to nominate several people to serve as reviewers, as well as providing contact information and (often) a justification for why they would make suitable reviewers. As a general rule, you want to suggest people who have published similar papers (often papers that you cited in the paper you are submitting) and who aren’t in any way affiliated with you (this insightful post provides more details about how to select reviewers). Ultimately though, the editor has the final say, and he/she will consider your suggestions and make the final call.

Another common method that journals use to select reviewers is simply to recruit reviewers during the paper submission process (or via society memberships). In other words, the submission form often includes an optional check-box that says something to the effect of “I am willing to be a reviewer for this journal.” Generally, this is also accompanied by a section where you list your areas of expertise. Thus, if you are submitting to a herpetology journal (i.e., reptiles and amphibians) and you check the box and list yourself as a turtle expert, then you will go on the list of potential reviewers for turtle papers.

A third mechanism that editors use is simply to contact authors who recently published similar research in the journal that they edit. Indeed, after publishing a paper in a particular journal, it is quite common for the editor of that journal to contact you about reviewing a different paper for that journal (this has happened to me several times).

Finally, editors may choose simply to send the paper to someone who they know does research in a similar area or who was recommended to them by someone else. For example, the editor may have recently read a similar paper and contact one of the authors of that paper. Indeed, the paper that I reviewed today was in a journal that I have never published in or signed up to be a reviewer for, and the paper did not cite my previous work, so I doubt that the authors recommended me. However, I have previously published extremely similar and relevant research (honestly, it is pretty irritating that the authors didn’t cite me), so I suspect that I was selected by the editor because of my previous papers on this topic.

Regardless of how they are chosen, the point is that reviewers are generally experts on the topic of the paper that they are being asked to review. In other words, editors select scientists who have the necessary skills, knowledge, and experience to assess the paper and determine whether or not it is worthy of publication. Usually, at least two reviewers are selected per paper, but it is not uncommon to have three reviewers, and some journals use four or more.

Who can’t be a reviewer?
Most publishing scientists also serve as reviewers, but not all scientists are eligible to be a reviewer for a given paper. First, to be a potential reviewer, you have to have expertise in the particular field that the paper is about. I, for example, am predominantly a herpetologist, so I would never get asked to review a paper on physics. For that matter, I wouldn’t even get asked to review a paper on botany. I might, however, be asked to review papers on other areas of zoology, but only if the topic of the paper was closely aligned with topics that I study (e.g., one of my areas of research is population ecology, and although I study the population ecology of reptiles and amphibians, the same concepts, statistics, etc. apply to other taxa, so I have the necessary skills and knowledge to assess a population paper on birds, for example).

A second criterion is a lack of conflicts of interest. Exactly how that is defined is variable, but as a general rule, reviewers should not be in any way associated with the paper in question, and they should not be institutionally linked to any of the authors. I could not, for example, serve as a reviewer for a paper that was written by another graduate student in my lab, because that would be a conflict of interest (i.e., even though I would try to be objective, I would be less likely to criticize the paper because I work with and like the authors).

Finally, in many cases, authors can recommend people who should not serve as reviewers. Editors are under no obligation to follow these recommendations, but if there is someone who you really don’t want reviewing your paper, you can make that case. For example, if you have a long-standing rivalry with someone and, as a result, that person likely would not be objective, then you can argue that he/she should not be selected as a reviewer. To be clear though, there needs to be a legitimate reason why that person would not make a good reviewer, and you can’t include someone in that list simply because you are concerned that they will find problems with your paper.


What happens when you receive a request to review a paper?
Each journal is different, but this is generally how things play out. First, you receive an email from an editor asking if you would be willing to review a given paper. This email usually includes the abstract for the paper, or, at the very least, the topic that the paper is on. It may also include other information such as the number of pages, words, figures, and/or tables in the paper. Sometimes this email will have information about the reviewing standards of that journal such as whether or not the review is anonymous and the time frame in which you are expected to provide the review.

As a reviewer, you then look at the information that you have been given and decide whether or not the paper is on a topic that you are sufficiently knowledgeable about, whether or not you have any conflicts of interest, and whether or not you currently have the time to review it. You then respond to the editor to tell them whether or not you are willing to be a reviewer. If you reject the request, it is generally considered good form to suggest an alternative reviewer (which is another way that editors identify potential reviewers).

Once you accept your duty as a reviewer, you generally receive a copy of the paper as well as paperwork on confidentiality. Some journals also provide you with a review template that you are supposed to fill out, whereas others let you provide feedback in whatever way you see fit. Generally, with either system, the final review will consist of several summary statements about your views on the paper, a list/explanations of your major criticisms, and a list of comments on specific lines (these are either annotations to the original document, or a list with corresponding line numbers).

What do reviewers look for?
Imagine that you are a reviewer, and you have just been given a paper to read. What are you going to look for? There are actually lots of things that you should look for, but the key thing to keep in mind is that the job of a reviewer is to assess the quality of the research rather than acting as an editor. In other words, reviewers do look at things like grammar, readability, and the presentation of the data (e.g., good/appropriate figures and tables), but their main duty is to act as a filtering mechanism that blocks bad science from getting published and helps authors to improve potentially useful research.

As such, the most important thing that reviewers look at is the methodology. They check things like sample size, experimental design, the statistics that were used, how the statistical models were set up, etc. All of this is intended to identify poor methodology and ensure that the study was done correctly. Following that, reviewers will check to see if the results were reported and discussed properly. It is not at all uncommon for authors to jump to conclusions that are not supported by the data, and it is the reviewers’ job to rein them back in and make sure that all of the conclusions are merited.

Additionally, reviewers are tasked with making sure that the paper is well grounded in the scientific literature. All research inevitably builds on previous research. As such, papers are supposed to cite and discuss relevant papers, especially if similar studies reached different conclusions, so reviewers check to make sure that this was done appropriately. This is another reason why it is important for reviewers to be experts on the given field. As experts, they know the literature, so if important papers are missing, they will be able to point them out. Indeed, it is exceptionally common for reviewers to suggest specific papers that the authors should have cited (I made several such suggestions on the paper that I reviewed today).

Finally, it is the reviewers’ job to actually be helpful to the authors. Many otherwise nice people become utter jerks when reviewing, but the idea is actually for reviewers to provide constructive criticism that will help the authors improve their paper. Thus, reviewers are asked not only to point out problems with the paper, but also to suggest ways to fix those problems (e.g., “your statistical method is inappropriate, and you should use method X instead”).

What do reviewers recommend?
You have now read the paper and made a list of comments, which means that it is time for you to make a decision. Does this paper deserve to be published? You generally have 4–5 options to choose from (some journals don’t use option 2).

  1. Reject without the option to resubmit — This means that the paper is seriously flawed and will be rejected without further consideration.
  2. Reject with the option to resubmit — This means that the paper has serious flaws, but it also has merit if those flaws can be corrected. The authors can then revise the manuscript based on your comments and resubmit it to that journal. At that point it goes back out for review, and you will often (but not always) be asked to review the revised paper.
  3. Accept pending major revisions — This means that the study has merit, but there are still some substantial issues that need to be addressed. If the authors can correct those errors to the editor’s satisfaction, then it will be accepted for publication without further review.
  4. Accept pending minor revisions — This means that the paper is solid, but there are some minor issues that need to be dealt with before it will be published.
  5. Accept in its current form — This is a theoretical state in which a paper is accepted without any changes being required. I’m not convinced that it ever actually happens to real papers (it definitely hasn’t in my experience as either an author or reviewer).

After you make and justify your decision, the editor will look at your comments as well as the comments made by the other reviewer(s), then make the final decision about the fate of the paper. This system of having several reviewers is another strength of the peer-review process, because even if one reviewer does a crappy job and misses major flaws, the other reviewer(s) are there to pick up the slack.


Why do scientists serve as reviewers?
It is worth mentioning that you don’t get paid to be a reviewer. It is entirely a volunteer service, and it is quite time-consuming. I, for example, had a large data set that I was hoping to analyze this afternoon, but instead I spent six hours reviewing a paper, and I never got around to the data (and this was a fairly short paper). So you may be wondering why on earth scientists do it. Why don’t we always just reject requests to review papers? I obviously can’t speak for every scientist, but I can tell you why I do it and take it seriously, and I know many other scientists who feel the same way. So while I can’t offer anything statistical, I suspect that my reasons are fairly common ones.

First, it is a matter of simple reciprocity. Every time that I submit a paper, I am imposing on other scientists to take time out of their busy schedules and review my research. As such, it is only fair that I then take time out of my schedule to review other scientists’ research. If everyone tried to pass the buck, then all of the reviews would be conducted by an increasingly small and disgruntled group of scientists, and that would be bad for everyone. So acting as a reviewer is really just paying my dues as a member of the scientific community.

Second, as a reviewer, I get to play an active role in ensuring the quality of the research in my field, and that is a duty that I take very seriously. I obviously care greatly about my field and the advancement thereof, and I want the papers in my area of research to be of the highest quality possible. So, by serving as a reviewer, I get to block flawed research, promote high quality research, and make recommendations about how to improve research. This lets me extend my influence and impact on my field far beyond my own publications, and I see that as both a duty and a privilege.

To put this another way, I think that being a reviewer is an enormous responsibility. I know what it takes to do original research. I know the amount of work and effort that is involved, and I know what it is like to have all of your work torn to shreds by a reviewer. So, when I receive a paper, I always want it to be good and publishable, because I want those scientists to be rewarded for their extraordinary effort. At the same time though, when I agree to review a paper, I accept responsibility for preventing bad research from being published, and that responsibility motivates me to make absolutely sure that the research is solid before I give it my stamp of approval.

Summary
In short, reviewers are simply other scientists who have been matched with a paper based on their experience and expertise. It is their duty to carefully examine the paper, determine whether or not it was conducted correctly, and recommend if it should be published or rejected. Scientists do not get paid for this service, but it is an important task that researchers tend to take seriously.

Note: Journals vary widely with regards to anonymity. In some cases, reviewers don’t know who the authors are and authors don’t know who the reviewers are. In other cases, reviewers have the authors’ names, but authors don’t know who the reviewers are, and in yet other cases, both parties know the identities of the other party, and some reviews are even made public. There is an interesting debate about which system is best, and I had planned on going into it, but this post became longer than intended, so I will save that topic for a later post.

This flowchart summarizes the steps required to publish a peer-reviewed paper. See this post for details.


Measles is not better than autism: Debunking anti-vaccine arguments

Over the weekend, I was unfortunate enough to come across an article by Jaclyn Harwell at “Modern Alternative Health” called “5 Reasons That Measles is Better Than Autism.” Unsurprisingly, it was full of misinformation and shoddy arguments. Indeed, it was so full of counterfactual claims and dishonest distortions of reality that I felt compelled to write a rebuttal, especially since the faulty arguments contained in the post are prevalent among antivaccers. Therefore, I am going to dissect that post and explain why it is nonsense. Before I get to Jaclyn’s “5 reasons,” however, I need to deal with several serious problems in the opening statements of the article.

First, this post is fundamentally flawed because the entire thing is based on the false dichotomy that you have to choose between vaccines and autism. In reality, of course, vaccines do not cause autism. As I explained at length in this post, numerous enormous studies have tested the vaccine/autism hypothesis and failed to find any evidence of vaccines causing autism. The only studies that suggested that vaccines cause autism were tiny and riddled with problems. Anecdotes don’t matter, deceptive documentaries are irrelevant, and the “CDC whistle-blower” did not present any actual evidence of vaccines causing autism (or of CDC corruption, for that matter). Only scientific evidence matters, and science is overwhelmingly on the side of vaccines.


Second, the core argument throughout the article is that measles is a mild illness, but that argument is a blatant lie. I’ll deal with this at length under reason #5, but briefly, in developed countries with access to modern medicine, measles still has a death rate of 1 in 1,000 infected individuals. Nevertheless, it is true that in terms of sheer numbers, there are few measles deaths in industrialized countries today, but that is only because of vaccines! Indeed, in the 10 years prior to the introduction of the measles vaccine, measles killed an average of 440.3 children each year in the US alone (details and sources here; also note that the death rate per infected individual was roughly 1 in 1,000 then as well). Further, when we look at the entire world, measles still kills well over 100,000 people annually, and the World Health Organization describes it as, “one of the leading causes of death among young children” (WHO 2016). That’s not fear-mongering, that’s a fact. So while anti-vaccers want you to think that measles is trivial, actual epidemiologists have a very different view of this disease.

Further, even when children don’t die from measles, there is still a 1 in 10 chance that they will develop an ear infection (which can cause permanent hearing loss), a 1 in 20 chance that they will develop pneumonia, and a 1 in 1,000 chance that they will develop encephalitis (a swelling of the brain; CDC 2015a). There are also other complications such as febrile seizures in 0.1–2.3% of patients (Orenstein et al. 2004). Indeed, during a large outbreak in France, nearly 25% of victims had to be hospitalized (Antona et al. 2013), and in the US from 2001–2008, 40% of measles patients were hospitalized (CDC 2011). You simply cannot describe a disease that hospitalizes 25–40% of its victims and kills over 100,000 people annually as a minor illness (WHO 2016). That is extremely dishonest (see reason #5 for a more thorough explanation of just how deadly this disease truly is).

Please carefully note how the measles outbreaks are centered around the communities with low vaccination rates. Image from Knol et al. 2013

Third, the author claims that, “in the majority of outbreaks, most affected people have, indeed, been vaccinated.” This is an extremely common anti-vaccine argument, and it is horribly flawed. First, in the case of measles, the claim itself isn’t even true. For example, in the 2001–2008 study that I mentioned earlier, 89% of patients were unvaccinated (CDC 2011). Similarly, during an enormous outbreak in France that involved over 20,000 people, 80% of patients were unvaccinated (Antona et al. 2013), and during a large outbreak in the Netherlands, 91.7% of patients were unvaccinated (Knol et al. 2013). Those are not cherry-picked examples. Rather, they are the norm for measles outbreaks.

Additionally, and far more importantly, this argument ignores rudimentary math. You have to look at the proportions, not the raw numbers. Most people are vaccinated, so of course many outbreaks will affect vaccinated people. Simply looking at the raw number of cases doesn’t tell us whether or not vaccinated people actually get the disease more often, and when we look at the actual rates, we find that infection rates are substantially higher among the unvaccinated (King et al. 1991; Schmitz et al. 2011). To give a completely analogous example, most car accidents involve sober drivers, but that doesn’t mean that driving drunk isn’t dangerous. Most people drive sober. Therefore, in terms of raw numbers, most accidents involve sober drivers. However, when you look at the rates, you find that the rate of accidents is far higher among drunk drivers than among sober drivers.
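To make the math concrete, here is a quick sketch (in Python, with hypothetical numbers chosen purely for illustration) of how the vaccinated can supply the majority of cases in an outbreak even though their infection rate is an order of magnitude lower:

```python
# Hypothetical community of 100,000 people with 95% vaccine coverage.
population = 100_000
vaccinated = int(population * 0.95)      # 95,000 people
unvaccinated = population - vaccinated   # 5,000 people

# Hypothetical attack rates during an outbreak: 5% among the
# vaccinated versus 50% among the unvaccinated.
cases_vaccinated = vaccinated * 0.05     # 4,750 cases
cases_unvaccinated = unvaccinated * 0.50 # 2,500 cases

share_vaccinated = cases_vaccinated / (cases_vaccinated + cases_unvaccinated)
print(f"{share_vaccinated:.0%} of cases are vaccinated people")   # ~66%
print(f"but the unvaccinated infection rate is {0.50 / 0.05:.0f}x higher")
```

In other words, a headline like “most of the infected were vaccinated” is exactly what you would expect when nearly everyone is vaccinated; only the rates tell you whether the vaccine works.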

Finally, the author insinuates that “the efficacy of vaccinations is questionable at best.” Again, this is simply not true, especially for the measles vaccine. Even beyond the examples that I cited earlier of outbreaks corresponding to unvaccinated communities and higher measles rates among the unvaccinated, many other studies have examined the efficacy of the measles vaccine, and it is quite high (93% with one dose and 95–97% with two doses; King et al. 1991; CDC 2015b). For example, Clemens et al. (1988) found that introducing the measles vaccine into a population reduced the measles death rates by 57%, but please go ahead and say that the vaccine isn’t effective or important (note the immense sarcasm).

Now that we are clear on the actual facts, let’s look at the five arguments in the article. My intention is to address the original article point by point, but the original jumped around quite a bit and didn’t follow a great logical structure. As a result, I thought that it might be useful to provide a bullet list of key points and the sections in which they are discussed.

  • Vaccines don’t cause autism (introduction, #1)
  • Getting measles to avoid getting measles is idiotic (#1)
  • Measles actually weakens the immune system (#2)
  • Neither measles nor autism can be cured, but measles can be prevented (#4)
  • Total measles mortalities are relatively low because of vaccines (#1, #5)
  • Measles still kills over 100,000 people annually (introduction, #1, #5)
  • Without vaccines, measles would kill over 1 million people annually (#5)
  • Without vaccines, at least 0.35 in 100 children under the age of five would die from measles each year (#5)

Note: To be clear, if you want to say that surviving a measles infection without any serious or lasting consequences is better than a life-long affliction with severe autism, fine, I’m not going to disagree with you. What I take issue with is the dishonest and misleading claims about measles mortality rates, the dangerous and unmerited vilification of vaccines, and the damaging way that autism is presented. We need to stop talking about autism as if it is the world’s worst disease and as if people with autism are inferior, damaged, and in need of repair. That is an indignity that autistic people should not have to suffer. Autism is not the worst thing that could happen to your child.

 

Bad reason #1: “Measles imparts lifelong immunity.”
I have repeatedly had to address this argument from anti-vaccers (for example here and here), which is frustrating because it is such an obviously ridiculous argument. We can rephrase this argument simply as, “getting measles is good because it prevents you from getting measles.” Think about that for a second. This argument is actually proposing that you should avoid getting sick by getting sick. In contrast, a vaccine will prevent you from ever getting sick. In other words, a vaccine prevents you from ever getting measles, whereas measles only prevents you from getting measles a second time. Using a measles infection to protect you from a measles infection is like using pregnancy as a contraceptive.

To be fair, some studies have found that vaccine-induced immunity does not last as long as natural immunity (Christenson and Bottiger 1994), but other research has found that both forms last equally well for many years (Jokinen et al. 2007), and since measles is predominantly a childhood disease, those first years are the really important ones. Additionally, the longevity of immunity can easily, safely, and effectively be extended with boosters, which, once again, prevent you from getting measles in the first place. Further, I have already provided multiple citations showing that the measles vaccine is very effective and infection rates are much higher among the unvaccinated. In other words, this anti-vaccine argument makes it sound like getting measles is the best way to avoid getting measles, but that is clearly ridiculous. The vaccine is by far the best way to avoid ever having to suffer through measles.

Jaclyn goes on to reiterate that measles “isn’t a big deal” by citing the fact that between 1950 and 1960 the death rate for the US was less than 1 per 100,000 individuals (for the entire population, not per infected individual). As I showed earlier, however, that comes out to well over 400 deaths annually, which is in fact a big deal (also see #5 for an explanation of why the 1 in 100,000 figure is misleading). Even if vaccines caused autism (which again, they don’t), a life of autism is not worse than death by measles.

She also claims that there were no deaths among the 1,153 US cases of measles from 2001–2013. First, that claim is not true. There were two measles deaths from 2001–2003 (CDC 2004), another two in 2009 (Kockanek 2011), and another two in 2010 (Murphy 2010). Nevertheless, the death rates are low, but this argument totally ignores the fact that the reason that we have so few deaths is because we have so few cases of measles, and the reason that we have so few cases is because we have vaccines! Vaccines are the only reason that we don’t have thousands of measles deaths each year. Indeed, it is estimated that between 1994 and 2013, the measles vaccine prevented >70 million cases of measles in the US, which comes out to a total of 57,300 deaths prevented according to the calculations used by the authors (Whitney et al. 2014). That number may, however, be higher. If, for example, we simply apply the normally accepted 1 in 1,000 death rate, then it would be 70,000 deaths (3,500 measles-induced deaths annually). Either way, the point is that the vaccine prevents thousands of deaths each year, and you simply cannot pass that off as a minor thing.
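As a quick back-of-the-envelope check on those figures (the inputs are just the numbers quoted above):

```python
cases_prevented = 70_000_000  # measles cases prevented in the US, 1994-2013 (Whitney et al. 2014)
death_rate = 1 / 1_000        # commonly accepted measles deaths per infection
years = 20                    # 1994 through 2013, inclusive

deaths_prevented = cases_prevented * death_rate
print(deaths_prevented)          # 70,000 deaths prevented over the period
print(deaths_prevented / years)  # 3,500 deaths prevented per year
```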

Finally, she quotes a “study” which said that, “The mass of scientific evidence compiled by researchers clearly indicates that the incidence of autism occurs following vaccination and is most closely associated with the schedule of vaccines culminating in the MMR vaccine” (Ewing 2009). However, that “study” was not actually a study. It wasn’t even a proper review. It was an entirely speculative opinion piece that made one false claim and unmerited assumption after another. It was based on a correlation fallacy, and it never once provided actual evidence to support the quoted statement. Further, it ignored the vast body of large studies that failed to find any evidence of vaccines causing autism. In other words, all that this paper did was propose a hypothesis, but that hypothesis has already been thoroughly tested and falsified.

Indeed, even the first sentence of the paper is demonstrably false. It says, “that the occurrence of autism has risen steadily in the last decades is not in dispute.” In reality, the rise in autism rates is very much in dispute, with many studies concluding that it is at least largely due to a change in diagnostic criteria rather than an actual increase (Rutter 2005; Taylor 2006; Bishop et al. 2008; Baxter et al. 2015; Hansen et al. 2015). When I first looked at this paper, I was baffled by how a paper that opened with such a clearly false statement could possibly have passed peer-review. Then, I checked the journal it was published in, and it was a journal that is so minor that it doesn’t even have an impact factor (in other words, the scientific community doesn’t take it very seriously). You should always be wary of journals like this.

Bad reason #2: “Measles strengthens the immune system.”
No it doesn’t. I explained this in detail here, but in short, after a measles infection, your body produces antibodies that are specific for measles. So it only “strengthens” your immune system in that you can’t get measles again. It doesn’t help you fight any other infections. Further, as I explained under #1, thanks to vaccines, you can get those exact same antibodies without actually getting measles.

Further, recent research has shown that measles infections are so hard on your immune system that it actually takes up to 2–3 years for a child’s immune system to return to normal functional levels (Mina et al. 2015). In other words, measles infections weaken the immune system for several years, and these weakened immune systems lead to infections and deaths that aren’t generally attributed to measles. As a result, the actual measles death tolls are higher than we realize (not to mention that these additional infections often come with lasting side-effects).

Finally, it is true that associations with some microorganisms help to prevent autoimmune disorders, but that is actually because those organisms “train” the immune system not to over-react. So they aren’t “strengthening” the immune system, they are training it to give a reduced response (a “strengthened” or “boosted” immune system is actually what causes autoimmune disorders). Further, the microbes that do this are generally beneficial or benign, not pathogenic (Guarner et al. 2006). So there is no evidence that a measles infection will help you out later down the road.

 

Bad reason #3: “Autism causes long-term damage.”
I certainly don’t deny that autism can cause life-long problems (though I would like to stress that autism is a spectrum of disorders, and many people who are clinically diagnosed as autistic are very high functioning and live normal lives). However, in this section, Jaclyn reiterates her claims that measles isn’t dangerous, a measles infection gives life-long immunity to measles, etc. This section is really just a rehash of previous sections, so I won’t belabor the point.

 

Bad reason #4: “Measles is easier to cure than autism.”
I’m not arguing against the core statement here, but the pseudoscience that she uses to try to back up that statement is deplorable. For example, she opens with, “There’s actually no cure for measles. Know why? Because it’s a benign childhood disease” (her emphasis). First, there is no cure because measles is a viral infection, and viral infections are notoriously hard to cure (that is why we are also lacking cures for HIV, H1N1, the common cold, etc.). Second, another reason that there is no cure is that most people aren’t looking for a cure, because we have vaccines. In other words, we know that vaccines work very well, so it is more effective to invest money in increasing vaccine coverage, rather than in looking for a cure. Third, once again, measles has a death toll of over 100,000 people annually, and, as I’ll explain in #5, without vaccines that number would be well over 1,000,000. You simply cannot describe that as “benign.”

Next, she further tries to downplay the severity of measles by claiming that treatment with vitamin A reduces death rates by 62%. You’d think that the fact that she is talking about reducing death rates would make her rethink her claim that measles is “benign,” but apparently not. Nevertheless, there is some evidence that vitamin A is useful in overcoming a measles infection. However, citing that 62% figure without context is very deceptive, because it comes from a meta-analysis of studies in Africa where vitamin A deficiencies are major problems (Sudfeld et al. 2010). In industrialized countries, however, most of us have plenty of vitamin A, and vitamin A is a standard part of measles treatments (CDC 2016a). In other words, that 1 in 1,000 death rate that we talked about earlier already includes vitamin A treatments, but if you read Jaclyn’s post, it sounds like vitamin A is a way for you to improve those odds.

Finally, she claims that autism can be cured, which is a dangerous way to give parents false hope. There is currently no cure for autism (after all, it’s largely genetic). There are ways to help manage it, but not cure it (please don’t flood the comments with anecdotes, because anecdotes are not valid scientific evidence. If you don’t have large, properly controlled clinical trials that were published in reputable journals, then you don’t have any evidence).

Note: If you want to be pedantic, neither measles nor autism can be “cured” because your body, not a medication, is actually what fights the measles infection. So if we are going to succumb to pedantry, the core claim of this argument isn’t technically true.


Bad reason #5: “Measles complications are uncommon.”
Much of this section is a regurgitation of previous arguments, so I will focus on the novel parts. The first of which is the claim that 1 in 45 children in the US have autism, coupled with the claim that 1% of the entire world’s population is autistic. I don’t have too much trouble with the 1 in 45 claim because it came from a legitimate study (though it is worth noting that the estimate that is currently accepted by health organizations is actually 1 in 68 children in the US; CDC 2016b), but the 1% figure is misleading. No source is given, but the source that I found lists it as 0.6%, not 1%, and when we are dealing with the world’s entire population, that rounding error is substantial (roughly 28 million people; Elsabbagh et al. 2012). Also, I have a serious problem with the way that those numbers are being used here. Jaclyn is clearly trying to use them to argue that autism is a very common thing, and, therefore, should be feared. However, autism is a spectrum of disorders, and early in the post, Jaclyn said that her post was “referring to children with severe, regressive cases of autism, not those that are high-functioning and more self-sufficient.” Do you see the problem? It is extremely dishonest to cite the overall autism rate if you are only talking about the most extreme versions of it. In other words, 0.6% of the world’s population has some form of autism, but only a small subset of that 0.6% has the type of severe autism that this post was supposedly about. So this is yet another instance of her using numbers dishonestly in a failed attempt to support her flawed position.
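For what it’s worth, the size of that “rounding error” is easy to check (a rough calculation assuming a world population of about 7 billion):

```python
world_population = 7_000_000_000     # rough figure for the era of Elsabbagh et al. 2012
claimed = 0.010 * world_population   # the article's 1% figure
reported = 0.006 * world_population  # the 0.6% figure from Elsabbagh et al. 2012
print(claimed - reported)            # ~28,000,000 people of difference
```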

Next, she makes the argument that most measles deaths are actually from developing countries that have poor sanitation standards. The claim itself is true, but the insinuation that sanitation standards are the cause of the infections is false. The deaths are from those countries because they are the ones that don’t have vaccines. As I explained here and here, the sanitation standards in the US were essentially the same in the 50s as they are now, yet they still had hundreds of measles deaths each year. Why? Because they didn’t have vaccines. Similarly, when we introduce vaccines into countries with very poor sanitation standards, the death rates drop (Clemens et al. 1988). Why? Because vaccines work (King et al. 1991; CDC 2015b). Further, as I explained earlier, industrialized countries continue to have large measles outbreaks when vaccination rates drop, and some of those outbreaks result in unnecessary deaths (Antona et al. 2013; Knol et al. 2013).

She then goes on to act as if the 100,000+ annual deaths from measles aren’t really a big deal because they only represent a small portion of the total population. First, I find that attitude unconscionable, since those deaths are preventable. Second, vaccines are the only reason that the death rate is so low! Even if we back the clock up to the year 2000, the death toll was 546,800 (WHO 2016). Why did it drop so rapidly between then and now? Because we have had a massive vaccination campaign and have taken vaccines to many developing countries, including ones that lack sanitation. Indeed, it is estimated that between 2000 and 2014, the measles vaccine prevented over 17.1 million deaths (WHO 2016). That’s 1.14 million per year. So you simply cannot present the current number of mortalities as evidence that we shouldn’t vaccinate, because the current numbers are low because of vaccines.

Additionally, there is a huge problem here because the autism rates and measles death rates aren’t exactly comparable. For example, she claims that 1% of the world’s population has autism (that should be 0.6%), whereas only 0.002% of the world’s population dies from measles each year. Her argument seems to be that the odds of having autism are far greater than the odds of dying of measles, but that is not a fair comparison because measles deaths are not distributed evenly by age, and roughly 55% of deaths occur in children under 5 (Orenstein et al. 2004). So when you want to look at risk, you can’t simply divide the death rate by the world’s entire population (as she did) because children that die at a young age aren’t recruited into the older age groups. Thus, her calculations result in a gross underestimate of the mortality risk. Let’s get around that problem by looking at the risk that a child will die from measles before reaching five years of age, which is the time period during which 55% of measles mortalities occur (Orenstein et al. 2004).

There are currently roughly 177 million children under the age of five (census.gov/popclock), which gives us a mortality rate of 0.036% per year*. You might think that this means that there is a 0.6 in 100 chance of developing autism (using world-wide data), and a 0.036 in 100 chance of dying of measles before age five (using world-wide data), but that’s not actually fair either, because that estimate uses the death rates given our current vaccination status. If we remove vaccines, we jump to 1.14 million deaths per year, and our risk of death for children under five jumps to 0.35 in 100. At this point the mortality and autism rates are quite similar, but we aren’t even done yet, because Jaclyn specified at the beginning that she was talking about severe autism. I don’t know exactly what she counts as “severe,” but let’s assume that it is the top 50% of cases. Well then, guess what: at that point, the autism risk for a child is 0.3 in 100, whereas the risk of measles death before the age of five is 0.35 in 100** (i.e., death from measles is more likely). Now, obviously the exact numbers will vary by country and what you count as “severe autism,” but the point is that when you actually do the math, in the absence of vaccines, the measles death rate would be very close to (and probably even greater than) the current rate of severe autism. Also note that this is just the probability of death prior to age five, but 45% of measles deaths happen at later ages, so the overall mortality risk is actually much higher.***

In other words, if you want to agree with Jaclyn’s argument, you have to argue that death by measles is better than a life with severe autism, because if everyone stops vaccinating, then the risk of a child dying from measles will be roughly equal to the current risk of a child developing severe autism. So that (according to Jaclyn) is what you are choosing between: death or life with a disability. In reality, of course, you don’t have to choose, because vaccines don’t cause autism. So your children can enjoy an exceptionally low risk of death by measles without it affecting their risk of developing autism.

*Note: I calculated the risk by taking the number of annual measles deaths (114,900; WHO 2016), multiplying by the proportion of mortalities that occur in children under five (0.55), dividing by the number of children under five on the planet (177 million), then multiplying by 100 to convert it into a percentage. The calculation for the death rate without vaccines was identical, but it used 1.14 million deaths instead of 114,900.
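Written out explicitly (with vaccines and without vaccines, respectively), that calculation is simply:

$$\frac{114{,}900 \times 0.55}{177{,}000{,}000} \times 100 \approx 0.036\% \qquad\qquad \frac{1{,}140{,}000 \times 0.55}{177{,}000{,}000} \times 100 \approx 0.35\%$$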

**Note: the risk of death is actually much higher than I described here because I left out another really important factor. Namely, the autism risk is estimated per child (you only “get” autism once), but the measles deaths are per child per year, so there are four opportunities for death. I left this out of the calculations because the math is convoluted since most children only get measles once. As a result, the probabilities change each year, and the math was more intense than I felt like explaining here, but I thought it was worth at least mentioning that my estimate is actually a gross underestimate. (I’ve found that few people bother to read math-heavy posts, which is quite unfortunate).

***Note: I was too lazy to calculate the overall mortality risk, but if anyone feels like doing it, it is a cumulative probability. So you calculate the probability for each age class, then sum those. You have to do it this way because anyone who dies of measles in one age class is automatically eliminated from all subsequent age classes. In other words, if you try to do the math using a large age range (say 1–20) you won’t get reliable results, because the mortality risk for a 20-year-old is vastly different from the risk for a 1-year-old and, as a result, many 1-year-olds die and never make it to 20. The best way to think about this is that for each age bracket, you are calculating the mortality risk given that you survived to enter that age bracket. Thus, the mortality risk for children under five is quite high, but if you survive to age five, then the mortality risk before reaching age eight is much lower, and the overall mortality risk for a 0–7-year-old is the sum of the risk from ages 0–4 and the risk from ages 5–7.
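If anyone does feel like attempting it, here is a minimal sketch of that cumulative calculation in Python. The per-bracket risks below are hypothetical placeholders (only the 0–4 figure comes from the no-vaccine estimate above); the point is the method of conditioning each bracket’s risk on survival into that bracket, not the numbers.

```python
# A minimal sketch of the cumulative mortality-risk calculation.
# The per-bracket risks are HYPOTHETICAL placeholders, except the 0-4
# value, which is the no-vaccine estimate from this post.

bracket_risks = [
    ("0-4", 0.0035),   # 0.35 in 100, from the estimate above
    ("5-7", 0.0005),   # hypothetical
    ("8-19", 0.0002),  # hypothetical
]

surviving = 1.0   # fraction of the cohort still alive
total_risk = 0.0  # overall probability of dying of measles

for bracket, risk in bracket_risks:
    # Only those who survived the earlier brackets are at risk here,
    # so weight each bracket's risk by the fraction still alive.
    total_risk += surviving * risk
    surviving *= 1 - risk
    print(f"through ages {bracket}: cumulative risk = {total_risk:.4%}")
```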

 

Conclusion/summary
In short, measles is in fact a very serious disease. It currently kills well over 100,000 people annually, and it is estimated that without vaccines that number would be over one million. Further, you have to remember that the majority of measles deaths occur in children under five years old, so without vaccines, we would expect a minimum of 0.35 deaths for every 100 children under the age of five. That is not something that should be taken lightly or described as “benign.” Additionally, her entire argument is based on a false dichotomy, because vaccines don’t cause autism. So you don’t have to choose between the risk of a measles death and the risk of autism, because getting vaccinated does not increase your chance of developing autism. In sum, her post made one unscientific, misleading, and downright dishonest claim after another. The measles vaccine is extremely safe and effective, and you should not give in to the baseless fear-mongering.

Citations

  • Antona et al. 2013. Measles elimination efforts and the 2008–2011 outbreak in France. Emerging Infectious Diseases 19.
  • Baxter et al. 2015. The epidemiology and global burden of autism spectrum disorders. Psychological Medicine 45:601–613.
  • Bishop et al. 2008. Autism and diagnostic substitution: evidence from a study of adults with a history of developmental language disorder. Developmental Medicine & Child Neurology 50:341–345.
  • CDC. 2004. Epidemiology of Measles—United States, 2001–2003. Morbidity and Mortality Weekly Report 53:713–716.
  • CDC. 2011. Measles — United States, January–May 20, 2011. Morbidity and Mortality Weekly Report 60:666–668.
  • CDC. 2015a. Measles (Rubeola): Complications of Measles. Accessed 20-Aug-16.
  • CDC. 2015b. Vaccines & Immunizations: Measles–Q&A about Disease & Vaccine. Accessed 20-Aug-16.
  • CDC. 2016a. Measles (Rubeola): For Healthcare Professionals. Accessed 21-Aug-16.
  • CDC. 2016b. Autism Spectrum Disorder (ASD): Data & Statistics. Accessed 21-Aug-16.
  • Christenson and Bottiger. 1994. Long-term follow-up study of rubella antibodies in naturally immune and vaccinated young adults. Vaccine 12:41–45.
  • Clemens et al. 1988. Measles vaccination and childhood mortality in rural Bangladesh. American Journal of Epidemiology 128:1330–1339.
  • Elsabbagh et al. 2012. Global prevalence of autism and other pervasive developmental disorders. Autism Research 5:160–179.
  • Ewing. 2009. What is regressive autism and why does it occur? Is it the consequence of multi-systemic dysfunction affecting the elimination of heavy metals and the ability to regulate neural temperature? North American Journal of Medical Sciences 1:28–47.
  • Guarner et al. 2006. Mechanisms of disease: the hygiene hypothesis revisited. Nature Clinical Practice Gastroenterology and Hepatology 3:275–284.
  • Hansen et al. 2015. Explaining the increase in the prevalence of autism spectrum disorders: the proportion attributable to changes in reporting practices. JAMA Pediatrics 169:56–62.
  • Jokinen et al. 2007. Cellular immunity to mumps virus in young adults 21 years after measles-mumps-rubella vaccination. Journal of Infectious Diseases 196:861–867.
  • King et al. 1991. Clinical efficacy of measles vaccine during the 1990 measles epidemic. Paediatric Infectious Disease Journal 10:883–888.
  • Knol et al. 2013. Large ongoing measles outbreak in a religious community in the Netherlands since May 2013. Eurosurveillance 18.
  • Kochanek. 2011. Deaths: Preliminary Data for 2009. National Vital Statistics Reports 56.
  • Mina et al. 2015. Long-term measles-induced immunomodulation increases overall childhood infectious disease mortality. Science 348:694–699.
  • Murphy et al. 2010. Deaths: Final data for 2010. National Vital Statistics Reports 61.
  • Orenstein et al. 2004. The clinical significance of measles: a review. Journal of Infectious Diseases 189:S4–S16.
  • Rutter. 2005. Incidence of autism spectrum disorders: changes over time and their meaning. Acta Paediatrica 94:2–15.
  • Schmitz et al. 2011. Vaccination status and health in children and adolescents: findings of the German Health Interview and Examination Survey for Children and Adolescents (KiGGS). Deutsches Ärzteblatt International 108:99–104.
  • Sudfeld et al. 2010. Effectiveness of measles vaccination and vitamin A treatment. International Journal of Epidemiology 39:i48–i55.
  • Taylor et al. 2014. Vaccines are not associated with autism: an evidence-based meta-analysis of case-control and cohort studies. Vaccine 32:3623–3629.
  • Whitney et al. 2014. Benefits from immunization during the vaccines for children program era — United States, 1994–2013. Morbidity and Mortality Weekly Report 63:352–355.
  • WHO. 2016. Measles. Accessed 20-Aug-16.

4 meaningless words: toxin, natural, organic, and GMO

News articles and blog posts are often full of buzzwords that are heavy on emotional impact but light on substance, and for scientific topics such as nutrition, health, medicine, and agriculture, four of the most common buzzwords are “toxins,” “natural,” “organic,” and “GMO.” These words are used prolifically and are typically stated with clear implications (“toxin” and “GMO” = bad; “natural” and “organic” = good). The problem is that these words are poorly defined and constantly misused. Their definitions are so arbitrary and amorphous that they are functionally meaningless. In other words, they do not add anything useful to a discussion without first attaching a list of qualifiers to them. They are often used in a way that shifts them into the category of what are referred to as “weasel words,” meaning that their use gives the impression that the author said something concrete and meaningful, when in fact the statement was a null sentence that lacked any real substance. The point of this post is, therefore, to dissect the usage of these words, and explain why they are so lacking in meaning as well as why their usage is often misleading and deceptive.

To be clear, when I say that a word is meaningless, I mean that the word is functionally meaningless, rather than semantically meaningless (i.e., the word may have a clear definition, but that definition is so arbitrary that using the word does not add any meaningful content to a statement, and it is not useful for making decisions or assessing the risks and benefits of the thing being described).

Note: Yes, I know that “GMO” is technically an initialism, not a word. Deal with it.


“Toxins”
Our society seems to be obsessed with “toxins.” The internet is full of purveyors of woo selling everything from expensive fruit cleanses to “earthing” mats, all with the intended purpose of ridding your body of vaguely defined “toxins.” The problem is simply that there is no such thing as a “toxin.” All matter is made of chemicals (excluding subatomic particles for a minute), and essentially all chemicals are safe at a low enough dose and toxic at a high enough dose (i.e., the dose makes the poison). So there are toxic doses, not toxic chemicals. Even water becomes lethally toxic at a high enough dose (Garigan and Ristedt 1999). So this idea that something is going to rid your body of “toxins” doesn’t make any sense, because the chemicals themselves are not “toxins,” and they only become toxic at a high enough dose. Take formaldehyde, for example. I often hear people talk about it as a “toxin,” but the reality is that it is an inevitable byproduct of normal biological processes. So not only is it in many fruits and vegetables, but it is actually produced by your body! The chemical itself is not dangerous, but it can become dangerous at a high enough dose.

To be clear, I’m not saying that we shouldn’t pay attention to what we put into our bodies. Of course we should, but we need to evaluate chemicals based on the dose at which they become toxic, not simply based on whether or not they are present. Further, we need to always carefully define exactly which chemicals we are talking about, rather than using the vague and amorphous term “toxin,” because any chemical will become a “toxin” at a high enough dose.

 Side note: Your body is wonderfully equipped with a liver and kidneys which do a fantastic job of “detoxing” you and keeping chemicals at a safe level. “Detox” supplements/treatments are total nonsense.

Addendum (16-Aug-16): It should be noted that the word “toxin” does have an actual biological meaning in the context of chemicals that are released by microscopic organisms. These are often toxic to individual cells at incredibly low doses because a cell itself is so small. So when I talked about “toxins” in the post, I was referring to the notion that certain chemicals are automatically dangerous for you as an organism, rather than on a cell-by-cell basis.

 

“Natural”
The definition of “natural” seems obvious: “found in nature.” However, that’s actually a lot more ambiguous and arbitrary than it sounds. First, let’s deal with why this definition is arbitrary, and the best way to explain that is by talking about chemical compounds. Everything around you is made of chemicals (including you), and in chemistry, a compound is simply the combination of two or more different elements. So most of the things that are around you are in fact chemical compounds (there are several thousand compounds that make up your body, for example). Now, many people like to distinguish between “natural” and “synthetic” chemicals, where “natural” chemicals can be found in nature, while “synthetic” ones were produced in a lab, but that distinction is arbitrary. A chemical is a chemical, and on a molecular level, there is nothing that separates natural and synthetic chemicals. All chemical compounds are made by stringing different elements together, and there is no inherent difference between nature stringing elements together and scientists stringing elements together. We can make acids in the lab and you can find acids in nature; we can make chemicals that are poisonous at anything but a low dose in the lab, and you can find chemicals that are poisonous at anything but a low dose in nature; etc. The fact that something was synthesized in a lab doesn’t make it any more dangerous or any safer than a chemical that was found in nature. Indeed, saying that a given chemical is “natural” tells you absolutely nothing useful about the toxicity of that chemical, which is why I am arguing that the term is meaningless. Saying “chemical X is natural but chemical Y is synthetic” doesn’t tell me anything about the properties of those two chemicals. After all, nature is full of chemicals that are toxic at anything but a very low dose (cyanide and arsenic come to mind).

Consider, for example, acetylsalicylic acid and salicylic acid. One of those is natural and the other is synthetic. Can you tell which? No, and neither could a chemist. If you showed those two molecules to a chemist who had no prior information about them, there is no way she could tell you which was natural and which was synthetic, because that distinction is arbitrary. In all likelihood though, she would know which is which because these are two very well-known compounds. Salicylic acid is the compound in willow bark that gives it medicinal value, and acetylsalicylic acid is the synthetic version of it that we all know as aspirin. Further, we switched to the synthetic version largely because straight salicylic acid has a lot of unpleasant side effects like gastrointestinal problems (Hedner and Everts 1997). To be clear, aspirin has side effects as well (as do all chemicals), but they tend to be less severe, and the point is, once again, that simply being natural doesn’t automatically make something better. Indeed, asserting that something is better because it is natural is a logical fallacy known as an appeal to nature.


Almost none of our crops are truly natural in that most of them aren’t actually found in nature in their present form.

Moving beyond the arbitrariness of what is natural, the typical definition of “found in nature” doesn’t apply to some things that most people would intuitively think of as natural. Take apples, for example. They’re natural, right? Not so much. The fruit that we know as an apple does not grow in nature. As I will talk about more later, essentially all of our crops have been modified by thousands of years of careful breeding, so, technically speaking, they aren’t natural.

The situation is even more problematic when we talk about actions rather than objects. People often say things like, “we should do X, because X is natural,” but what on earth does that mean? Generally, I hear people say that it means what our ancestors did, but that raises the obvious question of how far back do we have to go for something to be natural? Are we talking about 200 years ago? 1,000 years ago? 10,000 years ago? etc. This definition is horribly ambiguous.

To get around this problem, some people say that natural actions are those that are found in the animal kingdom, but that is also an extremely problematic definition for a number of reasons. First, how widespread does the behavior need to be in the animal kingdom? Is it enough to find one animal that does it? Further, there are lots of human actions that most people think of as natural, even though other animals don’t do them. For example, we cook our food. Does that make cooking unnatural? Finally, this definition is fundamentally flawed because we are just highly evolved animals, so doesn’t that make everything that we do natural? Actually think about this for a second. I think that we can all agree that structures like bird nests and beaver dams are natural, but those are not structures that just form spontaneously in nature. Rather, they are carefully and deliberately constructed by an animal that uses materials to make them. Nevertheless, if I make a wooden table, most people would agree that the table is unnatural, but how on earth is that any different from a beaver dam? The beaver is an animal that took materials found in nature and combined them to make a new structure, and I am an animal that took materials found in nature and combined them to make a new structure. What’s the difference? Further, we can logically extend this to all human structures. When you get right down to it, all of the parts of a skyscraper came from nature, and there is no logical reason to say that a beaver combining sticks and mud to make a dam is natural but me combining two metals to make steel is unnatural. Again, the definition of natural is completely arbitrary and functionally meaningless.

 

“GMO”
GMO stands for “Genetically Modified Organism,” and you may think that this has a very clear and precise definition…but it really doesn’t. Before reading the rest of this, try to come up with a definition of it yourself, then see how that definition holds up.

The most general line of thought would be that a GMO is exactly what it says: “an organism whose genes have been altered,” but that definition is much too broad. Every living organism has a genetic code that has been altered from its ancestral state by millions of years of evolution. If you really think about it, we are all just heavily modified cyanobacteria (cyanobacteria [or some similar organisms] were most likely the first living cells).

Now you may think that I am stretching things a bit here, and perhaps I am, but “nature” does all sorts of crazy things like hybridizing species (as plants do frequently) and even stealing the DNA from one organism and inserting it into the genetic code of another. For example, at some point in the evolution of the sweet potato, it managed to modify its genetic code by inserting bacterial genes into its DNA. In other words, it is a transgenic species whose genetic code is a combination of the genes of several species. Shouldn’t that make it a GMO? Further, this is not limited to sweet potatoes, because bacteria themselves are well known for their ability to incorporate the DNA of other species into their own genomes. So nature is constantly doing the types of things that most people would associate with GMOs, and foods like sweet potatoes really are transgenic species.


The corn that we eat is quite different from wild corn (teosinte). Our crops have been genetically modified via thousands of years of careful breeding, and the fruits, vegetables, and animals that we eat today contain novel genetic codes that are not found in nature. Image via mentalfloss.com.

Nevertheless, you can try to qualify the term GMO by saying that GMOs are “organisms that have been genetically modified by humans,” but that definition is also fraught with problems. Beyond the fact that it is totally arbitrary (see the “natural” section), it also would encompass all modern agriculture. Those delicious fruits that you know as watermelons don’t exist in nature (at least not in their current form). Similarly, natural bananas are small and full of giant seeds, and wild corn does not produce those nice juicy ears that you slather in butter and salt. Both our livestock and crops have been genetically modified through years of selective breeding, and they contain genetic codes that aren’t found in nature.

At this point, people often try to add something about moving genes between species, but that just creates more problems. First, I’ve already shown that nature does that as well, so we have to keep that arbitrary “man did it” qualifier. Second, that would also include lots of “non-GMO” crops such as pluots, plumcots, tangelos, etc., all of which are hybrids that used selective breeding to combine the DNA of two different species. Third, this definition does not include all of the crops that are typically described as “GMOs.” Indeed, GMOs that take the DNA of one organism and put it into another (a.k.a. transgenic GMOs) are actually only one type of GMO. There are others that simply modify the existing genome (i.e., they activate, deactivate, or alter genes that are already present).

Given the problems with that definition, you might try defining a GMO as an organism that is “modified by humans via a method other than selective breeding,” but that definition includes mutation breeding, which is typically not considered to be a GMO. This method uses chemicals or UV radiation to randomly mutate organisms’ DNA in order to produce new and useful traits (i.e., it makes genetic modifications via inducing mutations). However, this method typically does not receive the label “GMO,” and in some cases, even farms that label themselves as “organic” can use crops that were produced by this method.

This leaves us with the outrageous definition that a GMO is “an organism whose DNA was modified by humans via a method other than selective breeding or mutation breeding,” but at that point we have tacked so many arbitrary qualifiers onto the term that the term itself is essentially meaningless. To put this another way, our “non-GMO” agricultural practices constantly make genetic modifications, including swapping genes between species and randomly mutating DNA, and the techniques that we label as “genetic engineering” are only different in that they are faster and more precise than the other methods. Therefore, there is no good reason to talk about GMOs as if they are fundamentally different from the other methods, because the definition of a GMO is completely arbitrary.

Note: You may be tempted to say that a GMO is simply, “an organism that was produced by genetic engineering” but that doesn’t really solve the problem, because you then need a non-arbitrary definition for GE which excludes mutation breeding, selective breeding, and what happens in nature, without excluding any crops that are typically thought of as GMOs. So you still have the exact same problem.


“Organic”
Finally, let’s talk about the term “organic.” This is perhaps the greatest marketing term ever coined, and the problem with it is not that a definition doesn’t exist, but rather that the definition is arbitrary and most people don’t use it correctly (to be clear, I am talking specifically about organic farming practices, not organic chemistry or the general use of “organic” to mean something that formed of its own accord without intervention).

Here is a question for you: true or false, organic farming doesn’t use pesticides? A very large portion of people think that the answer is “true,” but it is actually “false.” Organic farmers absolutely use pesticides, and many of those pesticides are toxic at doses comparable to the pesticides used in traditional farming. Indeed, organic pesticides can harm wild species, pollute waterways, and do all of the other harmful things that traditional pesticides can do (Bahlai et al. 2010). In fact, one of the most common organic pesticides is “Bacillus thuringiensis (Bt) toxin,” which is the exact same chemical that GMO corn produces (i.e., Bt maize). So on the one hand, organic farmers use Bt liberally, and on the other hand, they demonize corn that produces Bt. Are you starting to see why this is arbitrary (dare I say hypocritical)?

So if organic crops use potentially dangerous pesticides just as much as traditional crops, then what exactly does it take for a crop to be considered organic? Generally speaking, they have to be grown without synthetic pesticides (“natural” ones are fine) and without the use of GMOs (some countries place additional requirements, like no petroleum-based fertilizers). If you have read the rest of this post, then you should know where I am about to go with this. I have already shown that the distinction between “natural” and “synthetic” chemicals is arbitrary and all chemicals are safe at a low dose and toxic at a high enough dose, and I have shown that what we consider to be a GMO is really arbitrary. So, since the definition of organic relies on those other terms, the “organic” label is itself arbitrary. To put this another way, organic crops are not automatically healthier or more nutritious than traditional crops. Indeed, reviews of the literature have been unable to find consistent and compelling evidence that organic food is healthier (Smith-Spangler et al. 2012; Galgano et al. 2015).

Now, at this point, you may be thinking that organic crops aren’t healthier, but surely they are better for the environment. However, that is also a misconception. Some practices that are typically associated with organic farming are better for the environment, but those practices are sometimes included in non-organic farming as well, and organic farming has serious drawbacks, such as the fact that it often uses far more land and resources than traditional farming (Tuomisto et al. 2012). As a result, you can’t make a blanket statement like “organic farming is better for the environment,” because in many cases it isn’t.

The point is that simply saying that something is “organic” doesn’t actually tell you anything useful about how healthy it is or whether or not it was grown in a sustainable way. You need to actually look at its nutrient content, what chemicals were used to grow it, how much chemical residue is present and at what dose those chemicals become toxic, how much land was used to grow the crop, etc. In other words, if all that you say is, “crop A is organic and crop B is non-organic,” then you have not provided any useful information for deciding which is better. To do that, you have to say, “crop A is organic and contains X nutrients, was grown on Y amount of land, was sprayed with Z chemicals, etc.; whereas crop B is non-organic and contains X’ nutrients, was grown on Y’ amount of land, was sprayed with Z’ chemicals, etc.” You can, however, say the exact same thing without including the words “organic” and “non-organic,” and you won’t lose any useful information. In other words, it’s the X, Y, and Z bits that matter. The label “organic” is completely irrelevant. Therefore, simply saying that something is “organic” doesn’t tell you anything useful about how healthy or sustainable it is.

Conclusion
In short, all four of these terms are either ambiguous or arbitrary, and they really don’t give you much in the way of useful information. Everything becomes a toxin at a high enough dose, natural chemicals can be just as dangerous as synthetic chemicals, virtually all of our food has been genetically modified, and organic crops are not inherently healthier or better for the environment than traditional foods. None of these words actually indicate the health or quality of the products that they describe, and you always need to look at additional information like the dose of a chemical contained in the product, the dose at which that chemical becomes toxic, etc. Simply saying that something is “natural” or “organic” or “a GMO” or “contains a toxin” isn’t useful because that doesn’t describe anything meaningful about that product.

Citations

Bahlai et al. 2010. Choosing organic pesticides over synthetic pesticides may not effectively mitigate environmental risk in soybeans. PLoS ONE 5:e11250.

Doucleff. 2015. Natural GMO? Sweet potato genetically modified 8,000 years ago. NPR: Food and Culture.

Galgano et al. 2015. Conventional and organic foods: A comparison focused on animal products. Cogent Food and Agriculture 2:1142818.

Garigan and Ristedt. 1999. Death from hyponatremia as a result of acute water intoxication in an Army basic trainee. Military Medicine 164:234–238.

Hedner and Everts. 1997. The early clinical history of salicylates in rheumatology and pain. Clinical Rheumatology 17:17–25.

Ruishalme. 2015. Natural assumptions. Thoughtscapism.com. Accessed 15-Aug-16.

Smith-Spangler et al. 2012. Are organic foods safer or healthier than conventional alternatives? A systematic review. Annals of Internal Medicine 157:348–366.

Tuomisto et al. 2012. Does organic farming reduce environmental impacts? A meta-analysis. Journal of Environmental Management 112:309–320.

Wilcox. 2011. Mythbusting 101: Organic farming > conventional agriculture. Scientific American.


Evolutionary mechanisms part 6: Genetic drift

Many people are under the false impression that evolution is always beneficial, but it can actually be quite harmful. This is especially true for evolution via genetic drift. Like natural selection, genetic drift removes variation from a population, but unlike selection, it is random and it can remove beneficial traits. Indeed, we often become so focused on selection that we assume that all of the features that we see in organisms were selected because of some beneficial role, but that is not true. Some of those features arose via the random process of genetic drift, and at times this process can even swamp natural selection and cause harmful traits to rise to prominence. In other words, evolution via genetic drift can actually be very detrimental to populations. So in this post, I want to talk about what this mechanism is, how it works, and how it interacts with the other evolutionary mechanisms.

What is genetic drift?
In its simplest terms, genetic drift is just a random change in the allele frequencies of a population over time. This is in stark contrast to natural selection, in which the change in allele frequencies is not random (it is a common misconception that natural selection is random). Remember, evolution is simply any change in a population’s allele frequencies over time, so the key defining feature of genetic drift is that this change is random. The obvious question, then, is what causes allele frequencies to change randomly? There are several answers to that question, but the classic answer (and indeed the one that is constantly at play) is the random nature of independent assortment.

If you think back to high school biology, you have two copies of your chromosomes, one of which you received from your mom, and one of which you received from your dad. Further, these chromosomes are arranged into pairs (i.e., you got one copy of chromosome #1 from mom, and one copy of chromosome #1 from dad). Additionally, you have two copies of each gene (each copy is known as an allele), and each copy is on a different chromosome in the pair (one from mom and one from dad). So, for example, if you have the blood type AB and your mom is AA and your dad is BB, then that means that you received a chromosome with the A allele from your mom, while the other chromosome in that pair came from your dad and contained the B allele. Now, every time that you produce an egg or a sperm cell, you will only pass on half of your chromosomes (one chromosome from each pair). So each individual sperm or egg will receive the A allele or the B allele, but not both. Importantly, which allele you pass to a given egg/sperm is completely random and is not affected by which other chromosomes you pass on. This is what we call independent assortment, and it is a key source of genetic drift. (Technical note: if you want to be more precise, independent assortment refers to the random arrangement of homologous chromosomes along the metaphase plate during meiosis I, but the key point is simply that which member of a chromosome pair gets passed on is random.)


This is a simple illustration of the coin simulation. Each circle is an allele, and each pair is an individual. The lines show which allele gets passed. So the individual on the left in generation 1 happened to pass an H to one offspring and a T to the other, whereas the individual on the right randomly passed an H to each. As a result, the individual on the left in the second generation had two Hs and could not pass a T. The individual on the right could have passed a T, but two Hs were randomly selected instead, so by the third generation, H had become fixed, and the T allele was lost.

Because which chromosomes you pass on is completely random, the allele frequencies can shift over time if, just by chance, one chromosome happens to get passed on more than the other. I’m going to use a simulator to illustrate this in a minute, but for now, let me illustrate with a coin (if you have one handy, please grab a piece of paper and try this yourself). First, assume that you have two individuals, each of which is a heterozygote, meaning that they have one copy of each allele (which in this case will be heads and tails). Now, to make the first offspring, flip the coin twice, and whatever it lands on, that will be your first offspring. The first coin flip represents the allele from parent 1 (i.e., there is a 50% chance of passing H and a 50% chance of passing T, just like in independent assortment), and the second flip represents the allele from parent 2. Now, do it again to make a second offspring. There are only six possible outcomes: both TT, both HH, both HT, one HH and one TT, one HT and one TT, or one HT and one HH (TH and HT are the same). Write down your first two offspring, then “mate” them. If both of your offspring are HT, then the procedure is identical to what you did before; however, if one of them is a homozygote (i.e., only has one type of allele), then it can only pass on that allele (i.e., a TT can only pass T and an HH can only pass H; thus, if your offspring are TT and HH, then both grandchildren will be TH). Make two more children in this manner (e.g., if you have an HH and an HT, then each of your two new individuals will receive an H from the HH parent and either an H or a T from the HT parent; flip the coin to decide which). Keep flipping your coin and making new generations like this, and you will pretty quickly get to a point where one of your alleles disappears and all that you have is TT or HH. At that point, we say that the allele is “fixed,” meaning that one of the alleles has been lost, and every individual in the population is now homozygous for the other allele (i.e., has two copies of it). The first time that I tried this, it only took five generations for one allele to become fixed, but if you do it numerous times, you’ll notice that the number of generations until fixation varies, and if you do it enough times, 50% of the time heads will become fixed and 50% of the time tails will become fixed.
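If you would rather let a computer do the flipping, here is a minimal Python sketch of that exercise (my own illustration, not the simulator mentioned above): two heterozygous parents produce two offspring per generation, and the process repeats until one allele is fixed.

```python
import random

def next_generation(parents):
    """Each offspring receives one randomly chosen allele from each parent."""
    return [(random.choice(parents[0]), random.choice(parents[1]))
            for _ in range(2)]

# Two heterozygous founders, exactly as in the coin exercise
population = [("H", "T"), ("H", "T")]
generation = 0

# Keep breeding until only one allele remains in the population
while len({allele for individual in population for allele in individual}) > 1:
    population = next_generation(population)
    generation += 1

print(f"Fixed after {generation} generations: {population}")
```

Run it a few times and you will see the same behavior as the coin: fixation usually happens within a handful of generations, and H and T each win about half of the time.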


This shows a population of 100 individuals evolving over time via genetic drift

What you have just done is simulate genetic drift. That’s really all that it is. Random chance produces slight variations in allele frequencies until one allele eventually becomes fixed. The image on the left illustrates the situation more clearly. It shows genetic drift in a population of 100 individuals. As you can see, the allele frequency randomly oscillates up and down until eventually the dominant allele becomes fixed.

Effects of population size
If you think about the math behind genetic drift, you would intuitively expect genetic drift to be more severe when population sizes are small, and indeed that is exactly what happens in nature. In fact, at very large population sizes, we expect genetic drift to have a relatively small effect. To illustrate this, I used a simulator to simulate four different scenarios (illustrated below). In each situation, I simulated 10 populations that consisted of 2, 10, 100, or 1000 individuals, and each population started with even allele frequencies for the gene being simulated (i.e., half of the alleles were dominant and half were recessive). As you can see, when there were only 2 or 10 individuals in the populations, alleles became fixed very rapidly (the simulations with 2 individuals are the same thing as what you did with a coin). When the population size jumps up to 100, however, things are more stable, but by the end of the 50 generations shown, you can see that some populations are trending towards having fixed alleles, and indeed after a few hundred generations all of the populations became fixed (not shown). Finally, the populations with 1000 individuals retained fairly stable allele frequencies, but even in those cases, alleles will eventually become fixed unless they are acted upon by another evolutionary force, and, in fact, in natural populations, those other mechanisms (particularly gene flow) do often interact with genetic drift and prevent the fixation of alleles in large populations.

Note: the next post in this series will be entirely devoted to gene flow, so I will talk about how it interacts with genetic drift in detail there.
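If you want to experiment with population size yourself, pure drift is commonly modeled with a Wright–Fisher scheme, in which each generation’s 2N allele copies are drawn at random from the previous generation’s pool. The sketch below is my own toy version (not the simulator used for the figures), with the same population sizes as the scenarios above.

```python
import random

def wright_fisher(n_individuals, p0=0.5, generations=50):
    """Track one allele's frequency under pure drift.

    Each generation, all 2N allele copies are drawn at random
    (with replacement) from the previous generation's pool.
    """
    two_n = 2 * n_individuals
    count = int(p0 * two_n)  # copies of the focal allele
    for _ in range(generations):
        p = count / two_n
        count = sum(random.random() < p for _ in range(two_n))
        if count in (0, two_n):  # fixed or lost; it cannot change again
            break
    return count / two_n

# Ten replicate populations at each size, mirroring the figure
for n in (2, 10, 100, 1000):
    outcomes = [wright_fisher(n) for _ in range(10)]
    fixed = sum(f in (0.0, 1.0) for f in outcomes)
    print(f"N = {n}: {fixed}/10 populations fixed within 50 generations")
```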


Each line represents a different population, and as you can see, when those populations are small (red and blue lines) the alleles become fixed quite rapidly.

Interactions with selection
When talking about genetic drift, we are often talking about neutral alleles (i.e., alleles that are neither beneficial nor harmful to individuals); however, drift can also occur for alleles that are not selectively neutral. Imagine a situation where half of the alleles in a population for a given trait are dominant (A) and half are recessive (a); however, the environment changes, and as a result, individuals with a dominant phenotype (i.e., they have at least one dominant allele, so they are either AA or Aa) survive to a reproductive age 100% of the time, whereas individuals with a recessive phenotype (aa) only survive to a reproductive age 90% of the time. In other words, there is selection against the recessive allele, because it reduces an individual’s ability to reproduce when two copies of it are present. Now, you should intuitively expect that natural selection will act on this situation and remove the recessive allele from the population, and when the population is large enough, you would be correct. However, when the population is small, then, depending on the strength of selection, genetic drift can actually overpower selection.


Each red dashed line shows a population with 100 individuals, and each solid light blue line shows a population with 10 individuals. Even though populations that are 0% dominant have lower survival (in these simulations), genetic drift still caused several of the small populations to lose the dominant allele completely.

To illustrate this, consider the figure on the right. I once again used the simulator to simulate 10 populations with 10 individuals and 10 populations with 100 individuals, but this time I set a selection differential so that all of the AA and Aa individuals would survive to a reproductive age, but individuals who had two recessive alleles (aa) only had a 90% chance of surviving. As you can see, in all 10 populations of 100 individuals, selection removed the harmful allele and fixed the beneficial one. In four of the populations with only 10 individuals, however, the populations were so small that genetic drift overpowered selection and actually caused the recessive allele to become fixed! That is a really bad situation because, barring any gene flow or mutations, those populations are now stuck with the harmful allele, and the overall survival of those populations is 10% lower than the survival of populations that managed to rid themselves of the harmful allele.
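For anyone who wants to reproduce this effect, here is a minimal sketch of that scenario (again my own toy model, not the simulator used for the figure): AA and Aa individuals always survive to reproduce, while aa individuals survive 90% of the time.

```python
import random

def simulate(n_individuals, generations=1000, aa_survival=0.9):
    """Drift plus viability selection against the aa genotype.

    Returns the allele that became fixed ('A' or 'a'), or None if
    neither allele fixed within the time limit.
    """
    # Start with even allele frequencies: half A, half a
    pool = ["A"] * n_individuals + ["a"] * n_individuals
    for _ in range(generations):
        adults = []
        while len(adults) < n_individuals:
            genotype = (random.choice(pool), random.choice(pool))
            # aa individuals only survive to reproduce 90% of the time
            if genotype == ("a", "a") and random.random() > aa_survival:
                continue
            adults.append(genotype)
        pool = [allele for ind in adults for allele in ind]
        if len(set(pool)) == 1:  # one allele has become fixed
            return pool[0]
    return None

print("N = 10: ", [simulate(10) for _ in range(10)])   # drift sometimes fixes 'a'
print("N = 100:", [simulate(100) for _ in range(10)])  # selection usually fixes 'A'
```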

As I’m sure you can imagine, this has extremely important implications for wildlife conservation efforts, and it is one of the key reasons that conservationists are so concerned with maintaining large numbers of individuals. When populations are small, genetic drift can cause harmful alleles to rise to prominence or even become fixed, and that is a very bad thing for the survival of those populations.

Genetic drift removes variation
It should now be clear that genetic drift removes variation from populations. Indeed, in many of the simulations that I have illustrated, you can see that one of the alleles became fixed while the other was lost from the population. This is important because it means that, just like selection, genetic drift is constantly causing populations to become less diverse. As a result, populations are heavily reliant on gene flow from neighboring populations (which can restock their gene pool with alleles that they had lost) as well as mutations (which are the only mechanism that is capable of making new variation).

Even when the alleles that are lost are neutral, this is often a serious problem for populations in the long run. Remember, selection simply adapts organisms to their current environment, so although an allele may be neutral at the moment, it may become very important if the environment changes. Indeed, a high level of genetic variation is one of the key factors for determining whether or not a population will survive changes in the environment, the introduction of a new disease or predator, etc.

Bottlenecks and founder events
Now that you understand the basics of genetic drift, I want to introduce you to two final concepts. The first of these is a genetic bottleneck. Bottlenecks are often considered to be a type of genetic drift, but they act a bit differently from the type of genetic drift that we have been talking about so far. In bottlenecks, a large number of individuals is rapidly lost, and as a result, the genetic variation is reduced to a small subset of what it was before. Imagine, for example, that there is a large population of frogs living at the base of a volcano, and 80% of the frogs contain an allele for green pigment while 20% contain an allele for brown pigment. Then, the volcano erupts and kills off 90% of the frogs. Further, just by chance, all of the frogs with the allele for brown pigment were killed during the eruption. This is, therefore, clearly a case of instantaneous evolution by genetic drift, because evolution is a change in allele frequencies, and the allele frequencies changed from 20% and 80% to 0% and 100%. Additionally, this would likely represent a genetic bottleneck, because it is likely that many other alleles were lost as well. Indeed, one of the characteristics of a bottleneck is the loss of many rare alleles. Finally, the effects of a bottleneck are largely determined by how many generations it lasts (i.e., how long the population remains small), because, as you recall, small populations have greater genetic drift. So even alleles that survived the volcanic eruption may quickly be lost due to genetic drift if the population does not grow rapidly.

At this point you may be wondering when it is appropriate to talk about a bottleneck as a type of genetic drift, and that is honestly something of a grey area, with some people/books preferring to entirely separate the two, while others lump them together. I personally think that the best way to think about this is to remember that genetic drift is a random change in allele frequencies. So, if the bottleneck killed individuals randomly (i.e., no individuals had alleles that made them more likely to survive the eruption), then it is fine to talk about it as a type of genetic drift. However, if something kills many individuals but does so by selecting its victims, then it should really be thought of as a natural selection event. For example, if a disease outbreak kills off 90% of individuals, and only the 10% of individuals that had alleles that made them resistant to the disease survived, then that likely would be a genetic bottleneck, but it would not be genetic drift, because the survivors were selected rather than being chosen randomly. Genetic drift could, however, come into play in the following generations if the populations do not recover rapidly enough (i.e., they remain small).

On a side note, the recovery of populations from disease outbreaks and whether or not the outbreaks caused genetic bottlenecks is actually a key focus of my current research.

A founder effect is really just a special type of bottleneck, and it occurs when a new population is formed from a subset of the original population (i.e., the new population only contains a small portion of the genetic variation found in the original population). For example, let’s go back to our population of brown and green frogs, but this time, instead of a volcano, imagine that they live on the edge of a lake, and one day a storm blows several of them out to a previously uninhabited island. Suppose that all 10 of the individuals that formed the new population on the island contained only the allele for brown pigment. We would describe this as a founder event, because the new population is limited to the genetic material contained in the individuals that founded it (e.g., the green allele is not present in this new population), but it often also represents a bottleneck, because these new populations usually only contain a small portion of the variation that was in the source population.

Conclusion
In summary, genetic drift is simply an evolutionary mechanism that causes random changes in allele frequencies over time. It is most powerful when population sizes are small, and in some situations it can actually cause harmful alleles to become fixed in a population. As a result, it is a major concern for conservation efforts, and it is one of the reasons that conservation biologists place a high priority on maintaining large populations.



Are scientists arrogant, close-minded, and dismissive?

I spend a lot of time debating people who reject science, and I have observed a common tendency for people to try to dismiss scientific results by attacking scientists with accusations that they are “arrogant,” “close-minded,” and “dismissive,” and that they don’t “take people’s concerns/anecdotes seriously or engage the public.” It’s important to realize that all of these accusations are simply ad hominem fallacies, and even if they were true, that wouldn’t make the scientists’ results any less accurate. In most cases, however, they aren’t even true, and they are actually reflective of the people making the accusations, rather than the scientists themselves. So I want to consider each of these claims and see whether or not they actually apply to most scientists. I am going to try to deal with each accusation more or less separately, but there will inevitably be some overlap and, indeed, each section builds on and connects to the other sections. I am especially going to focus on the claim that scientists are arrogant and pretentious, because the other claims are really built on that one.

Note: To be clear, I am not suggesting that no scientists are arrogant, close-minded, etc. Obviously there will be a few bad apples in any group. However, the accusation is that scientists in general display these qualities, and that is what I am taking issue with.

Are scientists arrogant?
Let’s start with the accusation that scientists are arrogant, elitist, pretentious, etc. The Merriam-Webster dictionary defines “arrogant” as:

“having or showing the insulting attitude of people who believe that they are better, smarter, or more important than other people.”

There are several important things to note about this definition. First, simply insisting that you are right and someone else is wrong is not necessarily arrogant. It only becomes arrogant when it is done in a haughty “better than thou” manner or when the assertion is based on an unrealistic assessment of one’s own skills, knowledge, and ability. That last part is really important. It is not arrogant for someone who is highly trained and has lots of experience to think that he/she knows more about the topic than someone with no training or experience on that topic.


In most areas of life, people have no problems deferring to experts, but for some reason, when it comes to science, people view expertise as a bad thing.

Let me give several examples to demonstrate what I mean by that. First, imagine that someone who has never taken an engineering course and has no relevant experience reads several blogs and comes to the conclusion that a particular bridge is unsafe. However, multiple professional engineers (each of whom went through several years of training to earn advanced degrees and have subsequently spent years working as an engineer) carefully examine the bridge, examine the arguments made by those who are concerned, and conclude that the bridge is safe. Are those engineers being arrogant? Is it presumptuous of them to “assume” that their advanced degrees and years of experience have made them more qualified than a bunch of bloggers to assess the safety of the bridge? Obviously it isn’t. We expect that people with that type of training and experience will know more than the average person. That’s why we have the word “professional.” It is insane to think that reading a few blogs is worth more than a degree from MIT.

Similarly, imagine that someone who has never even sat in the cockpit of a plane boards a commercial airliner and proudly proclaims that he is more qualified than the pilot because he has “done his homework” on the internet and logged lots of hours on X-box air combat games. This man then proceeds to lecture the pilot on everything that he/she is doing “wrong.” Would it be arrogant of the pilot to ignore him? Obviously not. No one on that plane would be OK with that man taking over for the pilot (except the man himself, of course), and everyone would agree that the man in question is being arrogant and foolish and needs to take his seat and shut up.

I can, of course, give countless examples like this that everyone would agree with. No one would argue that a neurosurgeon is arrogant for not taking surgical advice from unqualified family members who got their surgical “licenses” from YouTube. No one would accuse a professional mechanic of arrogance for ignoring a customer who doesn’t know the difference between a wrench and a screwdriver. No one would say that a lawyer is arrogant for thinking that they know more about the legal system than someone who has never set foot in a courtroom or opened a law book. Yet for some reason, when it comes to science and some fields of medicine, people feel entitled to think that they are experts. They actually seem to think that Google is equivalent to an advanced degree.

When you think about this, it is ludicrous. Becoming a professional scientist takes, on average, four years of undergraduate studies (note: science majors are usually rated among the most difficult/time consuming), 6–10 years of intense graduate training (most grad students work/study 60+ hours a week and rarely take holidays), and several years of doing a post-doc. Further, after all of that training, you spend your life actually doing science, which means that you are constantly gaining experience and new knowledge. The idea that reading a bunch of blogs and non-academic books will put you on par with that type of training and experience is the epitome of arrogance and hubris. It is just about the most pretentious thing that I can think of. Of course scientists know more about science than the average person on the street, just as plumbers know more about plumbing than the average person, and mechanics know more about cars than the average person. We intuitively expect that anyone who goes through that type of training will be extremely knowledgeable.

What I have been describing here is, of course, a well-established phenomenon known as the Dunning-Kruger effect. In short, people who are unskilled/unqualified tend to grossly over-estimate their own abilities/knowledge, whereas people who have the proper training/experience tend to have a more accurate view of their abilities or even under-estimate them. In other words, statistically speaking, it is the untrained people who tend to be arrogant about their abilities, not the highly trained scientists. To be clear, scientists certainly can still have unrealistic views of their own knowledge, and the first rule of the Dunning-Kruger effect is that you don’t know if you are succumbing to it. So I am not suggesting that scientists are immune to this problem, but it is true that, on average, people without scientific training have a more unrealistic view of their scientific knowledge/abilities than actual scientists do.

To be clear, it is entirely possible to be highly trained and still arrogant. If, for example, a scientist said, “I am a scientist, therefore, I am smarter and better than you,” that would be arrogance, but that is not usually what I see occurring. Rather, what I generally see is that people accuse scientists of being arrogant simply because the scientists won’t accept their pseudoscience. For example, I frequently see a situation where an anti-vaccer makes an utterly ridiculous claim, and a scientist carefully and politely explains why that argument is unscientific, at which point the anti-vaccer says something to the effect of, “well you’re just being arrogant and think that you know more than everyone else.” It is not arrogant to politely correct errors and debunk bad arguments.

Similarly, I often see people make the bizarre claim that scientists are arrogant because they think that they understand some very well-studied phenomena. For example, on multiple occasions I have heard someone say that scientists who accept climate change are arrogant for thinking that they understand how the climate works. How is it arrogant to think that thousands of studies have allowed us to understand something? It is arrogant to say, based on no evidence whatsoever, that all of those studies are wrong, but simply accepting the results of a massive body of research is in no way shape or form arrogant. Similarly, it is not arrogant of doctors to think that vaccines are safe, because vaccines have been so thoroughly tested.

In short, it is not arrogant for a highly qualified and experienced expert to think that they know more about their area of expertise than someone who has no training or experience in that area, nor is it arrogant to think that thousands of careful studies have produced reliable results. It is, however, arrogant to think that reading a few blogs or sitting around conjecturing puts you on the same level as a highly trained expert. Additionally, experts can certainly be arrogant if they misuse their training/experience, and statements such as, “I’m right because I’m a scientist” certainly display arrogance, but that is not usually the situation that I encounter.

Note: Do not confuse what I am saying here. I am not saying that being an expert automatically makes you right. It clearly doesn’t. Rather, I am talking about a person’s perception of themselves relative to their actual training and experience. Please read this post for an explanation of the difference between deferring to experts and appealing to authority. 

Are scientists close-minded?
At the outset, I want to make it clear that there is an extraordinary difference between being open-minded and being willing to accept utter nonsense. Most scientists are actually open to new ideas. That is, in fact, the reason why science has been able to progress so far. In other words, if scientists were truly close-minded and refused to consider any idea other than the “dogma” of their fields, then our scientific knowledge wouldn’t have changed in decades. It has changed, however, because science is inherently a process of discrediting old ideas and replacing them with better ideas. So most scientists are open to opposing ideas, but those ideas have to be based on good evidence and sound logic. If you want to convince a scientist that they are wrong, then you need to present them with actual high quality evidence (i.e., large, properly conducted studies that were published in reputable peer-reviewed journals).

[Meme: “If You Open Your Mind Too Much Your Brain Will Fall Out” (the title of one of Tim Minchin’s songs)]

This is the fundamental point that so many people seem to miss: being open-minded means being willing to change your view when presented with high quality evidence. It does not mean being willing to change your view based on anecdotes, blogs, YouTube videos, and hearsay. More often than not, when someone says “open your mind” they really mean “accept something totally ridiculous without any solid evidence to support it.” That’s not being open-minded, that’s being gullible.

Do scientists take parents’/the public’s concerns seriously?
This one is a bit baffling to me, because the answer is so obvious. Yes, scientists absolutely take the public’s concerns seriously! That is why there are over 1,700 studies on GMOs, numerous large studies on vaccines and autism, etc. Nevertheless, I constantly hear people who insist that scientists aren’t looking for the “real” cause of autism, aren’t looking for better cancer treatments, haven’t studied GMOs, aren’t looking at the possibility that global warming is natural, etc., but all of these claims are utter nonsense. They are demonstrably false. For any of these topics, you can find multiple studies addressing those concerns. For example, in this post, I explained why we know that climate change isn’t natural, and I cited multiple studies that have examined that possibility. The topic has been extremely well studied, but I felt compelled to write that post because I encounter so many people who seem to think that scientists have never even bothered to look at the possibility that climate change is natural.

To put it simply, when someone says that scientists are ignoring them, more often than not, the problem isn’t actually that scientists are ignoring them, rather, the “problem” is that scientists didn’t find the result that they were hoping for. There are, however, exceptions which I will discuss in the next section.

A related claim is that scientists don’t engage the public. First, many scientists do in fact engage the public (for example, this blog exists because I am a scientist who thinks that it is worth my time to engage the public). Second, scientists are extremely busy people. We usually work over 60 hours a week, so asking us to add public engagement to our work schedule is not a small request. Third, those of us who do try to engage the public are rewarded with a constant wave of hate and insults. I wake up every day to find new messages telling me what a blind idiot I am for accepting the results of carefully controlled studies. I am constantly accused of being a paid shill, and I have had numerous people tell me that I am an evil, murdering monster for supporting vaccines/GMOs. Given that level of vitriol, is it really surprising that most scientists don’t take time out of their busy schedules to engage the public? Again, most of the people making this claim don’t actually want scientists to engage them, rather they just want scientists to pander to them and tell them that they are right.

Are scientists dismissive?
The claim that scientists are dismissive is similar to the claim that they don’t take parents/the public seriously, but it has an important difference. Usually, I hear the latter used to argue that there is a lack of research, whereas I typically hear the claim that scientists are dismissive in association with debates (this is more closely connected to the arrogance claim). In other words, many people argue that when they present scientists with their arguments, the scientists are dismissive and ignore their arguments rather than dealing with them. There certainly are many cases where scientists ignore people’s claims/arguments, but that needs to be qualified in several ways.

First, more often than not, this occurs when someone is making a ludicrous claim/argument that is contrary to everything that we know about the universe, and in those cases, ignoring the claim is often the appropriate response. If, for example, someone tries to tell you that rainbows form when unicorns defecate while flying through the air, you are not in any way obliged to take that claim seriously. That is obviously a silly example, but this happens all the time with real arguments that are equally absurd. Quite simply, if you are arguing for a position that has no scientific support and has been refuted by multiple studies and a basic understanding of science, then your opponent is not being “dismissive” by not taking that argument seriously. Rather, they are being rational.

To be clear, I’m not suggesting that it is OK to respond in a way that is haughty or belittling, but you are under no obligation to treat an absurd proposition as if it is a rational one. Additionally, it is always technically possible that numerous studies are wrong or even that our most fundamental concepts about the universe are wrong, so you should be willing to challenge them if presented with proper evidence. If someone comes to you with a stack of legitimate peer-reviewed studies that document the existence of unicorns and their ability to fly and poop out rainbows, and that explain why our current understanding of the physics of rainbows is wrong, then, at that point, you are obliged to take the argument seriously. However, if all that they are presenting is blogs, anecdotes, etc., then you absolutely can dismiss their arguments, because the burden of proof is on them to support their position; it isn’t your duty to refute it.

[Meme: an “acid diet” claim and a scientist’s response invoking homeostasis]

As an illustration of how this typically plays out, consider the meme on the right. It shows the type of dialogue that many people would label as “dismissive” or “arrogant,” and, indeed, when I shared that on my blog’s Facebook page, one person commented with precisely that claim, so let’s examine that comment. First, the scientist in the meme actually asked a question rather than instantly rejecting the argument. If the person supporting the acid diet had responded with, “yes I do understand homeostasis, and here are several recent studies which showed that our previous knowledge was wrong and foods can actually affect the pH of our blood,” then the scientist would have been forced to look at those studies. The person did not say that, however, because those studies don’t exist. Homeostasis is one of the fundamental concepts of physiology. It has been very well studied, and we have a really good understanding of how it works. So claiming that food will shift the pH of our blood isn’t actually that far below unicorns on the absurdity scale. Therefore, ignoring claims that the acid diet works is not being “arrogant” or “dismissive”; it’s being rational.

Conclusion
Although scientists certainly can be arrogant, close-minded, etc., usually when I see people making these accusations they are simply committing ad hominem fallacies. There is nothing arrogant about thinking that years of advanced training and experience make you more knowledgeable than someone who lacks that training and experience. Indeed, in most areas, we readily acknowledge experts and are happy to defer to them. Yet for some bizarre reason, when it comes to science and medicine, unqualified people feel entitled to think of themselves as experts, and that delusion is the truly arrogant one. Similarly, there is nothing dismissive or close-minded about rejecting anecdotes and shoddy arguments. Being open-minded means being willing to change your mind when presented with solid evidence. It does not mean being willing to accept utter nonsense despite a mountain of contrary evidence.

Note: You could argue that this post is actually anecdotal because it is based on my observations of debates rather than actual statistics, and if someone can find a solid peer-reviewed study showing that scientists have an above-average level of arrogance, then I will happily write an addendum. However, that line of reasoning is really missing the point, because the fundamental question isn’t actually “are scientists arrogant?” Rather it is, “is it arrogant to think that advanced training and experience make someone more qualified than someone who lacks that training and experience?”
