Who reviews scientific papers and how do reviews work?

I spent my afternoon reviewing a paper for a scientific journal and making a recommendation about whether or not the paper should be published. As a scientist, this is not an uncommon task for me, but it is a process that is largely foreign to the general public. Indeed, the peer-review system often seems to be a mystery to those who don’t participate in it, and, as a result, it is a frequent topic for this blog. For example, I have previously written about what it takes to publish a paper. However, I have not previously written a post specifically about what it is like to be a reviewer or even who reviewers are. So, I thought I would take this opportunity to explain the process from a reviewer’s point of view and offer you a window into the system that determines which papers get published.

 

Who are reviewers and how are they chosen?
In short, reviewers are scientists. The peer-review system is rather interesting because essentially everyone involved with it acts as both an author and a reviewer. In other words, reviewers are themselves scientists who also submit papers for review. This is a good system, because it means that scientific papers are being reviewed by other scientists, not by politicians, corporations, etc.

Nevertheless, reviewers obviously aren’t chosen at random from within the scientific community. Rather, journal editors choose them based on relevant experience and expertise. This can happen in several ways. Often, journals require you to recommend reviewers when you submit your paper. In other words, as part of the submission process, you have to nominate several people to serve as reviewers, as well as provide contact information and (often) a justification for why they would make suitable reviewers. As a general rule, you want to suggest people who have published similar papers (often papers that you cited in the paper you are submitting) and who aren’t in any way affiliated with you (this insightful post provides more details about how to select reviewers). Ultimately though, the editor has the final say, and he/she will consider your suggestions and make the final call.

Another common method that journals use to select reviewers is simply to recruit reviewers during the paper submission process (or via society memberships). In other words, the submission form often includes an optional check-box that says something to the effect of “I am willing to be a reviewer for this journal.” Generally, this is also accompanied by a section where you list your areas of expertise. Thus, if you are submitting to a herpetology journal (i.e., reptiles and amphibians) and you check the box and list yourself as a turtle expert, then you will go on the list of potential reviewers for turtle papers.

A third mechanism that editors use is simply to contact authors who recently published similar research in the journal that they edit. Indeed, after publishing a paper in a particular journal, it is quite common for the editor of that journal to contact you about reviewing a different paper for that journal (this has happened to me several times).

Finally, editors may choose simply to send the paper to someone who they know does research in a similar area or who was recommended to them by someone else. For example, the editor may have recently read a similar paper and contact one of the authors of that paper. Indeed, the paper that I reviewed today was in a journal that I have never published in or signed up to be a reviewer for, and the paper did not cite my previous work, so I doubt that the authors recommended me. However, I have previously published extremely similar and relevant research (honestly, it is pretty irritating that the authors didn’t cite me), so I suspect that I was selected by the editor because of my previous papers on this topic.

Regardless of how they are chosen, the point is that reviewers are generally experts on the topic of the paper that they are being asked to review. In other words, editors select scientists who have the necessary skills, knowledge, and experience to assess the paper and determine whether or not it is worthy of publication. Usually, at least two reviewers are selected per paper, but it is not uncommon to have three reviewers, and some journals use four or more.

Who can’t be a reviewer?
Most publishing scientists also serve as reviewers, but not all scientists are eligible to be a reviewer for a given paper. First, to be a potential reviewer, you have to have expertise in the particular field that the paper is about. I, for example, am predominantly a herpetologist, so I would never get asked to review a paper on physics. For that matter, I wouldn’t even get asked to review a paper on botany. I might, however, be asked to review papers on other areas of zoology, but only if the topic of the paper was closely aligned with topics that I study (e.g., one of my areas of research is population ecology, and although I study the population ecology of reptiles and amphibians, the same concepts, statistics, etc. apply to other taxa, so I have the necessary skills and knowledge to assess a population paper on birds, for example).

A second criterion is a lack of conflicts of interest. Exactly how that is defined varies, but as a general rule, reviewers should not be in any way associated with the paper in question, and they should not be institutionally linked to any of the authors. I could not, for example, serve as a reviewer for a paper that was written by another graduate student in my lab, because that would be a conflict of interest (i.e., even though I would try to be objective, I would be less likely to criticize the paper because I work with and like the authors).

Finally, in many cases, authors can recommend people who should not serve as reviewers. Editors are under no obligation to follow these recommendations, but if there is someone who you really don’t want reviewing your paper, you can make that case. For example, if you have a long-standing rivalry with someone and, as a result, that person likely would not be objective, then you can argue that he/she should not be selected as a reviewer. To be clear though, there needs to be a legitimate reason why that person would not make a good reviewer, and you can’t include someone in that list simply because you are concerned that they will find problems with your paper.


What happens when you receive a request to review a paper?
Each journal is different, but this is generally how things play out. First, you receive an email from an editor asking if you would be willing to review a given paper. This email usually includes the abstract for the paper, or, at the very least, the topic that the paper is on. It may also include other information such as the number of pages, words, figures, and/or tables in the paper. Sometimes this email will have information about the reviewing standards of that journal such as whether or not the review is anonymous and the time frame in which you are expected to provide the review.

As a reviewer, you then look at the information that you have been given and decide whether or not the paper is on a topic that you are sufficiently knowledgeable about, whether or not you have any conflicts of interest, and whether or not you currently have the time to review it. You then respond to the editor to tell them whether or not you are willing to be a reviewer. If you reject the request, it is generally considered good form to suggest an alternative reviewer (which is another way that editors identify potential reviewers).

Once you accept your duty as a reviewer, you generally receive a copy of the paper as well as paperwork on confidentiality. Some journals also provide you with a review template that you are supposed to fill out, whereas others let you provide feedback in whatever way you see fit. Generally, with either system, the final review will consist of several summary statements about your views on the paper, a list/explanations of your major criticisms, and a list of comments on specific lines (these are either annotations to the original document, or a list with corresponding line numbers).

What do reviewers look for?
Imagine that you are a reviewer, and you have just been given a paper to read. What are you going to look for? There are actually lots of things that you should look for, but the key thing to keep in mind is that the job of a reviewer is to assess the quality of the research rather than acting as an editor. In other words, reviewers do look at things like grammar, readability, and the presentation of the data (e.g. good/appropriate figures and tables), but their main duty is to act as a filtering mechanism that blocks bad science from getting published and helps authors to improve potentially useful research.

As such, the most important thing that reviewers look at is the methodology. They check things like sample size, experimental design, the statistics that were used, how the statistical models were set up, etc. All of this is intended to identify poor methodology and ensure that the study was done correctly. Following that, reviewers will check to see if the results were reported and discussed properly. It is not at all uncommon for authors to jump to conclusions that are not supported by the data, and it is the reviewers’ job to rein them back in and make sure that all of the conclusions are merited.

Additionally, reviewers are tasked with making sure that the paper is well grounded in the scientific literature. All research inevitably builds on previous research. As such, papers are supposed to cite and discuss relevant papers, especially if similar studies reached different conclusions, so reviewers check to make sure that this was done appropriately. This is another reason why it is important for reviewers to be experts on the given field. As experts, they know the literature, so if important papers are missing, they will be able to point them out. Indeed, it is exceptionally common for reviewers to suggest specific papers that the authors should have cited (I made several such suggestions on the paper that I reviewed today).

Finally, it is the reviewers’ job to actually be helpful to the authors. Many otherwise nice people become utter jerks when reviewing, but the idea is actually for reviewers to provide constructive criticism that will help the authors improve their paper. Thus, reviewers are asked not only to point out problems with the paper, but also to suggest ways to fix those problems (e.g., “your statistical method is inappropriate, and you should use method X instead”).

What do reviewers recommend?
You have now read the paper and made a list of comments, which means that it is time for you to make a decision. Does this paper deserve to be published? You generally have 4–5 options to choose from (some journals don’t use option 2).

  1. Reject without the option to resubmit — This means that the paper is seriously flawed and will be rejected without further consideration.
  2. Reject with the option to resubmit — This means that the paper has serious flaws, but it also has merit if those flaws can be corrected. The authors can then revise the manuscript based on your comments and resubmit it to that journal. At that point it goes back out for review, and you will often (but not always) be asked to review the revised paper.
  3. Accept pending major revisions — This means that the study has merit, but there are still some substantial issues that need to be addressed. If the authors can correct those errors to the editor’s satisfaction, then it will be accepted for publication without further review.
  4. Accept pending minor revisions — This means that the paper is solid, but there are some minor issues that need to be dealt with before it will be published.
  5. Accept in its current form — This is a theoretical state in which a paper is accepted without any changes being required. I’m not convinced that it ever actually happens to real papers (it definitely hasn’t in my experience as either an author or reviewer).

After you make and justify your decision, the editor will look at your comments as well as the comments made by the other reviewer(s), then make the final decision about the fate of the paper. This system of having several reviewers is another strength of the peer-review process, because even if one reviewer does a crappy job and misses major flaws, the other reviewer(s) are there to pick up the slack.


Why do scientists serve as reviewers?
It is worth mentioning that you don’t get paid to be a reviewer. It is entirely a volunteer service, and it is quite time consuming. I, for example, had a large data set that I was hoping to analyze this afternoon, but instead I spent six hours reviewing a paper, and I never got around to the data (and this was a fairly short paper). So you may be wondering why on earth scientists do it. Why don’t we always just reject requests to review papers? I obviously can’t speak for every scientist, so I can’t offer anything statistical, but I can tell you my own views on why I do it and take it seriously, and I know many other scientists who feel the same way, so I suspect that this is a common response.

First, there is simple reciprocity. Every time that I submit a paper, I am imposing on other scientists to take time out of their busy schedules and review my research. As such, it is only fair that I then take time out of my schedule to review other scientists’ research. If everyone tried to pass the buck, then all of the reviews would be conducted by an increasingly small and disgruntled group of scientists, and that would be bad for everyone. So acting as a reviewer is really just paying my dues as a member of the scientific community.

Second, as a reviewer, I get to play an active role in ensuring the quality of the research in my field, and that is a duty that I take very seriously. I obviously care greatly about my field and the advancement thereof, and I want the papers in my area of research to be of the highest quality possible. So, by serving as a reviewer, I get to block flawed research, promote high quality research, and make recommendations about how to improve research. This lets me extend my influence and impact on my field far beyond my own publications, and I see that as both a duty and a privilege.

To put this another way, I think that being a reviewer is an enormous responsibility. I know what it takes to do original research. I know the amount of work and effort that is involved, and I know what it is like to have all of your work torn to shreds by a reviewer. So, when I receive a paper, I always want it to be good and publishable, because I want those scientists to be rewarded for their extraordinary effort. At the same time though, when I agree to review a paper, I accept responsibility for preventing bad research from being published, and that responsibility motivates me to make absolutely sure that the research is solid before I give it my stamp of approval.

Summary
In short, reviewers are simply other scientists who have been matched with a paper based on their experience and expertise. It is their duty to carefully examine the paper, determine whether or not it was conducted correctly, and recommend if it should be published or rejected. Scientists do not get paid for this service, but it is an important task that researchers tend to take seriously.

Note: Journals vary widely with regards to anonymity. In some cases, reviewers don’t know who the authors are and authors don’t know who the reviewers are. In other cases, reviewers have the authors names, but authors don’t know who the reviewers are, and in yet other cases, both parties know the identities of the other party, and some reviews are even made public. There is an interesting debate about which system is best, and I had planned on going into it, but this post became longer than intended, so I will save that topic for a later post.


This flowchart summarizes the steps required to publish a peer-reviewed paper. See this post for details.


Measles is not better than autism: Debunking anti-vaccine arguments

Over the weekend, I was unfortunate enough to come across an article by Jaclyn Harwell at “Modern Alternative Health” called “5 Reasons That Measles is Better Than Autism.” Unsurprisingly, it was full of misinformation and shoddy arguments. Indeed, it was so full of counterfactual claims and dishonest distortions of reality that I felt compelled to write a rebuttal, especially since the faulty arguments contained in the post are prevalent among antivaccers. Therefore, I am going to dissect that post and explain why it is nonsense. Before I get to Jaclyn’s “5 reasons,” however, I need to deal with several serious problems in the opening statements of the article.

First, this post is fundamentally flawed because the entire thing is based on the false dichotomy that you have to choose between vaccines and autism. In reality, of course, vaccines do not cause autism. As I explained at length in this post, numerous enormous studies have tested the vaccine/autism hypothesis and failed to find any evidence of vaccines causing autism. The only studies that suggested that vaccines cause autism were tiny and riddled with problems. Anecdotes don’t matter, deceptive documentaries are irrelevant, and the “CDC whistle-blower” did not present any actual evidence of vaccines causing autism (or of CDC corruption, for that matter). Only scientific evidence matters, and science is overwhelmingly on the side of vaccines.


Second, the core argument throughout the article is that measles is a mild illness, but that argument is a blatant lie. I’ll deal with this at length under reason #5, but briefly, in developed countries with access to modern medicine, measles still has a death rate of 1 in 1,000 infected individuals. Nevertheless, it is true that in terms of sheer numbers, there are few measles deaths in industrialized countries today, but that is only because of vaccines! Indeed, in the 10 years prior to the introduction of the measles vaccine, measles killed an average of 440.3 children each year in the US alone (details and sources here; also note that the death rate per infected individual was roughly 1 in 1,000 then as well). Further, when we look at the entire world, measles still kills well over 100,000 people annually, and the World Health Organization describes it as “one of the leading causes of death among young children” (WHO 2016). That’s not fear-mongering, that’s a fact. So while anti-vaccers want you to think that measles is trivial, actual epidemiologists have a very different view of this disease.

Further, even when children don’t die from measles, there is still a 1 in 10 chance that they will develop an ear infection (which can cause permanent hearing loss), a 1 in 20 chance that they will develop pneumonia, and a 1 in 1,000 chance that they will develop encephalitis (a swelling of the brain; CDC 2015a). There are also other complications such as febrile seizures in 0.1–2.3% of patients (Orenstein et al. 2004). Indeed, during a large outbreak in France, nearly 25% of victims had to be hospitalized (Antona et al. 2013), and in the US from 2001–2008, 40% of measles patients were hospitalized (CDC 2011). You simply cannot describe a disease that hospitalizes 25–40% of its victims and kills over 100,000 people annually as a minor illness (WHO 2016). That is extremely dishonest (see reason #5 for a more thorough explanation of just how deadly this disease truly is).


Please carefully note how the measles outbreaks are centered around the communities with low vaccination rates. Image from Knol et al. 2013

Third, the author claims that, “in the majority of outbreaks, most affected people have, indeed, been vaccinated.” This is an extremely common anti-vaccine argument, and it is horribly flawed. First, in the case of measles the claim itself isn’t even true. For example, in the 2001–2008 study that I mentioned earlier, 89% of patients were unvaccinated (CDC 2011). Similarly, during an enormous outbreak in France that involved over 20,000 people, 80% of patients were unvaccinated (Antona et al. 2013), and during a large outbreak in the Netherlands, 91.7% of patients were unvaccinated (Knol et al. 2013). Those are not cherry-picked examples. Rather they are the norm for measles outbreaks.

Additionally, and far more importantly, this argument ignores rudimentary math. You have to look at the proportions not the raw numbers. Most people are vaccinated, so of course many outbreaks will affect vaccinated people. Simply looking at the raw number of cases doesn’t tell us whether or not vaccinated people actually get the disease more often, and when we look at the actual rates, we find that infection rates are substantially higher among the unvaccinated (King et al. 1991; Schmitz et al. 2011). To give a completely analogous example, most car accidents involve sober drivers, but that doesn’t mean that driving drunk isn’t dangerous. Most people drive sober. Therefore, in terms of raw numbers, most accidents involve sober drivers. However, when you look at the rates, you find that the rate of accidents is far higher among drunk drivers than among sober drivers.
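To make the base-rate point concrete, here is a quick back-of-the-envelope sketch. All of the numbers in it are hypothetical for illustration (the 93% one-dose efficacy figure comes from the sources cited later in this post; the coverage and attack-rate values are made up), but it shows how vaccinated people can make up the majority of raw cases even though each unvaccinated person is far more likely to be infected:

```python
# Hypothetical illustration of the base-rate fallacy in outbreak counts.
population = 100_000
coverage = 0.95                  # hypothetical: 95% of the population is vaccinated
attack_rate_unvaccinated = 0.90  # hypothetical infection risk if exposed, unvaccinated
efficacy = 0.93                  # one-dose efficacy figure cited later in this post

vaccinated = population * coverage
unvaccinated = population - vaccinated

# The vaccine blocks 93% of would-be infections among the vaccinated.
attack_rate_vaccinated = attack_rate_unvaccinated * (1 - efficacy)
cases_vaccinated = vaccinated * attack_rate_vaccinated
cases_unvaccinated = unvaccinated * attack_rate_unvaccinated

# Raw counts: vaccinated cases outnumber unvaccinated cases...
share_vaccinated = cases_vaccinated / (cases_vaccinated + cases_unvaccinated)

# ...but the per-person risk is still about 14 times higher without the vaccine.
risk_ratio = attack_rate_unvaccinated / attack_rate_vaccinated

print(round(cases_vaccinated), round(cases_unvaccinated), round(risk_ratio, 1))
```

With these made-up numbers, roughly 5,985 of the 10,485 cases (about 57%) occur in vaccinated people, yet an unvaccinated person’s risk of infection is about 14 times higher. Raw case counts and per-person risk tell very different stories, which is exactly the trap the anti-vaccine argument falls into.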

Finally, the author insinuates that “the efficacy of vaccinations is questionable at best.” Again, this is simply not true, especially for the measles vaccine. Even beyond the examples that I cited earlier of outbreaks corresponding to unvaccinated communities and higher measles rates among the unvaccinated, many other studies have examined the efficacy of the measles vaccine, and it is quite high (93% with one dose and 95–97% with two doses; King et al. 1991; CDC 2015b). For example, Clemens et al. (1988) found that introducing the measles vaccine into a population reduced measles death rates by 57%, but please go ahead and say that the vaccine isn’t effective or important (note the immense sarcasm).

Now that we are clear on the actual facts, let’s look at the five arguments in the article. My intention is to address the original article point by point, but the original jumped around quite a bit and didn’t follow a great logical structure. As a result, I thought that it might be useful to provide a bullet list of key points and the sections in which they are discussed.

  • Vaccines don’t cause autism (introduction, #1)
  • Getting measles to avoid getting measles is idiotic (#1)
  • Measles actually weakens the immune system (#2)
  • Neither measles nor autism can be cured, but measles can be prevented (#4)
  • Total measles mortalities are relatively low because of vaccines (#1, #5)
  • Measles still kills over 100,000 people annually (introduction, #1, #5)
  • Without vaccines measles would kill over 1 million people annually (#5)
  • Without vaccines, at least 0.35 in 100 children under the age of five would die from measles each year (#5)

Note: To be clear, if you want to say that surviving a measles infection without any serious or lasting consequences is better than a life-long affliction with severe autism, fine, I’m not going to disagree with you. What I take issue with is the dishonest and misleading claims about measles mortality rates, the dangerous and unmerited vilification of vaccines, and the damaging way that autism is presented. We need to stop talking about autism as if it is the world’s worst disease and as if people with autism are inferior, damaged, and in need of repair. That is an indignity that autistic people should not have to suffer. Autism is not the worst thing that could happen to your child.

 

Bad reason #1: “Measles imparts lifelong immunity.”
I have repeatedly had to address this argument from anti-vaccers (for example here and here), which is frustrating because it is such an obviously ridiculous argument. We can rephrase this argument simply as, “getting measles is good because it prevents you from getting measles.” Think about that for a second. This argument is actually proposing that you should avoid getting sick by getting sick. In contrast, a vaccine will prevent you from ever getting sick. In other words, a vaccine prevents you from ever getting measles, whereas measles only prevents you from getting measles a second time. Using a measles infection to protect you from a measles infection is like using pregnancy as a contraceptive.

To be fair, some studies have found that vaccine-induced immunity does not last as long as natural immunity (Christenson and Bottiger 1994), but other research has found that both forms last equally well for many years (Jokinen et al. 2007), and since measles is predominantly a childhood disease, those first years are the really important ones. Additionally, the longevity of immunity can easily, safely, and effectively be extended with boosters, which, once again, prevent you from getting measles in the first place. Further, I have already provided multiple citations showing that the measles vaccine is very effective and infection rates are much higher among the unvaccinated. In other words, this anti-vaccine argument makes it sound like getting measles is the best way to avoid getting measles, but that is clearly ridiculous. The vaccine is by far the best way to avoid ever having to suffer through measles.

Jaclyn goes on to reiterate that measles “isn’t a big deal” by citing the fact that between 1950 and 1960 the death rate for the US was less than 1 per 100,000 individuals (for the entire population, not per infected individual). As I showed earlier, however, that comes out to well over 400 deaths annually, which is in fact a big deal (also see #5 for an explanation of why the 1 in 100,000 figure is misleading). Even if vaccines caused autism (which again, they don’t), a life of autism is not worse than death by measles.
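As a sanity check, a rate below 1 per 100,000 of the total population and 400+ deaths per year are entirely consistent with each other. A minimal sketch, assuming a mid-1950s US population of roughly 160 million (the population figure is my assumption, not a number from the original article):

```python
# Rough consistency check: a small per-capita rate still means many deaths.
us_population = 160_000_000  # assumed approximate mid-1950s US population
deaths_per_year = 440.3      # pre-vaccine annual average cited above

rate_per_100k = deaths_per_year / us_population * 100_000
print(round(rate_per_100k, 2))  # ≈ 0.28, i.e., "less than 1 per 100,000"
```

In other words, the “less than 1 per 100,000” framing and the “well over 400 dead children per year” framing describe the same reality; one just sounds smaller than the other.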

She also claims that there were no deaths among the 1,153 US cases of measles from 2001–2013. First, that claim is not true. There were two measles deaths from 2001–2003 (CDC 2004), another two in 2009 (Kockanek 2011), and another two in 2010 (Murphy 2010). Nevertheless, the death rates are low, but this argument totally ignores the fact that the reason that we have so few deaths is because we have so few cases of measles, and the reason that we have so few cases is because we have vaccines! Vaccines are the only reason that we don’t have thousands of measles deaths each year. Indeed, it is estimated that between 1994 and 2013, the measles vaccine prevented >70 million cases of measles in the US, which comes out to a total of 57,300 deaths according to the calculations used by the authors (Whitney et al. 2014). That number may, however, be higher. If, for example, we simply apply the normally accepted 1 in 1,000 death rate, then it would be 70,000 deaths (3,500 measles-induced deaths annually). Either way, the point is that the vaccine prevents thousands of deaths each year, and you simply cannot pass that off as a minor thing.
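The arithmetic behind those last figures is simple enough to check for yourself, using only the numbers already cited above:

```python
# Deaths averted by the US measles vaccine over 1994-2013 (20 years),
# applying the commonly cited 1-in-1,000 death rate to prevented cases.
cases_prevented = 70_000_000  # >70 million cases prevented (Whitney et al. 2014)
death_rate = 1 / 1_000        # deaths per infected individual
years = 20                    # 1994 through 2013, inclusive

deaths_prevented = cases_prevented * death_rate       # 70,000 deaths averted
deaths_prevented_per_year = deaths_prevented / years  # 3,500 deaths per year
print(round(deaths_prevented), round(deaths_prevented_per_year))
```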

Finally, she quotes a “study” which said that, “The mass of scientific evidence compiled by researchers clearly indicates that the incidence of autism occurs following vaccination and is most closely associated with the schedule of vaccines culminating in the MMR vaccine” (Ewing 2009). However, that “study” was not actually a study. It wasn’t even a proper review. It was an entirely speculative opinion piece that made one false claim and unmerited assumption after another. It was based on a correlation fallacy, and it never once provided actual evidence to support the quoted statement. Further, it ignored the vast body of large studies that failed to find any evidence of vaccines causing autism. In other words, all that this paper did was propose a hypothesis, but that hypothesis has already been thoroughly tested and falsified.

Indeed, even the first sentence of the paper is demonstrably false. It says, “that the occurrence of autism has risen steadily in the last decades is not in dispute.” In reality, the rise in autism rates is very much in dispute, with many studies concluding that it is at least largely due to a change in diagnostic criteria rather than an actual increase (Rutter 2005; Taylor 2006; Bishop et al. 2008; Baxter et al. 2015; Hansen et al. 2015). When I first looked at this paper, I was baffled by how a paper that opened with such a clearly false statement could possibly have passed peer-review. Then, I checked the journal it was published in, and it was a journal that is so minor that it doesn’t even have an impact factor (in other words, the scientific community doesn’t take it very seriously). You should always be wary of journals like this.

Bad reason #2: “Measles strengthens the immune system.”
No it doesn’t. I explained this in detail here, but in short, after a measles infection, your body produces antibodies that are specific for measles. So it only “strengthens” your immune system in that you can’t get measles again. It doesn’t help you fight any other infections. Further, as I explained under #1, thanks to vaccines, you can get those exact same antibodies without actually getting measles.

Further, recent research has shown that measles infections are so hard on your immune system that it actually takes up to 2–3 years for a child’s immune system to return to normal functional levels (Mina et al. 2015). In other words, measles infections weaken the immune system for several years, and these weakened immune systems lead to infections and deaths that aren’t generally attributed to measles. As a result, the actual measles death tolls are higher than we realize (not to mention that these additional infections often come with lasting side-effects).

Finally, it is true that associations with some microorganisms help to prevent autoimmune disorders, but that is actually because those organisms “train” the immune system not to over-react. So they aren’t “strengthening” the immune system, they are training it to give a reduced response (a “strengthened” or “boosted” immune system is actually what causes autoimmune disorders). Further, the microbes that do this are generally beneficial or benign, not pathogenic (Gaurner et al. 2006). So there is no evidence that a measles infection will help you out later down the road.

 

Bad reason #3: “Autism causes long-term damage.”
I certainly don’t deny that autism can cause life-long problems (though I would like to stress that autism is a spectrum of disorders, and many people who are clinically diagnosed as autistic are very high functioning and live normal lives). However, in this section, Jaclyn reiterates her claims that measles isn’t dangerous, a measles infection gives life-long immunity to measles, etc. This section is really just a rehash of previous sections, so I won’t belabor the point.

 

Bad reason #4: “Measles is easier to cure than autism.”
I’m not arguing against the core statement here, but the pseudoscience that she uses to try to back up that statement is deplorable. For example, she opens with, “There’s actually no cure for measles. Know why? Because it’s a benign childhood disease” (her emphasis). First, there is no cure because measles is a viral infection, and viral infections are notoriously hard to cure (that is why we also lack cures for HIV, H1N1, the common cold, etc.). Second, another reason that there is no cure is that most people aren’t looking for a cure, because we have vaccines. In other words, we know that vaccines work very well, so it is more effective to invest money in increasing vaccine coverage, rather than in looking for a cure. Third, once again, measles has a death toll of over 100,000 people annually, and, as I’ll explain in #5, without vaccines that number would be well over 1,000,000. You simply cannot describe that as “benign.”

Next, she further tries to downplay the severity of measles by claiming that treatment with vitamin A reduces death rates by 62%. You’d think that the fact that she is talking about reducing death rates would make her rethink her claim that measles is “benign,” but apparently not. Nevertheless, there is some evidence that vitamin A is useful in overcoming a measles infection. However, citing that 62% figure without context is very deceptive, because it comes from a meta-analysis of studies in Africa, where vitamin A deficiencies are a major problem (Sudfeld et al. 2010). In industrialized countries, however, most of us have plenty of vitamin A, and vitamin A is a standard part of measles treatments (CDC 2016a). In other words, that 1 in 1,000 death rate that we talked about earlier already includes vitamin A treatments, but if you read Jaclyn’s post, it sounds like vitamin A is a way for you to improve those odds.

Finally, she claims that autism can be cured, which is a dangerous way to give parents false hope. There is currently no cure for autism (after all, it’s largely genetic). There are ways to help manage it, but not cure it (please don’t flood the comments with anecdotes, because anecdotes are not valid scientific evidence. If you don’t have large, properly controlled clinical trials that were published in reputable journals, then you don’t have any evidence).

Note: If you want to be pedantic, neither measles nor autism can be “cured” because your body, not a medication, is actually what fights the measles infection. So if we are going to succumb to pedantry, the core claim of this argument isn’t technically true.


Bad reason #5: “Measles complications are uncommon.”
Much of this section is a regurgitation of previous arguments, so I will focus on the novel parts. The first is the claim that 1 in 45 children in the US have autism, coupled with the claim that 1% of the entire world’s population is autistic. I don’t have too much trouble with the 1 in 45 claim because it came from a legitimate study (though it is worth noting that the estimate that is currently accepted by health organizations is actually 1 in 68 children in the US; CDC 2016b), but the 1% figure is misleading. No source is given, but the source that I found lists it as 0.6%, not 1%, and when we are dealing with the world’s entire population, that rounding error is substantial (roughly 28 million people; Elsabbagh et al. 2012). Also, I have a serious problem with the way that those numbers are being used here. Jaclyn is clearly trying to use them to argue that autism is a very common thing and, therefore, should be feared. However, autism is a spectrum of disorders, and early in the post, Jaclyn said that her post was “referring to children with severe, regressive cases of autism, not those that are high-functioning and more self-sufficient.” Do you see the problem? It is extremely dishonest to cite the overall autism rate if you are only talking about the most extreme versions of it. In other words, 0.6% of the world’s population has some form of autism, but only a small subset of that 0.6% has the type of severe autism that this post was supposedly about. So this is yet another instance of her using numbers dishonestly in a failed attempt to support her flawed position.
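To see why that “rounding error” matters at a global scale, here is the arithmetic as a quick sketch. Note that the world population of roughly 7 billion is my assumption for illustration; the post itself does not state the population figure behind the 28 million estimate.

```python
# Rough check of the rounding-error claim. The ~7 billion world
# population is an assumed figure, not one stated in the post.
world_pop = 7_000_000_000

claimed = 0.010 * world_pop   # the 1% figure from Jaclyn's post
measured = 0.006 * world_pop  # the 0.6% estimate (Elsabbagh et al. 2012)

difference = claimed - measured
print(f"{difference / 1e6:.0f} million people")  # → 28 million people
```

In other words, rounding 0.6% up to 1% silently adds tens of millions of people to the claim.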

Next, she makes the argument that most measles deaths actually occur in developing countries that have poor sanitation standards. The claim itself is true, but the insinuation that sanitation standards are the cause of the infections is false. The deaths are from those countries because they are the ones that don’t have vaccines. As I explained here and here, the sanitation standards in the US were essentially the same in the 50s as they are now, yet there were still hundreds of measles deaths each year. Why? Because there were no vaccines. Similarly, when we introduce vaccines into countries with very poor sanitation standards, the death rates drop (Clemens et al. 1988). Why? Because vaccines work (King et al. 1991; CDC 2015b). Further, as I explained earlier, industrialized countries continue to have large measles outbreaks when vaccination rates drop, and some of those outbreaks result in unnecessary deaths (Antona et al. 2013; Knol et al. 2013).

She then goes on to act as if the 100,000+ annual deaths from measles aren’t really a big deal because they only represent a small portion of the total population. First, I find that attitude unconscionable, since those deaths are preventable. Second, vaccines are the only reason that the death rate is so low! Even if we back the clock up to the year 2000, the death toll was 546,800 (WHO 2016). Why did it drop so rapidly between then and now? Because we have had a massive vaccination campaign and have taken vaccines to many developing countries, including ones that lack sanitation. Indeed, it is estimated that between 2000 and 2014, the measles vaccine prevented over 17.1 million deaths (WHO 2016). That’s 1.14 million per year. So you simply cannot present the current number of mortalities as evidence that we shouldn’t vaccinate, because the current numbers are low because of vaccines.

Additionally, there is a huge problem here because the autism rates and measles death rates aren’t exactly comparable. For example, she claims that 1% of the world’s population has autism (that should be 0.6%), whereas only 0.002% of the world’s population dies from measles each year. Her argument seems to be that the odds of having autism are far greater than the odds of dying of measles, but that is not a fair comparison because measles deaths are not distributed evenly by age, and roughly 55% of deaths occur in children under 5 (Orenstein et al. 2004). So when you want to look at risk, you can’t simply divide the death rate by the world’s entire population (as she did) because children that die at a young age aren’t recruited into the older age groups. Thus, her calculations result in a gross underestimate of the mortality risk. Let’s get around that problem by looking at the risk that a child will die from measles before reaching five years of age, which is the time period during which 55% of measles mortalities occur (Orenstein et al. 2004).

There are currently roughly 177 million children under the age of five (census.gov/popclock), which gives us a mortality rate of 0.036% per year*. You might think that this means that there is a 0.6 in 100 chance of developing autism (using world-wide data), and a 0.036 in 100 chance of dying of measles before age five (using world-wide data), but that’s not actually fair either, because that estimate uses the death rates given our current vaccination status. If we remove vaccines, we jump to 1.14 million deaths per year, and our risk of death for children under five jumps to 0.35 in 100. At this point the mortality and autism rates are quite similar, but we aren’t even done yet, because Jaclyn specified at the beginning that she was talking about severe autism. I don’t know exactly what she counts as “severe,” but let’s assume that it is the top 50% of cases. Well then guess what: at that point, the autism risk for a child is 0.3 in 100 whereas the risk of measles death before the age of five is 0.35 in 100** (i.e., death from measles is more likely). Now, obviously the exact numbers will vary by country and by what you count as “severe autism,” but the point is that when you actually do the math, in the absence of vaccines, the measles death rate would be very close to (probably even greater than) the current rate of severe autism. Also note that this is just the probability of death prior to age five, but 45% of measles deaths happen at later ages, so the overall mortality risk is actually much higher.***

In other words, if you want to agree with Jaclyn’s argument, you have to argue that death by measles is better than a life with severe autism, because if everyone stops vaccinating, then the risk of a child dying from measles will be roughly equal to the current risk of a child developing severe autism. So that (according to Jaclyn) is what you are choosing between: death or life with a disability. In reality, of course, you don’t have to choose because vaccines don’t cause autism. So your children can enjoy an exceptionally low risk of death by measles without it affecting their risk of developing autism.

*Note: I calculated the risk by taking the number of annual measles deaths (114,900; WHO 2016), multiplying by the proportion of mortalities that occur in children under five (0.55), dividing by the number of children under five on the planet (177 million), then multiplying by 100 to convert it into a percentage. The calculation for the death rate without vaccines was identical but used 1.14 million deaths instead of 114,900.
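For readers who prefer to see the arithmetic spelled out, here is a sketch of that calculation, using the figures given in the main text (114,900 deaths per year with vaccines, roughly 1.14 million per year without):

```python
# Sketch of the under-five mortality-rate calculation described above.
annual_measles_deaths = 114_900      # WHO 2016
share_under_five = 0.55              # Orenstein et al. 2004
children_under_five = 177_000_000    # census.gov/popclock

def under_five_mortality_pct(deaths):
    """Annual measles mortality among under-fives, as a percentage."""
    return deaths * share_under_five / children_under_five * 100

with_vaccines = under_five_mortality_pct(annual_measles_deaths)
without_vaccines = under_five_mortality_pct(1_140_000)  # ~1.14 M deaths/yr

print(f"{with_vaccines:.3f}% per year")     # → 0.036% per year
print(f"{without_vaccines:.2f}% per year")  # → 0.35% per year
```

Running the numbers this way reproduces both the 0.036% figure for the current situation and the 0.35 in 100 risk for the no-vaccine scenario.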

**Note: the risk of death is actually much higher than I described here because I left out another really important factor. Namely, the autism risk is estimated per child (you only “get” autism once), but the measles deaths are per child per year, so there are four opportunities for death. I left this out of the calculations because the math is convoluted since most children only get measles once. As a result, the probabilities change each year, and the math was more intense than I felt like explaining here, but I thought it was worth at least mentioning that my estimate is actually a gross underestimate. (I’ve found that few people bother to read math-heavy posts, which is quite unfortunate).

***Note: I was too lazy to calculate the overall mortality risk, but if anyone feels like doing it, it is a cumulative probability. So you calculate the probability for each age class, then sum those. You have to do it this way because anyone who dies of measles in one age class is automatically eliminated from all subsequent age classes. In other words, if you try to do the math using a large age range (say 1–20) you won’t get reliable results, because the mortality risk for a 20 year-old is vastly different from the risk for a 1 year-old and, as a result, many 1 year-olds die and never make it to 20. The best way to think about this is that for each age bracket, you are calculating the mortality risk given that you survived to enter that age bracket. Thus, the mortality risk for children under five is quite high, but if you survive to age five, then the mortality risk before reaching age eight is much lower, and the overall mortality risk for a 0–7 year old is the sum of the risk from ages 0–4 and the risk from ages 5–7.
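For anyone who does want to attempt it, the bookkeeping might look something like this. To be clear, the per-bracket risks below (other than the 0.35 in 100 figure from the post) are hypothetical placeholders that I made up purely to illustrate the structure of the calculation; they are not real measles data.

```python
# Illustrative only: the "5-7" and "8-19" risks are made-up placeholders,
# not real measles data. The point is the bookkeeping: each bracket's risk
# is conditional on surviving all earlier brackets, so the cumulative risk
# is the sum of (probability of reaching the bracket) x (risk within it).
bracket_risk = {
    "0-4":  0.0035,   # 0.35 in 100, as estimated in the post
    "5-7":  0.0005,   # hypothetical placeholder
    "8-19": 0.0002,   # hypothetical placeholder
}

survived = 1.0        # probability of being alive entering a bracket
cumulative = 0.0
for ages, risk in bracket_risk.items():
    cumulative += survived * risk  # unconditional deaths in this bracket
    survived *= (1 - risk)         # survivors move on to the next bracket

print(f"overall mortality risk: {cumulative:.4%}")  # → 0.4197%
```

Note that simply summing the raw bracket risks would slightly overstate the total, because anyone who dies in one bracket can no longer die in a later one; the `survived` factor handles exactly that.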

 

Conclusion/summary
In short, measles is in fact a very serious disease. It currently kills well over 100,000 people annually, and it is estimated that without vaccines that number would be over one million. Further, you have to remember that the majority of measles deaths occur in children under five years old, so without vaccines, we would expect a minimum of 0.35 deaths per every 100 children under five. That is not something that should be taken lightly or described as “benign.” Additionally, this entire post is based on a false dichotomy, because vaccines don’t cause autism. So you don’t have to choose between the risk of a measles death and the risk of autism, because getting vaccinated does not increase your chance of developing autism. Simply put, this post made one unscientific, misleading, and downright dishonest claim after another. The measles vaccine is extremely safe and effective, and you should not give in to the baseless fear-mongering.

Citations

  • Antona et al. 2013. Measles elimination efforts and the 2008–2011 outbreak in France. Emerging Infectious Diseases 19.
  • Baxter et al. 2015. The epidemiology and global burden of autism spectrum disorders. Psychological Medicine 45:601–613.
  • Bishop et al. 2008. Autism and diagnostic substitution: evidence from a study of adults with a history of developmental language disorder. Dev Med Child Neurol 50: 341–345.
  • CDC 2004. Epidemiology of Measles—United States, 2001–2003. Morbidity and Mortality Weekly Report 53:713–716.
  • CDC. 2011. Measles — United States, January–May 20, 2011. Morbidity and Mortality Weekly Report 60:666–668.
  • CDC. 2015a. Measles (Rubeola): Complications of Measles. Accessed 20-Aug-16.
  • CDC. 2015b. Vaccines & Immunizations: Measles–Q&A about Disease & Vaccine. Accessed 20-Aug-16.
  • CDC. 2016a. Measles (Rubeola): For Healthcare Professionals. Accessed 21-Aug-16.
  • CDC. 2016b. Autism Spectrum Disorder (ASD): Data & Statistics. Accessed 21-Aug-16.
  • Christenson and Bottiger. 1994. Long-term follow-up study of rubella antibodies in naturally immune and vaccinated young adults. Vaccine 12:41–45.
  • Clemens et al. 1988. Measles vaccination and childhood mortality in rural Bangladesh. American Journal of Epidemiology 128:1330–1339.
  • Elsabbagh et al. 2012. Global prevalence of autism and other pervasive developmental disorders. Autism Research 5:160–179.
  • Ewing. 2009. What is regressive autism and why does it occur? Is it the consequence of multi-systemic dysfunction affecting the elimination of heavy metals and the ability to regulate neural temperature? North American Journal of Medical Sciences 1:28–47.
  • Guarner et al. 2006. Mechanisms of disease: The hygiene hypothesis revisited. Nature Clinical Practice Gastroenterology and Hepatology 3:275–284.
  • Hansen et al. 2015. Explaining the increase in the prevalence of autism spectrum disorders: the proportion attributable to changes in reporting practices. JAMA Pediatrics 169:56–62
  • Jokinen et al. 2007. Cellular immunity to mumps virus in young adults 21 years after measles-mumps-rubella vaccination. Journal of Infectious Diseases 196:861–867.
  • King et al. 1991. Clinical efficacy of measles vaccine during the 1990 measles epidemic. Paediatric Infectious Disease Journal 10:883–888.
  • Knol et al. 2013. Large ongoing measles outbreak in a religious community in the Netherlands since May 2013. Eurosurveillance 18.
  • Kockanek. 2011. Deaths: Preliminary Data for 2009. National Vital Statistics Reports 56.
  • Mina et al. 2015. Long-term measles-induced immunomodulation increases overall childhood infectious disease mortality. Science 348:694–699.
  • Murphy et al. 2010. Deaths: Final data for 2010. National Vital Statistics Reports 61.
  • Rutter. 2005. Incidence of autism spectrum disorders: changes over time and their meaning. Acta Paediatr 94:2–15.
  • Orenstein et al. 2004. The clinical significance of measles: a review. Journal of Infectious Diseases 189:S4–S16.
  • Schmitz et al. 2011. Vaccination status and health in children and adolescents findings of the German health interview and examination survey for children and adolescents (KiGGS). Dtsch Arztebl Int 108:99–104.
  • Sudfeld et al. 2010. Effectiveness of measles vaccination and vitamin A treatment. International Journal of Epidemiology 39:i48-i55.
  • Taylor et al. 2014. Vaccines are not associated with autism: an evidence-based meta-analysis of case-control and cohort studies. Vaccine 32:3623–3629.
  • Whitney et al. 2014. Benefits from immunization during the vaccines for children program era — United States, 1994–2013. Morbidity and Mortality Weekly Report 63:352–355.
  • WHO. 2016. Measles. Accessed 20-Aug-16.

4 meaningless words: toxin, natural, organic, and GMO

News articles and blog posts are often full of buzzwords that are heavy on emotional impact but light on substance, and for scientific topics such as nutrition, health, medicine, and agriculture, four of the most common buzzwords are “toxins,” “natural,” “organic,” and “GMO.” These words are used prolifically and are typically stated with clear implications (“toxin” and “GMO” = bad; “natural” and “organic” = good). The problem is that these words are poorly defined and constantly misused. Their definitions are so arbitrary and amorphous that they are functionally meaningless. In other words, they do not add anything useful to a discussion without first attaching a list of qualifiers to them. They are often used in a way that shifts them into the category of what are referred to as “weasel words,” meaning that their use gives the impression that the author said something concrete and meaningful, when in fact the statement was a null sentence that lacked any real substance. The point of this post is, therefore, to dissect the usage of these words, and explain why they are so lacking in meaning as well as why their usage is often misleading and deceptive.

To be clear, when I say that a word is meaningless, I mean that the word is functionally meaningless, rather than semantically meaningless (i.e., the word may have a clear definition, but that definition is so arbitrary that using the word does not add any meaningful content to a statement, and it is not useful for making decisions or assessing the risks and benefits of the thing being described).

Note: Yes, I know that “GMO” is technically an initialism, not a word. Deal with it.


“Toxins”
Our society seems to be obsessed with “toxins.” The internet is full of purveyors of woo selling everything from expensive fruit cleanses to “earthing” mats, all with the intended purpose of ridding your body of vaguely defined “toxins.” The problem is simply that there is no such thing as a “toxin.” All matter is made of chemicals (excluding subatomic particles for a minute), and essentially all chemicals are safe at a low enough dose and toxic at a high enough dose (i.e., the dose makes the poison). So there are toxic doses, not toxic chemicals. Even water becomes lethally toxic at a high enough dose (Garigan and Ristedt 1999). So this idea that something is going to rid your body of “toxins” doesn’t make any sense, because the chemicals themselves are not “toxins,” and they only become toxic at a high enough dose. Take formaldehyde, for example. I often hear people talk about it as a “toxin,” but the reality is that it is an inevitable by-product of normal biological processes. So not only is it in many fruits and vegetables, but it is actually produced by your body! The chemical itself is not dangerous, but it can become dangerous at a high enough dose.

To be clear, I’m not saying that we shouldn’t pay attention to what we put into our bodies. Of course we should, but we need to evaluate chemicals based on the dose at which they become toxic, not simply based on whether or not they are present. Further, we need to always carefully define exactly which chemicals we are talking about, rather than using the vague and amorphous term “toxin,” because any chemical will become a “toxin” at a high enough dose.

 Side note: Your body is wonderfully equipped with a liver and kidneys which do a fantastic job of “detoxing” you and keeping chemicals at a safe level. “Detox” supplements/treatments are total nonsense.

Addendum (16-Aug-16): It should be noted that the word “toxin” does have an actual biological meaning in the context of chemicals that are released by microscopic organisms. These are often toxic to individual cells at incredibly low doses because a cell itself is so small. So when I talked about “toxins” in the post, I was referring to the notion that certain chemicals are automatically dangerous for you as an organism, rather than on a cell-by-cell basis.

 

 “Natural”
The definition of “natural” seems obvious: “found in nature.” But that’s actually a lot more ambiguous and arbitrary than it sounds. First, let’s deal with why this definition is arbitrary, and the best way to explain that is by talking about chemical compounds. Everything around you is made of chemicals (including you), and in chemistry, a compound is simply the combination of two or more different elements. So most of the things that are around you are in fact chemical compounds (there are several thousand compounds that make up your body, for example). Now, many people like to distinguish between “natural” and “synthetic” chemicals, where “natural” chemicals can be found in nature, while “synthetic” ones were produced in a lab, but that distinction is arbitrary. A chemical is a chemical, and on a molecular level, there is nothing that separates natural and synthetic chemicals. All chemical compounds are made by stringing different elements together, and there is no inherent difference between nature stringing elements together and scientists stringing elements together. We can make acids in the lab and you can find acids in nature, we can make chemicals that are poisonous at anything but a low dose in the lab, and you can find chemicals that are poisonous at anything but a low dose in nature, etc. The fact that something was synthesized in a lab doesn’t make it any more dangerous or any safer than a chemical that was found in nature. Indeed, saying that a given chemical is “natural” tells you absolutely nothing useful about the toxicity of that chemical, which is why I am arguing that the term is meaningless. Saying “chemical X is natural but chemical Y is synthetic” doesn’t tell me anything about the properties of those two chemicals. After all, nature is full of chemicals that are toxic at anything but a very low dose (cyanide and arsenic come to mind).

Consider, for example, acetylsalicylic acid and salicylic acid. One of those is natural and the other is synthetic. Can you tell which? No, and neither could a chemist. If you showed those two molecules to a chemist who had no prior information about them, there is no way she could tell you which was natural and which was synthetic, because that distinction is arbitrary. In all likelihood though, she would know which is which, because these are two very well-known compounds. Salicylic acid is the compound in willow bark that gives it medicinal value, and acetylsalicylic acid is the synthetic version of it that we all know as aspirin. Further, we switched to the synthetic version largely because straight salicylic acid has a lot of unpleasant side effects like gastrointestinal problems (Hedner and Everts 1997). To be clear, aspirin has side effects as well (as do all chemicals), but they tend to be less severe, and the point is, once again, that simply being natural doesn’t automatically make something better. Indeed, asserting that something is better because it is natural is a logical fallacy known as an appeal to nature.


Almost none of our crops are truly natural in that most of them aren’t actually found in nature in their present form.

Moving beyond the arbitrariness of what is natural, the typical definition of “found in nature” doesn’t apply to some things that most people would intuitively think of as natural. Take apples, for example. They’re natural, right? Not so much. The fruit that we know as an apple does not grow in nature. As I will talk about more later, essentially all of our crops have been modified by thousands of years of careful breeding, so, technically speaking, they aren’t natural.

The situation is even more problematic when we talk about actions rather than objects. People often say things like, “we should do X, because X is natural,” but what on earth does that mean? Generally, I hear people say that it means what our ancestors did, but that raises the obvious question of how far back do we have to go for something to be natural? Are we talking about 200 years ago? 1,000 years ago? 10,000 years ago? etc. This definition is horribly ambiguous.

To get around this problem, some people say that natural actions are those that are found in the animal kingdom, but that is also an extremely problematic definition for a number of reasons. First, how widespread does a behavior need to be in the animal kingdom? Is it simply required to find one animal that does it? Further, there are lots of human actions that most people think of as natural, even though other animals don’t do them. For example, we cook our food. Does that make cooking unnatural? Finally, this definition is fundamentally flawed because we are just highly evolved animals, so doesn’t that make everything that we do natural? Actually think about this for a second. I think that we can all agree that structures like bird nests and beaver dams are natural, but those are not structures that just form spontaneously in nature. Rather, they are carefully and deliberately constructed by an animal who uses materials to make them. Nevertheless, if I make a wooden table, most people would agree that the table is unnatural, but how on earth is that any different from a beaver dam? The beaver is an animal that took materials found in nature and combined them to make a new structure, and I am an animal that took materials found in nature and combined them to make a new structure. What’s the difference? Further, we can logically extend this to all human structures. When you get right down to it, all of the parts of a skyscraper came from nature, and there is no logical reason to say that a beaver combining sticks and mud to make a dam is natural but me combining two metals to make steel is unnatural. Again, the definition of natural is completely arbitrary and functionally meaningless.

 

“GMO”
GMO stands for “Genetically Modified Organism,” and you may think that this has a very clear and precise definition…but it really doesn’t. Before reading the rest of this, try to come up with a definition of it yourself, then see how that definition holds up.

The most general line of thought would be that a GMO is exactly what it says: “an organism whose genes have been altered,” but that definition is much too broad. Every living organism has a genetic code that has been altered from its ancestral state by millions of years of evolution. If you really think about it, we are all just heavily modified cyanobacteria (cyanobacteria [or some similar organisms] were most likely the first living cells).

Now you may think that I am stretching things a bit here, and perhaps I am, but “nature” does all sorts of crazy things like hybridizing species (as plants do frequently) and even stealing the DNA from one organism and inserting it into the genetic code of another. For example, at some point in the evolution of the sweet potato, it managed to modify its genetic code by inserting bacterial genes into its DNA. In other words, it is a transgenic species whose genetic code is a combination of the genes of several species. Shouldn’t that make it a GMO? Further, this is not limited to sweet potatoes, because bacteria themselves are well known for their ability to incorporate the DNA of other species into their own genomes. So nature is constantly doing the types of things that most people would associate with GMOs, and foods like sweet potatoes really are transgenic species.


The corn that we eat is quite different from wild corn (teosinte). Our crops have been genetically modified via thousands of years of careful breeding, and the fruits, vegetables, and animals that we eat today contain novel genetic codes that are not found in nature. Image via mentalfloss.com.

Nevertheless, you can try to qualify the term GMO by saying that GMOs are “organisms that have been genetically modified by humans,” but that definition is also fraught with problems. Beyond the fact that it is totally arbitrary (see the “natural” section), it also would encompass all modern agriculture. Those delicious fruits that you know as watermelons don’t exist in nature (at least not in their current form). Similarly, natural bananas are small and full of giant seeds, and wild corn does not produce those nice juicy husks that you slather in butter and salt. Both our livestock and crops have been genetically modified through years of selective breeding, and they contain genetic codes that aren’t found in nature.

At this point, people often try to add something about moving genes between species, but that just creates more problems. First, I’ve already shown that nature does that as well, so we have to keep that arbitrary “man did it” qualifier. Second, that would also include lots of “non-GMO” crops such as pluots, plumcots, tangelos, etc. all of which are hybrids that used selective breeding to combine the DNA of two different species. Third, this definition does not include all of the crops that are typically described as “GMOs.” Indeed, GMOs that take the DNA of one organism and put it into another (a.k.a. transgenic GMOs) are actually only one type of GMO. There are others that simply modify the existing genome (i.e., they activate, deactivate, or alter genes that are already present).

Given the problems with that definition, you might try defining a GMO as an organism that is “modified by humans via a method other than selective breeding,” but that definition includes mutation breeding, which is typically not considered to be a GMO. This method uses chemicals or UV radiation to randomly mutate organisms’ DNA in order to produce new and useful traits (i.e., it makes genetic modifications via inducing mutations). However, this method typically does not receive the label “GMO,” and in some cases, even farms that label themselves as “organic” can use crops that were produced by this method.

This leaves us with the outrageous definition that a GMO is “an organism whose DNA was modified by humans via a method other than selective breeding or mutation breeding,” but at that point we have tacked so many arbitrary qualifiers onto the term that the term itself is essentially meaningless. To put this another way, our “non-GMO” agricultural practices constantly make genetic modifications, including swapping genes between species and randomly mutating DNA, and the techniques that we label as “genetic engineering” are only different in that they are faster and more precise than the other methods. Therefore, there is no good reason to talk about GMOs as if they are fundamentally different from the other methods, because the definition of a GMO is completely arbitrary.

Note: You may be tempted to say that a GMO is simply, “an organism that was produced by genetic engineering” but that doesn’t really solve the problem, because you then need a non-arbitrary definition for GE which excludes mutation breeding, selective breeding, and what happens in nature, without excluding any crops that are typically thought of as GMOs. So you still have the exact same problem.


“Organic”
Finally, let’s talk about the term “organic.” This is perhaps the greatest marketing term ever coined, and the problem with it is not that a definition doesn’t exist, but rather that the definition is arbitrary and most people don’t use it correctly (to be clear, I am talking specifically about organic farming practices, not organic chemistry or the general use of “organic” to mean something that formed of its own accord without intervention).

Here is a question for you: true or false, organic farming doesn’t use pesticides? A very large portion of people think that the answer is “true,” but it is actually “false.” Organic farmers absolutely use pesticides, and many of those pesticides are toxic at doses comparable to the pesticides used in traditional farming. Indeed, organic pesticides can harm wild species, pollute waterways, and do all of the other harmful things that traditional pesticides can do (Bahlai et al. 2010). In fact, one of the most common organic pesticides is “Bacillus thuringiensis (Bt) toxin,” which is the exact same chemical that GMO corn produces (i.e., Bt maize). So on the one hand, organic farmers use Bt liberally, and on the other hand, they demonize corn that produces Bt. Are you starting to see why this is arbitrary (dare I say hypocritical)?

So if organic crops use potentially dangerous pesticides just as much as traditional crops, then what exactly does it take for a crop to be considered organic? Generally speaking, it has to be grown without synthetic pesticides (“natural” ones are fine) and without the use of GMOs (some countries place additional requirements, like no petroleum-based fertilizers). If you have read the rest of this post, then you should know where I am about to go with this. I have already shown that the distinction between “natural” and “synthetic” chemicals is arbitrary and that all chemicals are safe at a low dose and toxic at a high enough dose, and I have shown that what we consider to be a GMO is really arbitrary. So, since the definition of organic relies on those other terms, the “organic” label is itself arbitrary. To put this another way, organic crops are not automatically healthier or more nutritious than traditional crops. Indeed, reviews of the literature have been unable to find consistent and compelling evidence that organic food is healthier (Smith-Spangler et al. 2012; Galgano et al. 2015).

Now, at this point, you may be thinking that even if organic crops aren’t healthier, surely they are better for the environment. However, that is also a misconception. Some practices that are typically associated with organic farming are better for the environment, but those practices are sometimes used in non-organic farming as well, and organic farming has serious drawbacks, such as the fact that it often uses far more land and resources than traditional farming (Tuomisto et al. 2012). As a result, you can’t make a blanket statement like, “organic farming is better for the environment,” because in many cases it isn’t.

The point is that simply saying that something is “organic” doesn’t actually tell you anything useful about how healthy it is or whether or not it was grown in a sustainable way. You need to actually look at its nutrient content, what chemicals were used to grow it, how much chemical residue is present and at what dose those chemicals become toxic, how much land was used to grow the crop, etc. In other words, if all that you say is, “crop A is organic and crop B is non-organic,” then you have not provided any useful information for deciding which is better. To do that, you have to say, “crop A is organic and contains X nutrients, was grown on Y amount of land, was sprayed with Z chemicals, etc.; whereas crop B is non-organic and contains X’ nutrients, was grown on Y’ amount of land, was sprayed with Z’ chemicals, etc.” You can, however, say the exact same thing without including the words “organic” and “non-organic,” and you won’t lose any useful information. In other words, it’s the X, Y, and Z bits that matter; the label “organic” is completely irrelevant.

Conclusion
In short, all four of these terms are either ambiguous or arbitrary, and they really don’t give you much in the way of useful information. Everything becomes a toxin at a high enough dose, natural chemicals can be just as dangerous as synthetic chemicals, virtually all of our food has been genetically modified, and organic crops are not inherently healthier or better for the environment than traditional foods. None of these words actually indicate the health or quality of the products that they describe, and you always need to look at additional information like the dose of a chemical contained in the product, the dose at which that chemical becomes toxic, etc. Simply saying that something is “natural” or “organic” or “a GMO” or “contains a toxin” isn’t useful because that doesn’t describe anything meaningful about that product.

Related posts

Citations

Bahlai et al. 2010. Choosing organic pesticides over synthetic pesticides may not effectively mitigate environmental risk in soybeans. PLoS ONE 5:e11250.

Doucleff. 2015. Natural GMO? Sweet potato genetically modified 8,000 years ago. NPR: Food and Culture

Garigan and Ristedt 1999. Death from hyponatremia as a result of acute water intoxication in an Army basic trainee. Military Medicine 164:234–238.

Galgano et al. 2015. Conventional and organic foods: A comparison focused on animal products. Cogent Food and Agriculture 2:1142818.

Hedner and Everts 1997. The early clinical history of salicylates in rheumatology and pain. Clinical Rheumatology 17:17–25.

Ruishalme. 2015. Natural assumptions. Thoughtscapism.com. Accessed 15-Aug-16

Smith-Spangler et al. 2012. Are organic foods safer or healthier than conventional alternatives? A systematic review. Annals of Internal Medicine 157:348–366.

Tuomisto et al. 2012. Does organic farming reduce environmental impacts? A meta-analysis. Journal of Environmental Management 112:309–320.

Wilcox. 2011. Mythbusting 101: Organic farming > conventional agriculture. Scientific American.

Posted in GMO, Rules of Logic

Evolutionary mechanisms part 6: Genetic drift

Many people are under the false impression that evolution is always beneficial, but it can actually be quite harmful. This is especially true for evolution via genetic drift. Like natural selection, genetic drift removes variation from a population, but unlike selection, it is random and it can remove beneficial traits. Indeed, we often become so focused on selection that we assume that all of the features that we see in organisms were selected because of some beneficial role, but that is not true. Some of those features arose via the random process of genetic drift, and at times this process can even swamp natural selection and cause harmful traits to rise to prominence. In other words, evolution via genetic drift can actually be very detrimental to populations. So in this post, I want to talk about what this mechanism is, how it works, and how it interacts with the other evolutionary mechanisms.

What is genetic drift?
In its simplest terms, genetic drift is just a random change in the allele frequencies of a population over time. This is in stark contrast to natural selection, in which the change in allele frequencies is not random (it is a common misconception that natural selection is random). Remember, evolution is simply any change in a population’s allele frequencies over time, so the key defining feature of genetic drift is that this change is random. The obvious question, then, is what causes allele frequencies to change randomly? There are several answers to that question, but the classic answer (and indeed the one that is constantly at play) is the random nature of independent assortment.

If you think back to high school biology, you have two copies of your chromosomes, one of which you received from your mom, and one of which you received from your dad. Further, these chromosomes are arranged into pairs (i.e., you got one copy of chromosome #1 from mom, and one copy of chromosome #1 from dad). Additionally, you have two copies of each gene (each copy is known as an allele), and each copy is on a different chromosome in the pair (one from mom and one from dad). So, for example, if you have the blood type AB and your mom is AA and your dad is BB, then that means that you received a chromosome with the A allele from your mom, while the other chromosome in that pair came from your dad and contained the B allele. Now, every time that you produce an egg or a sperm cell, you will only pass on half of your chromosomes (one chromosome from each pair). So each individual sperm or egg will receive the A allele or the B allele, but not both. Importantly, which allele you pass to a given egg/sperm is completely random and is not affected by which other chromosomes you pass on. This is what we call independent assortment, and it is a key source of genetic drift. (Technical note: if you want to be more precise, independent assortment refers to the random arrangement of homologous chromosomes along the metaphase plate during meiosis I, but the key point is simply that which member of a chromosome pair gets passed on is random.)

This is a simple illustration of the coin simulation. Each circle is an allele, and each pair is an individual. The lines show which allele gets passed. So the individual on the left in generation 1 happened to pass an H to one offspring and a T to the other, whereas the individual on the right randomly passed an H to each. As a result, the individual on the left in the second generation had two Hs and could not pass a T. The individual on the right could have passed a T, but two Hs were randomly selected instead, so by the third generation, H had become fixed, and the T allele was lost.


Because which chromosomes you pass on is completely random, the allele frequencies can shift over time if, just by chance, one chromosome happens to get passed on more than the other. I’m going to use a simulator to illustrate this in a minute, but for now, let me illustrate with a coin (if you have one handy, please grab a piece of paper and try this yourself). First, assume that you have two individuals, each of which is a heterozygote, meaning that they have one copy of each allele (which in this case will be heads and tails). Now, to make the first offspring, flip the coin twice, and whatever it lands on, that will be your first offspring. The first coin flip represents the allele from parent 1 (i.e., there is a 50% chance of passing H and a 50% chance of passing T, just like in independent assortment), and the second flip represents the allele from parent 2. Now, do it again to make a second offspring. There are only six possible outcomes: both TT, both HH, both HT, one HH and one TT, one HT and one TT, one HT and one HH (TH and HT are the same). Write down your first two offspring, then “mate” them. If both of your offspring are HT, then the procedure is identical to what you did before; however, if one of them is a homozygote (i.e., only has one type of allele), then it can only pass on that allele (i.e., a TT can only pass T and an HH can only pass H, so if your offspring are TT and HH, then both grandchildren will be TH). Make two more children in this manner (e.g., if you have an HH and an HT, then each of your two new individuals will receive an H from the HH parent and either an H or a T from the HT parent; flip the coin to decide which). Keep flipping your coin and making new generations like this, and you will pretty quickly get to a point where one of your alleles disappears and all that you have is TT or HH.
At that point, we say that the remaining allele is “fixed,” meaning that the other allele has been lost and every individual in the population is now homozygous for the surviving allele (i.e., has two copies of it). The first time that I tried this, it only took five generations for one allele to become fixed, but if you do it numerous times, you’ll notice that the number of generations until fixation varies, and if you do it enough times, 50% of the time heads will become fixed and 50% of the time tails will become fixed.
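For readers who prefer code to coins, the exercise above can be automated. This is a minimal sketch of my own (not the simulator used for the figures in this post): two heterozygous parents, two offspring per generation, with each transmitted allele chosen at random.

```python
import random

def simulate_coin_drift(seed=None):
    """Automate the coin-flip exercise: each generation, two offspring each
    receive one randomly chosen allele from each of the two parents.
    Returns (generations until fixation, which allele fixed)."""
    rng = random.Random(seed)
    parents = [("H", "T"), ("H", "T")]  # two heterozygous individuals
    generation = 0
    while True:
        alleles = [a for individual in parents for a in individual]
        if len(set(alleles)) == 1:  # only one allele left: it is fixed
            return generation, alleles[0]
        generation += 1
        parents = [(rng.choice(parents[0]), rng.choice(parents[1]))
                   for _ in range(2)]

# Run many replicates: fixation happens quickly, and H and T each
# become fixed roughly half of the time.
results = [simulate_coin_drift(seed=i) for i in range(1000)]
h_fixed = sum(1 for _, allele in results if allele == "H")
print(h_fixed / 1000)  # typically a value near 0.5
```

Because the starting frequencies are equal, neither allele has any advantage; which one fixes in any given replicate is pure chance, exactly as the coin exercise shows.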


This shows a population of 100 individuals evolving over time via genetic drift

What you have just done is simulate genetic drift. That’s really all that it is. Random chance produces slight variations in allele frequencies until one allele eventually becomes fixed. The image on the left illustrates the situation more clearly. It shows genetic drift in a population of 100 individuals. As you can see, the allele frequency randomly oscillates up and down until eventually the dominant allele becomes fixed.

Effects of population size
If you think about the math behind genetic drift, you would intuitively expect drift to be more severe when population sizes are small, and indeed that is exactly what happens in nature. In fact, at very large population sizes, we expect genetic drift to have a relatively small effect. To illustrate this, I used a simulator to run four different scenarios (illustrated below). In each scenario, I simulated 10 populations that consisted of 2, 10, 100, or 1000 individuals, and each population started with even allele frequencies for the gene being simulated (i.e., half of the alleles were dominant and half were recessive). As you can see, when there were only 2 or 10 individuals in the populations, alleles became fixed very rapidly (the simulations with 2 individuals are the same thing as what you did with the coin). When the population size jumps up to 100, however, things are more stable, but by the end of the 50 generations shown, you can see that some populations are trending towards having fixed alleles, and indeed, after a few hundred generations, all of the populations became fixed (not shown). Finally, the populations with 1000 individuals retained fairly stable allele frequencies, but even in those cases, alleles will eventually become fixed unless they are acted upon by another evolutionary force. In fact, in natural populations, those other mechanisms (particularly gene flow) do often interact with genetic drift and prevent the fixation of alleles in large populations.
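The population-size effect can be reproduced with a simple sketch of random allele sampling (a hypothetical Wright-Fisher-style model of my own, not the simulator that generated the figures). Each generation, every one of the 2N allele copies in the new generation is an independent random draw from the previous generation’s allele pool:

```python
import random

def generations_to_fixation(pop_size, rng):
    """Track the copy count of one allele in a diploid population of
    `pop_size` individuals (2N allele copies), starting at 50% frequency,
    until that allele is either fixed or lost."""
    n_alleles = 2 * pop_size
    count = n_alleles // 2  # start with even allele frequencies
    generations = 0
    while 0 < count < n_alleles:  # stop once one allele is fixed
        p = count / n_alleles
        # Each allele copy in the next generation is an independent
        # random draw from the current generation's allele pool.
        count = sum(1 for _ in range(n_alleles) if rng.random() < p)
        generations += 1
    return generations

rng = random.Random(42)
for n in (2, 10, 100):
    mean_gens = sum(generations_to_fixation(n, rng) for _ in range(50)) / 50
    print(n, round(mean_gens, 1))  # smaller populations fix far sooner
```

Running this, the average time to fixation grows rapidly with population size, which is exactly the pattern the four simulated scenarios show: tiny populations lose variation almost immediately, while large ones drift slowly.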

Note: the next post in this series will be entirely devoted to gene flow, so I will talk about how it interacts with genetic drift in detail there.


Each line represents a different population, and as you can see, when those populations are small (red and blue lines) the alleles become fixed quite rapidly.

Interactions with selection
When talking about genetic drift, we are often talking about neutral alleles (i.e., alleles that are neither beneficial nor harmful to individuals); however, drift also occurs for alleles that are not selectively neutral. Imagine a situation where half of the alleles in a population for a given trait are dominant (A) and half are recessive (a); however, the environment changes, and as a result, individuals with a dominant phenotype (i.e., they have at least one dominant allele, so they are either AA or Aa) survive to a reproductive age 100% of the time, whereas individuals with a recessive phenotype (aa) only survive to a reproductive age 90% of the time. In other words, there is selection against the recessive allele, because it reduces an individual’s ability to reproduce when two copies of it are present. Now, you should intuitively expect that natural selection will act on this situation and remove the recessive allele from the population, and when the population is large enough, you would be correct. However, when the population is small, then, depending on the strength of selection, genetic drift can actually overpower selection.

Each red dashed line shows a population with 100 individuals, and each solid light blue line shows a population with 10 individuals. Even though populations that are 0% dominant have lower survival (in these simulations), genetic drift still caused several of the small populations to lose the dominant allele completely.


To illustrate this, consider the figure on the right. I once again used the simulator to simulate 10 populations with 10 individuals and 10 populations with 100 individuals, but this time I set a selection differential so that all of the AA and Aa individuals would survive to a reproductive age, but individuals who had two recessive alleles (aa) only had a 90% chance of surviving. As you can see, in all 10 populations of 100 individuals, selection removed the harmful allele and fixed the beneficial one. In four of the populations with only 10 individuals, however, the populations were so small that genetic drift overpowered selection and actually caused the recessive allele to become fixed! That is a really bad situation because, barring any gene flow or mutations, those populations are now stuck with the harmful allele, and the overall survival of those populations is 10% lower than the survival of populations that managed to rid themselves of the harmful allele.
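The same tug-of-war between drift and selection can be sketched in a few lines (again, a hypothetical model of my own rather than the simulator used for the figure, using the same 90% survival rate for aa individuals described above):

```python
import random

def evolve_with_selection(pop_size, generations, rng, s=0.1):
    """Drift plus viability selection: aa offspring survive to reproduce
    with probability 1 - s. Returns the final frequency of the A allele."""
    pool = ["A"] * pop_size + ["a"] * pop_size  # 2N allele copies, A at 50%
    for _ in range(generations):
        if len(set(pool)) == 1:  # one allele is already fixed
            break
        next_pool = []
        while len(next_pool) < len(pool):
            # Form an offspring from two alleles drawn at random (with
            # replacement) from the parental allele pool.
            geno = (rng.choice(pool), rng.choice(pool))
            if geno == ("a", "a") and rng.random() < s:
                continue  # this aa individual did not survive to reproduce
            next_pool.extend(geno)
        pool = next_pool[: len(pool)]
    return pool.count("A") / len(pool)

rng = random.Random(1)
small = [evolve_with_selection(10, 500, rng) for _ in range(20)]
large = [evolve_with_selection(100, 500, rng) for _ in range(20)]
# Drift can fix the harmful a allele (final A frequency of 0.0) in some
# small populations, while large populations keep or fix the A allele.
print(sum(f == 0.0 for f in small), sum(f == 0.0 for f in large))
```

In repeated runs, the harmful allele essentially never fixes in the populations of 100, but it can fix in populations of 10: with so few allele copies, random sampling swamps the 10% survival disadvantage, just as in the simulations above.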

As I’m sure you can imagine, this has extremely important implications for wildlife conservation efforts, and it is one of the key reasons that conservationists are so concerned with maintaining large numbers of individuals. When populations are small, genetic drift can cause harmful alleles to rise to prominence or even become fixed, and that is a very bad thing for the survival of those populations.

Genetic drift removes variation
It should now be clear that genetic drift removes variation from populations. Indeed, in many of the simulations that I have illustrated, you can see that one of the alleles became fixed while the other was lost from the population. This is important because it means that, just like selection, genetic drift is constantly causing populations to become less diverse. As a result, populations are heavily reliant on gene flow from neighboring populations (which can restock their gene pool with alleles that they had lost) as well as mutations (which are the only mechanism capable of creating new variation).

Even when the alleles that are lost are neutral, this is often a serious problem for populations in the long run. Remember, selection simply adapts organisms to their current environment, so although an allele may be neutral at the moment, it may become very important if the environment changes. Indeed, a high level of genetic variation is one of the key factors for determining whether or not a population will survive changes in the environment, the introduction of a new disease or predator, etc.

Bottlenecks and founder events
Now that you understand the basics of genetic drift, I want to introduce you to two final concepts. The first of these is a genetic bottleneck. Bottlenecks are often considered to be a type of genetic drift, but they act a bit differently from the type of genetic drift that we have been talking about so far. In a bottleneck, a large number of individuals is rapidly lost, and as a result, the genetic variation is reduced to a small subset of what it was before. Imagine, for example, that there is a large population of frogs living at the base of a volcano, and 80% of the frogs contain an allele for green pigment while 20% contain an allele for brown pigment. Then, the volcano erupts and kills off 90% of the frogs. Further, just by chance, all of the frogs with the allele for brown pigment were killed during the eruption. This is clearly a case of instantaneous evolution by genetic drift, because evolution is a change in allele frequencies, and the allele frequencies changed from 20% and 80% to 0% and 100%. Additionally, this would likely represent a genetic bottleneck, because it is likely that many other alleles were lost as well. Indeed, one of the characteristics of a bottleneck is the loss of many rare alleles. Finally, the effects of a bottleneck are largely determined by how many generations it lasts (i.e., how long the population remains small) because, as you recall, small populations have greater genetic drift. So even alleles that survived the volcanic eruption may quickly be lost due to drift if the population does not grow rapidly.
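To get a feel for how a random die-off can eliminate a rare allele outright, here is a small sketch (hypothetical numbers of my own; each frog is treated as carrying a single pigment allele for simplicity, and the survivor count is kept deliberately small so that the loss of the rarer allele is easy to observe):

```python
import random

rng = random.Random(7)
# A population of 1,000 frogs: 80% carry the green-pigment allele and
# 20% carry the brown-pigment allele (hypothetical numbers).
frogs = ["green"] * 800 + ["brown"] * 200

trials = 1000
losses = 0
for _ in range(trials):
    # A random die-off leaves only 10 survivors, chosen without any
    # regard to which allele each frog carries.
    survivors = rng.sample(frogs, 10)
    if "brown" not in survivors:
        losses += 1
print(losses / trials)  # the chance the brown allele is lost outright
```

Even though the survivors are chosen completely at random, the rarer allele vanishes in a noticeable fraction of the trials, and the smaller the surviving group, the more often that happens.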

At this point you may be wondering when it is appropriate to talk about a bottleneck as a type of genetic drift, and that is honestly something of a grey area, with some people/books preferring to entirely separate the two, while others lump them together. I personally think that the best way to think about this is to remember that genetic drift is a random change in allele frequencies. So, if the bottleneck killed individuals randomly (i.e., no individuals had alleles that made them more likely to survive the eruption), then it is fine to talk about it as a type of genetic drift. However, if something kills many individuals but does so by selecting its victims, then it should really be thought of as a natural selection event. For example, if a disease outbreak kills off 90% of individuals, and only the 10% of individuals that had alleles that made them resistant to the disease survived, then that likely would be a genetic bottleneck, but it would not be genetic drift, because the survivors were selected rather than being chosen randomly. Genetic drift could, however, come into play in the following generations if the populations do not recover rapidly enough (i.e., they remain small).

On a side note, the recovery of populations from disease outbreaks and whether or not the outbreaks caused genetic bottlenecks is actually a key focus of my current research.

A founder effect is really just a special type of bottleneck, and it occurs when a new population is formed from a subset of the original population (i.e., the new population only contains a small portion of the genetic variation found in the original population). For example, let’s go back to our population of brown and green frogs, but this time, instead of a volcano, imagine that they live on the edge of a lake, and one day a storm blows 10 of them out to a previously uninhabited island. However, all 10 of the individuals that form the new population on the island carry only the allele for brown pigment. We would describe this as a founder event, because the new population is limited to the genetic material contained in the individuals that founded it (e.g., the green allele is not present in this new population), but it often also represents a bottleneck, because these new populations usually contain only a small portion of the variation that was in the source population.

Conclusion
In summary, genetic drift is simply an evolutionary mechanism that causes random changes in allele frequencies over time. It is most powerful when population sizes are small, and in some situations it can actually cause harmful alleles to become fixed in a population. As a result, it is a major concern for conservation efforts, and it is one of the reasons that conservation biologists place a high priority on maintaining large populations.

Other posts in this series


Posted in Science of Evolution

Are scientists arrogant, close-minded, and dismissive?

I spend a lot of time debating people who reject science, and I have observed a common tendency for people to try to dismiss scientific results by attacking scientists with accusations that they are “arrogant,” “close-minded,” and “dismissive” and don’t “take people’s concerns/anecdotes seriously or engage the public.” It’s important to realize that all of these accusations are simply ad hominem fallacies, and even if they were true, that wouldn’t make the scientists’ results any less accurate. In most cases, however, they aren’t even true, and they are actually reflective of the people making the accusations, rather than the scientists themselves. So I want to consider each of these claims and see whether or not they actually apply to most scientists. I am going to try to deal with each accusation more or less separately, but there will inevitably be some overlap, and, indeed, each section builds on and connects to the other sections. I am especially going to focus on the claim that scientists are arrogant and pretentious, because the other claims are really built on that one.

Note: To be clear, I am not suggesting that no scientists are arrogant, close-minded, etc. Obviously there will be a few bad apples in any group. However, the accusation is that scientists in general display these qualities, and that is what I am taking issue with.

Are scientists arrogant?
Let’s start with the accusation that scientists are arrogant, elitist, pretentious, etc. The Merriam-Webster dictionary defines “arrogant” as:

“having or showing the insulting attitude of people who believe that they are better, smarter, or more important than other people.”

There are several important things to note about this definition. First, simply insisting that you are right and someone else is wrong is not necessarily arrogant. It only becomes arrogant when it is done in a haughty “better than thou” manner or when the assertion is based on an unrealistic assessment of one’s own skills, knowledge, and ability. That last part is really important. It is not arrogant for someone who is highly trained and has lots of experience to think that he/she knows more about the topic than someone with no training or experience on that topic.

In most areas of life, people have no problems deferring to experts, but for some reason, when it comes to science, people view expertise as a bad thing.


Let me give several examples to demonstrate what I mean by that. First, imagine that someone who has never taken an engineering course and has no relevant experience reads several blogs and comes to the conclusion that a particular bridge is unsafe. However, multiple professional engineers (each of whom went through several years of training to earn advanced degrees and have subsequently spent years working as an engineer) carefully examine the bridge, examine the arguments made by those who are concerned, and conclude that the bridge is safe. Are those engineers being arrogant? Is it presumptuous of them to “assume” that their advanced degrees and years of experience have made them more qualified than a bunch of bloggers to assess the safety of the bridge? Obviously it isn’t. We expect that people with that type of training and experience will know more than the average person. That’s why we have the word “professional.” It is insane to think that reading a few blogs is worth more than a degree from MIT.

Similarly, imagine that someone who has never even sat in the cockpit of a plane boards a commercial airliner and proudly proclaims that he is more qualified than the pilot because he has “done his homework” on the internet and logged lots of hours on Xbox air-combat games. This man then proceeds to lecture the pilot on everything that he/she is doing “wrong.” Would it be arrogant of the pilot to ignore him? Obviously not. No one on that plane would be OK with that man taking over for the pilot (except the man himself, of course), and everyone would agree that the man in question is being arrogant and foolish and needs to take his seat and shut up.

I can, of course, give countless examples like this that everyone would agree with. No one would argue that a neurosurgeon is arrogant for not taking surgical advice from unqualified family members who got their surgical “training” from YouTube. No one would accuse a professional mechanic of arrogance for ignoring a customer who doesn’t know the difference between a wrench and a screwdriver. No one would say that a lawyer is arrogant for thinking that they know more about the legal system than someone who has never set foot in a courtroom or opened a law book. Yet for some reason, when it comes to science and some fields of medicine, people feel entitled to think that they are experts. They actually seem to think that Google is equivalent to an advanced degree.

When you think about this, it is ludicrous. Becoming a professional scientist takes, on average, four years of undergraduate studies (note: science majors are usually rated among the most difficult/time consuming), 6–10 years of intense graduate training (most grad students work/study 60+ hours a week and rarely take holidays), and several years of doing a post-doc. Further, after all of that training, you spend your life actually doing science, which means that you are constantly gaining experience and new knowledge. The idea that reading a bunch of blogs and non-academic books will put you on par with that type of training and experience is the epitome of arrogance and hubris. It is just about the most pretentious thing that I can think of. Of course scientists know more about science than the average person on the street, just as plumbers know more about plumbing than the average person, and mechanics know more about cars than the average person. We intuitively expect that anyone who goes through that type of training will be extremely knowledgeable.

What I have been describing here is, of course, a well-established phenomenon known as the Dunning-Kruger effect. In short, people who are unskilled/unqualified tend to grossly overestimate their own abilities/knowledge, whereas people who have the proper training/experience tend to have a more accurate view of their abilities or even underestimate them. In other words, statistically speaking, it is the untrained people who tend to be arrogant about their abilities, not the highly trained scientists. To be clear, scientists certainly can still have unrealistic views of their own knowledge, and the first rule of the Dunning-Kruger effect is that you don’t know if you are succumbing to it. So I am not suggesting that scientists are immune to this problem, but it is true that, on average, people without scientific training have a more unrealistic view of their scientific knowledge/abilities than actual scientists do.

To be clear, it is entirely possible to be highly trained and still arrogant. If, for example, a scientist said, “I am a scientist; therefore, I am smarter and better than you,” that would be arrogance, but that is not usually what I see occurring. Rather, what I generally see is that people accuse scientists of being arrogant simply because the scientists won’t accept their pseudoscience. For example, I frequently see a situation where an anti-vaccer makes an utterly ridiculous claim, and a scientist carefully and politely explains why that argument is unscientific, at which point the anti-vaccer says something to the effect of, “well, you’re just being arrogant and think that you know more than everyone else.” It is not arrogant to politely correct errors and debunk bad arguments.

Similarly, I often see people make the bizarre claim that scientists are arrogant because they think that they understand some very well-studied phenomena. For example, on multiple occasions I have heard someone say that scientists who accept climate change are arrogant for thinking that they understand how the climate works. How is it arrogant to think that thousands of studies have allowed us to understand something? It is arrogant to say, based on no evidence whatsoever, that all of those studies are wrong, but simply accepting the results of a massive body of research is in no way, shape, or form arrogant. Similarly, it is not arrogant of doctors to think that vaccines are safe, because vaccines have been so thoroughly tested.

In short, it is not arrogant for a highly qualified and experienced expert to think that they know more about their area of expertise than someone who has no training or experience in that area, nor is it arrogant to think that thousands of careful studies have produced reliable results. It is, however, arrogant to think that reading a few blogs or sitting around conjecturing puts you on the same level as a highly trained expert. Additionally, experts can certainly be arrogant if they misuse their training/experience, and statements such as, “I’m right because I’m a scientist” certainly display arrogance, but that is not usually the situation that I encounter.

Note: Do not confuse what I am saying here. I am not saying that being an expert automatically makes you right. It clearly doesn’t. Rather, I am talking about a person’s perception of themselves relative to their actual training and experience. Please read this post for an explanation of the difference between deferring to experts and appealing to authority. 

Are scientists close-minded?
At the outset, I want to make it clear that there is an extraordinary difference between being open-minded and being willing to accept utter nonsense. Most scientists are actually open to new ideas. That is, in fact, the reason why science has been able to progress so far. In other words, if scientists were truly close-minded and refused to consider any idea other than the “dogma” of their fields, then our scientific knowledge wouldn’t have changed in decades. It has changed, however, because science is inherently a process of discrediting old ideas and replacing them with better ideas. So most scientists are open to opposing ideas, but those ideas have to be based on good evidence and sound logic. If you want to convince a scientist that they are wrong, then you need to present them with actual high quality evidence (i.e., large properly conducted studies that were published in reputable peer-reviewed journals).

“If you open your mind too much, your brain will fall out” — the title of one of Tim Minchin’s songs

This is the fundamental point that so many people seem to miss: being open-minded means being willing to change your view when presented with high quality evidence. It does not mean being willing to change your view based on anecdotes, blogs, YouTube videos, and hearsay. More often than not, when someone says “open your mind” they really mean “accept something totally ridiculous without any solid evidence to support it.” That’s not being open-minded, that’s being gullible.

Do scientists take parents’/the public’s concerns seriously?
This one is a bit baffling to me, because the answer is so obvious. Yes, scientists absolutely take the public’s concerns seriously! That is why there are over 1,700 studies on GMOs, numerous large studies on vaccines and autism, etc. Nevertheless, I constantly hear people who insist that scientists aren’t looking for the “real” cause of autism, aren’t looking for better cancer treatments, haven’t studied GMOs, aren’t looking at the possibility that global warming is natural, etc., but all of these claims are utter nonsense. They are demonstrably false. For any of these topics, you can find multiple studies addressing those concerns. For example, in this post, I explained why we know that climate change isn’t natural, and I cited multiple studies that have examined that possibility. The topic has been extremely well studied, but I felt compelled to write that post because I encounter so many people who seem to think that scientists have never even bothered to look at the possibility that climate change is natural.

To put it simply, when someone says that scientists are ignoring them, more often than not the problem isn’t actually that scientists are ignoring them; rather, the “problem” is that scientists didn’t find the result that they were hoping for. There are, however, exceptions, which I will discuss in the next section.

A related claim is that scientists don’t engage the public. First, many scientists do in fact engage the public (for example, this blog exists because I am a scientist who thinks that it is worth my time to engage the public). Second, scientists are extremely busy people. We usually work over 60 hours a week, so asking us to add public engagement to our work schedule is not a small request. Third, those of us who do try to engage the public are rewarded with a constant wave of hate and insults. I wake up every day to find new messages telling me what a blind idiot I am for accepting the results of carefully controlled studies. I am constantly accused of being a paid shill, and I have had numerous people tell me that I am an evil, murdering monster for supporting vaccines/GMOs. Given that level of vitriol, is it really surprising that most scientists don’t take time out of their busy schedules to engage the public? Again, most of the people making this claim don’t actually want scientists to engage them; rather, they just want scientists to pander to them and tell them that they are right.

Are scientists dismissive?
The claim that scientists are dismissive is similar to the claim that they don’t take parents/the public seriously, but it has an important difference. Usually, I hear the latter being used to argue that there is a lack of research; whereas I typically hear the claim that scientists are dismissive in association with debates (this is more closely connected to the arrogance claim). In other words, many people argue that when they present scientists with their arguments, the scientists are dismissive of them and ignore their arguments rather than dealing with them. There certainly are many cases where scientists ignore people’s claims/arguments, but that needs to be qualified in several ways.

First, more often than not, this occurs when someone is making a ludicrous claim/argument that is contrary to everything that we know about the universe, and in those cases, ignoring the claim is often the appropriate response. If, for example, someone tries to tell you that rainbows form when unicorns defecate while flying through the air, you are not in any way obliged to take that claim seriously. That is obviously a silly example, but this happens all the time with real arguments that are equally absurd. Quite simply, if you are arguing for a position that has no scientific support and has been refuted by multiple studies and a basic understanding of science, then your opponent is not being “dismissive” by not taking that argument seriously. Rather, they are being rational.

To be clear, I’m not suggesting that it is OK to respond in a way that is haughty or belittling, but you are under no obligation to treat an absurd proposition as if it is a rational one. Additionally, it is always technically possible that numerous studies are wrong or even that our most fundamental concepts about the universe are wrong, so you should be willing to challenge them if presented with proper evidence. If someone comes to you with a stack of legitimate peer-reviewed studies that document the existence of unicorns and their ability to fly and poop out rainbows, as well as explaining why our current understanding of the physics of rainbows is wrong, then, at that point, you are obliged to take the argument seriously. However, if all that they are presenting is blogs, anecdotes, etc., then you absolutely can dismiss their arguments, because the burden of proof is on them to support their position; it isn’t your duty to refute it.

As an illustration of how this typically plays out, consider the meme on the right. It shows the type of dialogue that many people would label as “dismissive” or “arrogant,” and, indeed, when I shared that on my blog’s Facebook page, one person commented with precisely that claim, so let’s examine that comment. First, the scientist in the meme actually asked a question rather than instantly rejecting the argument. If the person supporting the acid diet had responded with, “yes I do understand homeostasis, and here are several recent studies which showed that our previous knowledge was wrong and foods can actually affect the pH of our blood,” then the scientist would have been forced to look at those studies. The person did not say that, however, because those studies don’t exist. Homeostasis is one of the fundamental concepts of physiology. It has been very well studied, and we have a really good understanding of how it works. So claiming that food will shift the pH of our blood isn’t actually that far below unicorns on the absurdity scale. Therefore, ignoring claims that the acid diet works is not being “arrogant” or “dismissive”; it’s being rational.

Conclusion
Although scientists certainly can be arrogant, close-minded, etc., usually when I see people making these accusations they are simply committing ad hominem fallacies. There is nothing arrogant about thinking that years of advanced training and experience make you more knowledgeable than someone who lacks that training and experience. Indeed, in most areas, we readily acknowledge experts and are happy to defer to them. Yet for some bizarre reason, when it comes to science and medicine, unqualified people feel entitled to think of themselves as experts, and that delusion is the truly arrogant one. Similarly, there is nothing dismissive or close-minded about rejecting anecdotes and shoddy arguments. Being open-minded means being willing to change your mind when presented with solid evidence. It does not mean being willing to accept utter nonsense despite a mountain of contrary evidence.

Note: You could argue that this post is actually anecdotal because it is based on my observations of debates rather than actual statistics, and if someone can find a solid peer-reviewed study that showed that scientists have an above-average level of arrogance, then I will happily write an addendum. However, that line of reasoning is really missing the point, because the fundamental question isn’t actually “are scientists arrogant?” Rather, it is, “is it arrogant to think that advanced training and experience make someone more qualified than someone who lacks that training and experience?”



Debunking anti-vaccine arguments: VAERS, package inserts, and the VICP do not prove that vaccines are dangerous

If you spend any time talking to those who oppose vaccines, you will likely hear them cite the Vaccine Adverse Event Reporting System (VAERS), vaccine package inserts, and the national Vaccine Injury Compensation Program (VICP) as evidence that vaccines are dangerous. These three arguments are extremely prevalent among anti-vaccers, and many well-intentioned parents are duped by them. In many ways, this is entirely understandable. I think that most parents who refuse to vaccinate are truly trying to do what is best for their children, and I understand why parents are concerned when they see a long list of symptoms that have been reported after a vaccine. However, if you really want to do what is best for your child, then it is important to fact check, use good sources, and use those sources in the way that they are intended to be used. It is on that last point that these three arguments fail. Indeed, none of these three sources are intended to provide evidence of causation, and anyone who cites them as evidence that vaccines are dangerous is misusing them. As I will explain, the fact that an adverse event was reported in one of these three sources does not actually indicate that vaccines caused that event; therefore, you cannot present these sources as evidence that vaccines are dangerous.

One of the many anti-vaccine double-standards. Image via Refutations to Anti-Vaccine Memes.

Before I talk about these three topics specifically, I want to make some very general comments about anecdotes, because that is what all three of these sources report. As I explained in detail here, anecdotes do not provide evidence of causation. Coincidences do happen, and the fact that two things occurred together does not mean that one caused the other. In fact, it is a logical fallacy to say “X happened before Y, therefore X caused Y” (in technical terms, this is a post hoc ergo propter hoc fallacy). This is especially true with very common events such as vaccines. So many children receive vaccines that, statistically, we expect there to be lots of cases where some form of injury/debility closely follows vaccination just by chance. This is a crucially important point: the fact that an adverse event followed the administration of a vaccine does not mean that the vaccine caused it. This is often difficult for people to accept because our brains are wired to see patterns, but the reality is that we often see patterns that aren’t really there.
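To see the scale of this coincidence problem, consider a quick back-of-envelope calculation. All of the numbers below (SIDS deaths, vaccination schedule, time windows) are rough, assumed figures for illustration only, not official statistics:

```python
# Back-of-envelope estimate: how many SIDS deaths would closely follow
# a vaccination by pure chance? All numbers are illustrative
# assumptions, not official statistics.

sids_deaths_per_year = 1_500    # assumed annual SIDS deaths
vaccine_visits = 3              # assumed infant vaccination visits (e.g., 2, 4, 6 months)
window_days = 14                # "shortly after" = within two weeks of a shot
days_in_first_year = 365

# Fraction of an infant's first year that falls inside a post-vaccine window
fraction_in_window = vaccine_visits * window_days / days_in_first_year

# Expected number of purely coincidental "SIDS after vaccination" cases
expected_coincidences = sids_deaths_per_year * fraction_in_window
print(round(expected_coincidences))  # roughly 170 tragic coincidences per year
```

Even under these rough assumptions, well over a hundred tragic coincidences per year would be expected even if vaccines played no role whatsoever, which is exactly why anecdotal reports cannot establish causation.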

Look, for example, at “lucky” items. Does Bob’s “lucky shirt” really help his favorite sports team win? No, obviously it doesn’t. It’s just a coincidence that the team won several games while he was wearing the shirt. Nevertheless, our brains are wired to see associations even when there aren’t actually causal relationships. Further, we are prone to confirmation biases. For example, every time that Bob’s team wins while he is wearing the shirt, his confidence in the lucky nature of his shirt will grow; whereas if the team loses while wearing the shirt, his brain will tend to downplay that or invent an excuse for why the shirt didn’t work for that particular game.

The exact same thing happens with medicines like vaccines. We are pattern recognition machines. So when we see SIDS, autism, etc. follow a vaccination, our brains latch onto that association, when the reality is that those two events might be total coincidences (just as Bob wearing his shirt and the team winning are coincidences). Further, confirmation biases can really skew things. If, for example, you suspect that a vaccine might be dangerous, it is extremely easy to find lots of cases of parents reporting an adverse event after a vaccination, and that will inevitably bolster your belief, but, because of the way that confirmation biases work, you will tend to ignore all of the times that the given event occurred without a vaccination or didn’t occur following a vaccination. As you will see, this is one of the big problems with VAERS, package inserts, and the VICP. They make it very easy to bolster an existing suspicion, but they are actually only showing you part of the story. They give your brain enough information to form a pattern, but not enough information to form an accurate pattern.

 

Vaccine Adverse Event Reporting System (VAERS)
Let’s begin with the VAERS database. This is a self-reported database of adverse events that followed vaccination. There are several very important things to note there. First, the fact that it is self-reported means that the quality of data is highly variable and often low. Anyone, regardless of medical training, background knowledge, biases, etc. can report an adverse event. So there is very little in the way of quality control on the reported information. To quote the VAERS website, “Reports vary in quality and completeness. They often lack details and sometimes can have information that contains errors.”

Self-reported databases like this are problematic for a number of reasons, not the least of which is recall bias. Our memories are imperfect and easily biased, and we tend to fixate on some events more than others. Imagine, for example, that a child becomes very ill. His/her parents will naturally want to know the cause, and events like vaccinations tend to stand out in our memories. In other words, we naturally focus on them more than on other seemingly less significant events which may actually have been very important. Thus, parents may attribute the illness to a vaccination that happened a month ago simply because they are looking for answers, and that stands out in their memories.

In addition to the problems associated with self-reporting, it is extremely important to realize that VAERS is simply a collection of events that happened after a vaccination. It is not a collection of events that were caused by a vaccination. Again, remember that the fact that event X happened before event Y does not mean that event X caused event Y. So you can’t actually use VAERS as evidence of causation. You don’t, of course, have to take my word for this, because VAERS explicitly says this. If you go to the VAERS data page, the very first sentence says the following (my emphasis):

“When evaluating data from VAERS, it is important to note that for any reported event, no cause-and-effect relationship has been established. Reports of all possible associations between vaccines and adverse events (possible side effects) are filed in VAERS. Therefore, VAERS collects data on any adverse event following vaccination, be it coincidental or truly caused by a vaccine. The report of an adverse event to VAERS is not documentation that a vaccine caused the event.”

Later on that same page they state (again my emphasis):

“A report to VAERS generally does not prove that the identified vaccine(s) caused the adverse event described.  It only confirms that the reported event occurred sometime after vaccine was given. No proof that the event was caused by the vaccine is required in order for VAERS to accept the report. VAERS accepts all reports without judging whether the event was caused by the vaccine.”

It doesn’t get any clearer than that. According to the VAERS website itself, the database does not prove that vaccines were responsible, it does not establish causation, and it is not documentation that vaccines are dangerous. So you absolutely cannot use this database as evidence that vaccines are harmful and are causing injuries. That is a gross misuse of this database.

What this database is useful for, and what it is intended for, is to provide an “early warning system” that identifies potential problems that should be studied. In other words, scientists look at databases like this to identify topics that need to be examined more closely. Then, they do large, controlled studies on those topics to determine whether or not the vaccine is actually causing the problem. I cannot overstate the importance of this point. The database simply identifies topics to be studied, and the actual studies determine whether or not a causal relationship exists. You need to know the rate of the event in question with and without vaccines, while controlling all of the other variables before you can reach a causal conclusion, and VAERS simply does not provide those data.
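As a sketch of that point, the toy calculation below (with entirely made-up numbers) shows what "rate with and without vaccines" means: a causal test compares the event rate in a vaccinated group against an unvaccinated comparison group, and VAERS only ever supplies raw reports for the vaccinated group.

```python
# Toy illustration (made-up numbers): testing for a causal link means
# comparing the event rate WITH the vaccine to the rate WITHOUT it.
# VAERS only supplies raw reports for the vaccinated group.

vaccinated_cases, vaccinated_children = 120, 100_000
unvaccinated_cases, unvaccinated_children = 12, 10_000

rate_vaccinated = vaccinated_cases / vaccinated_children        # 0.0012
rate_unvaccinated = unvaccinated_cases / unvaccinated_children  # 0.0012

# A relative risk of 1.0 means the event is no more common after
# vaccination than without it, i.e., no evidence of causation.
relative_risk = rate_vaccinated / rate_unvaccinated
print(relative_risk)  # 1.0
```

In this hypothetical, 120 reports "after vaccination" sound alarming on their own, but the comparison rate shows the event is exactly as common without the vaccine.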

Finally, it is worth mentioning that a number of anti-vaccine scientists (particularly the Geiers) have tried to mine VAERS for evidence that vaccines are dangerous, but that is an extremely problematic statistical approach. Being systematic and carefully controlling potential confounding factors is essential in science, but because of the completely unsystematic and totally uncontrolled nature of databases like VAERS, it is really hard to get reliable, high-quality data out of them. In other words, these databases do not contain the controls that are necessary for assessing causal relationships or even for accurately assessing trends over time. As such, you should be extremely skeptical of any paper that bases its conclusions on VAERS.

 

Vaccine package inserts
Another anti-vaccine favorite is the package inserts. The Skeptical Raptor wrote an excellent and detailed post on these, so I will just hit the highlights. These package inserts contain a variety of information about the vaccine, but the part that anti-vaccers focus on is the list of adverse reactions. These contain a wide range of ailments, including things like autism. The situation is, however, very similar to the reports in VAERS.

The lists of adverse reactions in vaccine inserts simply contain any adverse events that were reported during clinical trials for the vaccine (sometimes they also include post-approval reports). To be fair, these are usually limited to events that were severe or were reported multiple times. In other words, the point of these lists is to provide information that is potentially clinically useful; thus, they include any reactions which were either severe enough to be potentially concerning, or common enough that it is plausible that the vaccine was causing them. They do not, however, actually demonstrate that the vaccine caused the reaction (with the exception of reactions for which properly controlled trials were conducted). Just like the VAERS database, these lists are simply intended to guide doctors and future researchers, rather than providing evidence of causation.

In other words, if several of the test subjects in a clinical trial became ill and vomited several days after receiving the vaccine, then nausea would be reported as an adverse reaction, but that clearly does not mean that the vaccine causes nausea. It is entirely possible (even likely) that, during the trials, some patients would become sick from things that are completely unrelated to the vaccine. Similarly, if patients experienced headaches, those would get reported regardless of whether or not the vaccine actually caused the headaches. So, just as with VAERS, the fact that something is on the vaccine insert does not mean that the vaccine causes it.

Also, as with VAERS, you don’t have to take my word for this, because the package inserts often explicitly state that they do not provide evidence of causation. For example, here is an excerpt from the Tripedia DTaP vaccine insert (this is an insert that is often cited for including autism in the list; my emphasis).

“Events were included in this list because of the seriousness or frequency of reporting. Because these events are reported voluntarily from a population of uncertain size, it is not always possible to reliably estimate their frequencies or to establish a causal relationship to components of Tripedia vaccine.”

Also, note the fact that the adverse reactions were self-reported. This, once again, makes it very difficult to establish causation or accurate rates.

Finally, even if all of the events were causally related, you need to know their rate with and without vaccines before that information is useful. Everything has risks (including the decision not to vaccinate), so you always have to weigh the risks associated with taking an action against the risks associated with not taking that action. In other words, if a vaccine has a deadly side effect, but that side effect only occurs in 1 in every ten million injections, then the risks associated with avoiding the vaccine will far outweigh the risks associated with taking the vaccine. So you absolutely have to know the rates before you can make a properly informed decision. Simply showing that X causes Y is not sufficient.
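That weighing of risks is simple arithmetic. The sketch below uses the hypothetical "1 in ten million" side effect from the paragraph above, together with an assumed, purely illustrative risk of serious harm from the disease itself:

```python
# Toy risk comparison (illustrative numbers only): expected serious harms
# per 10 million children from vaccinating vs. not vaccinating.

children = 10_000_000
side_effect_risk = 1 / 10_000_000   # the hypothetical deadly side effect from the text
disease_risk = 1 / 1_000            # assumed risk of serious harm if unvaccinated

expected_harm_vaccinating = children * side_effect_risk      # ~1 case
expected_harm_not_vaccinating = children * disease_risk      # ~10,000 cases

# Under these assumptions, vaccinating wins by roughly four orders of magnitude
print(expected_harm_vaccinating < expected_harm_not_vaccinating)  # True
```

The point is not the particular numbers, which are assumptions, but that a rational decision requires both rates, not merely the existence of a side effect.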

 

National Vaccine Injury Compensation Program (VICP)
Finally, we arrive at the VICP. This is a no-fault system that was set up to compensate families for injuries caused by vaccines, and anti-vaccers often cite its mere existence as evidence that vaccines are dangerous. There are several important points that need to be made here. First, vaccines do have side effects (as do all real medications). No one denies that, but the side effects are generally minor, and serious side effects are quite rare. So, the benefits outweigh the risks. Thus, simply showing that vaccine injuries exist does not prove that vaccines are dangerous. As I explained in the package inserts section, you need to take the rate of side effects into account when assessing risk.

Second, the program was established because pharmaceutical companies were being bogged down with lawsuits, and there was concern that the vaccine supply would be jeopardized. Therefore, since settling is often cheaper than fighting a lawsuit, the government set up this program as a means of protecting the vaccine supply, and they set it up to be very generous to the public. As the Skeptical Raptor explained in detail in this post, the requirements for getting money from the VICP are greatly relaxed compared to traditional courts. Applicants don’t have to prove that the vaccine manufacturer was at fault, nor do they have to prove that the vaccine caused the damage. Rather, they generally just have to show that it is plausible that the vaccine caused the damage, and this can often be accomplished by something as simple as having an “expert” testify and say that it is plausible (i.e., you often don’t need actual studies showing that vaccines can cause the side effect in question). In other words, the system is set up to give the benefit of doubt to the applicant. So the fact that someone received money from the VICP is not an admission that the vaccine actually caused the injury (i.e., this is a very different situation from being found guilty or liable in a traditional court).

Finally, and perhaps most importantly, even in situations where a particular court claimed that a vaccine actually caused an injury, that still does not count as valid scientific evidence, because lawyers, judges, etc. can easily reach incorrect conclusions. In other words, the fact that a particular judge awarded money for a claim of vaccine injury only shows that the judge in question thought that the vaccine caused the injury. It does not, however, indicate that the scientific evidence shows that the vaccine caused the injury (i.e., this argument is a form of the appeal to authority fallacy).


Conclusion

In short, neither VAERS, nor the package inserts, nor the VICP provide evidence that vaccines are dangerous. VAERS and package inserts simply show adverse events that were reported following a vaccination, but they do not actually demonstrate that the vaccine caused the event in question. Indeed, both sources even state that the fact that an adverse event was included in them does not mean that the vaccine actually causes that event. Similarly, the VICP is a no-fault system, and applicants do not have to demonstrate that a vaccine caused the injury for which they are seeking compensation. Rather, they simply have to provide some evidence (such as expert testimony) that it is plausible that a vaccine caused the injury. Thus, none of these sources provide evidence of causation, so none of them can be used as evidence against vaccines. To be clear, vaccines do have side effects, as do essentially all real medications; however, to actually know which side effects are caused by vaccines, you need properly controlled studies, not glorified anecdotes. Further, even when a causal relationship has been demonstrated, you also have to consider the rate at which the injury occurs. Every decision has risks (including the decision not to vaccinate), and although vaccines do have complications, serious side effects are extremely rare and the benefits are extremely high. Therefore, the benefits of vaccinating outweigh the risks, and vaccinating your children will give them the best chance of living long, healthy lives.


Citations

Dorit Rubinstein Reiss. 2015. National vaccine injury compensation program facts. Skepticalraptor.com. Accessed 25-July-2016.

VAERS Data. Vaccine Adverse Event Reporting System. vaers.hhs.gov/data/index. Accessed 25-July-2016.

Skeptical Raptor. 2016. Argument by vaccine package inserts — debunking myths. Skepticalraptor.com. Accessed 25-July-2016.

Sanofi Pasteur. 2005. Diphtheria and Tetanus Toxoids and Acellular Pertussis Vaccine Adsorbed. Tripedia. Fda.gov. Accessed 25-July-2016.


Sinking Noah’s Ark Part 2: The Order of the Fossil Record Confirms Evolution’s Predictions

In this post, I am going to explain a line of evidence that both convincingly supports evolution and refutes the notion of a world-wide flood. I am, of course, referring to the ordering of the fossil record. If you go to just about any country and dig a deep enough hole, you will find that the soil is organized into geological layers, each of which has distinct properties and contents. According to creationists, the majority of these layers formed during Noah’s flood; whereas scientists argue that these layers represent different time periods in earth’s history. Creationists would have you believe that both of these explanations are simply “interpretations” of the data, but as I will demonstrate, that statement is erroneous because the scientists’ explanation is a logical deduction that started with the evidence, then drew a conclusion, whereas the creationists’ explanation is ad hoc and is simply trying to make the evidence fit a preconceived view.

This post ended up being quite long, so you can use the links below to jump around to different sections.

Note: In this series, I am discussing science, not religion. There are many people who both believe the Bible and accept evolution. So I am not trying to disprove the Bible, turn people into atheists, etc. Rather, I am simply explaining the scientific evidence for evolution (see Part 1 for more details). 

Note: In order to make this post as easily understandable as possible, I am going to try to limit the use of jargon and proper names for geological periods, eras, epochs, etc. I will instead use very crude terms like “dinosaur layers,” rather than “Triassic, Jurassic, and Cretaceous.” Although less precise, I think that this strategy will make it easier to convey my meaning without people getting lost in terminology.


Contrasting the predictions of creationism and evolution
Science is all about making and testing predictions, which is part of why it is so powerful. Before doing an experiment, we can state what we should observe if our hypothesis is true, then we can reject or fail to reject that hypothesis based on whether or not our predictions come true. So let’s apply that type of reasoning to this situation.

Evolution’s predictions
First, let’s look at the predictions of evolution. If evolution is true and life on earth has gradually evolved over billions of years, then we should see that reflected in the fossil record. The deepest layers should only contain fossils of microscopic organisms, and as we move up to more recent layers, we should eventually get multi-cellular marine invertebrates, then fish, then amphibians, then reptiles, then mammals and birds. Further, even within those major groups, we expect to see gradual changes, and most of the modern animals that we see today should only be represented in the uppermost levels (Figure 1A).

There are several important things to note about this prediction. First, it is quite an extraordinary prediction. To borrow a classic example, all that it would take is one Precambrian rabbit to completely refute our current understanding of evolution and the history of life on earth. In other words, if we found fossils that were out of order (e.g., birds before the first reptile layers, amphibians before fish, fish in the oldest layers, etc.) that would shatter our understanding of evolution. By that same token, however, finding a very consistent order in all of the thousands of fossil beds from all around the world would provide exceptionally strong evidence for evolution, because we only expect that to occur if evolution is actually true. Think about it this way: if God actually created all of the kinds of animals and they all lived at the same time (as creationists argue), what are the odds that we would never find a modern mammal in anything but the uppermost layers? Why is there such a distinct pattern in the fossil record if everything lived together?

The second thing to note about this prediction is that it was largely a priori. In other words, when Darwin and other early biologists first proposed the basic framework of evolution, they had an extremely incomplete fossil record. However, based on the limited fossils that they had, as well as comparisons of embryology, morphology, etc. they thought that the progression went from single-celled organisms to multi-cellular marine invertebrates, to fish, to amphibians, to reptiles, to birds and mammals. In other words, this evolutionary sequence was predicted before we had most of the fossils that we have today. Thus, this is not a situation where scientists are looking at the fossils and trying to make the patterns fit evolution. Rather, the patterns were predicted beforehand and scientists are simply seeing whether or not the patterns fit the predictions. Additionally, this same pattern has been confirmed by intermediate fossils, genetics, and multiple other lines of evidence. So, once again, this is an extraordinary prediction because everything rests on it. Finding just one fossil that was substantially out of place would force us to rethink everything.


Creationism’s predictions
Creationism makes radically different predictions. According to creationists, all of the major “kinds” of organisms were alive prior to the flood (possibly minus a few that went extinct between creation and the flood, but those most likely wouldn’t have fossilized; see Part 1 for an explanation of “kind” and why I am using that term). During the flood, essentially all of the land animals that weren’t on the ark were killed and many of them were buried and formed the fossil record. Thus, essentially all of the fossils and sedimentary rock layers that we see today are the result of the flood.

Most creationists seem to subscribe to one of two schools of thought about how the flood formed the fossil layers. One common idea is basically a flash flood model. This proposes that the water levels rose extremely quickly and rapidly buried animals. Thus, most of the fossils formed early during the flood. The second school of thought proposes that many animals tread water for a while rather than being immediately buried. These animals eventually drowned, then floated for a while, before sinking and being buried as sediments fell out of the water. In other words, the flood would have churned up all sorts of sediment, and that sediment would have then fallen out of suspension and formed layers, trapping the bodies of dead animals in those layers. More often than not, I hear these two ideas combined, and this notion of lots of animals slowly being buried seems common.

Now, let’s think about these situations for a minute and make some predictions. If all of the major groups of plants and animals were alive at the same time (Figure 1) and were buried by a wall of water and mud that swept across the land (the flash flood model), then we would expect the fossil record to be a jumbled mess (Figure 1B). Flash floods aren’t exactly known for neatly sorting their victims by taxonomy, so we shouldn’t see any sort of a pattern (taxonomy = the classifications of living things [e.g., canines and felines represent two different taxonomic groups]). Rather, we should see that dinosaurs, modern birds and mammals, early amphibians, etc. are all mixed in together. In contrast, if animals drowned, floated, and slowly sank, then they should either be randomly scattered throughout the sediment layers, or possibly sorted by size. They should not, however, be sorted by taxonomy (you would not, for example, expect all of the dinosaurs to sink and get buried at the same time, since dinosaurs ranged in size from being smaller than a chicken to larger than a school bus).

Note: For creationism, I am using the term “prediction” loosely, because creationism is entirely retroactive. So when I say “the predictions of creationism” I mean the patterns that we would intuitively expect from a flood.

Figure 1: Panel A shows what evolution predicts that the fossil record should look like, and panel B shows what creationism predicts (under a flash flood model). Note: both panels are overly simplified. In reality there are hundreds of layers, and I left out lots of steps. Also, I was simply illustrating the first time that each group appears rather than focusing on how long they persisted.




The evidence
It hopefully won’t surprise you to learn that the predictions of evolution passed with flying colors, while the predictions of creationism epically failed. No matter what fossil bed you go to anywhere in the world, you will find a consistent pattern and progression. This is an absolutely critical blow to creationism for a number of reasons.

First, really ask yourself how likely it is that you would see this pattern everywhere in the world if evolution isn’t true. Why, despite finding untold millions of fossils from thousands of sites from all over the world, have we never once found a mammal fossil in a layer that is older/deeper than the first reptile layer? For that matter, why do we never see modern mammal “kinds” fossilized alongside dinosaurs? Evolution tells us that our modern families of mammals didn’t evolve until well after dinosaurs went extinct, but if they all lived at the same time (as creationists claim) then why were they never fossilized together? Why don’t we have birds in layers deeper than the dinosaur layers? Why don’t we have reptiles in the same layers as early amphibians? Why do we have layers with nothing but marine invertebrates? etc. This pattern is so consistent and so remarkable that it is utterly inconceivable that it could have formed during a flood.

Second, there are other clues in these layers beyond simply the species that they contain. Look at the dinosaur layers, for example (Figure 1A). There are many places all around the world where you can dig below the dinosaur layers and find older sedimentary layers containing earlier reptiles, amphibians etc. According to creationists, most of those layers formed during the flood, but if you think about that for a second, a huge problem emerges. The dinosaur layers are on top of the other layers, which means that they should have formed after the lower layers, but the dinosaur layers are full of fossilized tracks, nests, and other things that make it clear that the dinosaurs were actually living when those layers formed. So how is that possible if those layers formed during the flood?

Think about the progression of events that would have to take place here. Step 1: The flash flood buries countless organisms and kills pretty much everything. Step 2: Sediment slowly settles out and forms the lower layers. Step 3: The dead dinosaurs go scuba diving under hundreds of feet of water and walk around under water on this newly formed sediment layer and build nests and leave behind footprints. Do you see the problem? Further, I used dinosaurs as an example because most people are familiar with fossilized dinosaur footprints, but we have plenty of footprints, eggs, nests, burrows, etc. from the other organisms in other layers as well. Indeed, these layers are full of things that make it clear that animals were living on the substrate when they died. This makes it abundantly clear that these layers could not have formed from sediment settling out during a flood. Animals were clearly alive and walking around to make those tracks, nests, burrows, etc. which makes absolutely no sense if those layers were formed from sediment settling out under hundreds of feet of water after all of the animals had already drowned! Additionally, we find clear evidence of plants growing in the layers, consistent pollen patterns, etc. When you actually look at the details of the fossil record, it is abundantly clear that it couldn’t have formed during one massive storm (you can find more details with good sources at this post by talkorigins.org).

Finally, realize that not only are the fossils in these layers consistent, but their dates are as well. In other words, when we radiometrically date these layers, the deepest layers are always the oldest. If they had all formed during the flood, however, then they should all be the same age. Now inevitably, someone is going to argue that radiometric dating isn’t reliable, but those arguments have no scientific merit. I explained this in more detail here and you can find a good essay at Answersinscience.org, but in short, scientists aren’t “assuming” that decay rates are constant any more than we are “assuming” that the speed of light is constant. The decay rate is simply a physical property of the chemicals (in fact, there is this fun thing known as the “law of radioactive decay” which mathematically describes the decay rate). Similarly, the other “assumptions” that creationists accuse scientists of are actually logical inferences that are supported by numerous observations. Additionally, even if you choose to believe that radiometric decay rates are not reliable, doesn’t it seem a bit surprising to you that the deepest layers always yield the oldest dates? I mean, if they were all formed at the same time, why does that pattern exist? In other words, you not only need a cogent argument to support the notion that decay rates were faster in the past, but you also need an argument to explain why the rates were consistently faster in the deeper layers (note: please read this article by talkorigins.org before claiming that the dates from the top of the Grand Canyon are older than the dates from the bottom).
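For anyone curious what the law of radioactive decay actually says: the number of undecayed parent atoms falls off exponentially at a rate fixed by the isotope’s half-life, which is what lets scientists turn a measured fraction of remaining parent atoms into an age. Here is a minimal sketch in Python (the function name is mine, and the uranium-238 half-life of roughly 4.468 billion years is used purely for illustration):

```python
import math

def radiometric_age(half_life_years, parent_fraction_remaining):
    """Age implied by the law of radioactive decay, N(t) = N0 * exp(-lambda * t).

    half_life_years: the isotope's half-life (a measured physical constant).
    parent_fraction_remaining: N(t) / N0, the fraction of parent atoms left.
    """
    decay_constant = math.log(2) / half_life_years  # lambda, fixed per isotope
    return math.log(1.0 / parent_fraction_remaining) / decay_constant

# Example: if half of the uranium-238 in a crystal has decayed,
# the crystal is one half-life old (~4.468 billion years).
print(radiometric_age(4.468e9, 0.5))
```

The point for this post’s argument: nothing in this calculation is arbitrarily “assumed.” The decay constant is a measured property of the isotope, and when this math is applied to real rock samples, the deeper layers consistently yield larger ages.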

As should now be obvious, the evidence is overwhelmingly supportive of evolution and thoroughly refutes the notion of Noah’s flood, but that doesn’t stop creationists from trying to twist the evidence to make it fit their view. So, in the remainder of this post, I want to look at creationists’ responses and counterarguments. Before I do that though, I want to make one thing absolutely clear. For any explanation to actually be reasonable, it needs to not only explain why there are different layers, but also why they are consistently sorted into exactly the order that evolution predicted. Remember, embryology, genetics, transitional fossils, morphology, and physiology all predict the pattern that we see. So how and why did a flood manage to not only sort animals, plants, etc., but also sort them into precisely the pattern that is predicted by evolution? Also, this explanation must be supported by actual observations and data. It cannot be some purely speculative made-up solution (otherwise it will commit an ad hoc fallacy).


Creationist responses

1. But there are places where the geological column is out of order
This claim is only true if you cherry-pick and don’t look at the full data. There are lots of things that can cause layers to shift after they are formed. For example, layers may slide along fault lines, resulting in an old layer ending up on top of a young layer (e.g., if you dig down along the dashed lines of Figure 2B you would find that the layers are out of order). Folding is another process that can cause an old layer to move on top of a young layer. Importantly, however, both of these processes leave obvious signs behind, and when we look at sites as a whole, we find things like Figure 2B, which clearly shows that the layers were formed in their correct order, and they have just been shifted afterwards. These geological processes are extremely well understood and any introductory geology book will cover them.

To put this another way, when you look at things like faults and folds, it is very obvious that the layers shifted after they were formed. Indeed, most creationists say that these layers formed early in the flood, then shifted later in the flood, which means that creationists are still stuck explaining why the layers formed in such a precise order to begin with. In other words, the layers are technically out of order at these sites, but they aren’t out of order in a way that conflicts with evolution or supports creationism.

Figure 2: A simple illustration of how layers can shift along fault lines (the diagonal lines). Layers can form (A) and later shift because they are on faults (B). These can cause inconsistencies if you dig at precise spots (such as the dashed lines), but those inconsistencies disappear when you look at the whole site.


 

2. The geological column isn’t real because we never see it all in one place.
The geological column simply refers to the column of sediment layers that I have been talking about throughout this post, and the argument is that there are no locations where we see the entire thing with every single layer in a single place, therefore it is not a real thing.

Figure 3: There are only a few places where the entire geological column is at a single site, but that is not a problem because the pattern is consistent everywhere, so we can piece it back together.


First, that claim isn’t true. There are places where we can see the whole thing (details and sources are in this article by noanswersingenesis.org.au). Indeed, even Answers in Genesis admits that it is a real thing. Additionally, of course we don’t see the whole thing in most places, because we are talking about layers that are millions or even billions of years old and there are many factors that can prevent them from being preserved for that length of time. Erosion, for example, is quite good at removing layers. That’s not a problem, however, because we can piece together the record from sites all over the world and get a very consistent picture (Figure 3). Finally, and perhaps most importantly, this argument completely ignores the problem of the order of the layers. In other words, it is true that there are places where there are gaps and missing layers, but it is equally true that there are no places where things are out of order (see #1 above for a qualifier). If you look at Figure 3 again, we can have a site where we find layers of single-celled organisms and layers of fish but not layers of marine invertebrates, and that is fine as long as the single-cell layer is still below the fish layer, which it always is.

Note: When I say that there are sites with every layer, I do not mean every species. In other words, there are layers which we know are from the dinosaur time periods because they contain index fossils that are only found from dinosaur time periods, but they may not actually contain dinosaur fossils at every site.


3. Different layers contain different species as a result of animals running from the flood.
This argument is so laughably absurd that it is tempting to ignore it entirely, but it is so common that I feel like I should deal with it. First, this proposes a cartoon-like scenario where animals are running from a wall of water, but that is not realistic. When a large storm starts, most animals respond by hiding, not running. So as soon as the rain started, animals would have hunkered down in their burrows, tree hollows, caves, etc. and once the tidal wave hit, they would have been buried right where they were.

Second, let’s assume that they did all try to outrun the flood. That still can’t explain the fossil record because it’s sorted by taxonomy, not speed. For example, this argument proposes that every single individual of every single modern mammal family was faster than every single individual of every single dinosaur species. Surely a velociraptor could have outrun a giant ground sloth, so why did raptors die out several million years before the first ground sloth fossils? Similarly, why don’t turtles show up until a bit over 200 million years ago? Why are sauropods (the giant long-necked dinosaurs) and tyrannosaurs found in the same layers together? Am I honestly supposed to believe that they could run at the same rate? What about babies and old individuals? Surely there would be lots of baby birds in nests, baby mammals in burrows, etc.

Third, even if everyone tried to run, and somehow newborn puppies outran T. rex, there should still be evidence left behind. For example, if you go to just about any natural pond, you can find footprints of ducks, herons, and other birds all over the shallow areas. So why don’t we find those footprints around the ancient bodies of water where we find the fossils of the first amphibians? Those ponds should have been great places for birds to forage.

Fourth, although I have been talking about animals, we can clearly see the same type of progression in plants. Anyone care to explain to me how they ran from the flood?

Fifth, this argument makes absolutely no sense because we find these layers at the same sites. In other words, this isn’t a situation where at the most inland locations we find nothing but modern animals, and near the coast we find only early amphibians. Rather, you can go to locations all around the world, where you can start digging at a single site and see the progression as you dig. The Grand Canyon is a great example of this. As creationists readily admit, you can go there and see the progression of fossils as you hike down into it. So this clearly isn’t a matter of animals running, because if that was the case, they shouldn’t be fossilized at the same sites.

Finally, for most of these major groups (like dinosaurs) we have them from all over the world. So how exactly did modern mammals run to a location with no dinosaurs when dinosaurs were everywhere?


4. Different taxa lived in different geographical regions
There are two basic versions of this argument, but they both start with the same premise. Namely, they claim that the distinctions between the different taxa in the different layers are because they didn’t live in the same geographical regions. In other words, you had a region with nothing but marine invertebrates, a region with bony fish and early amphibians (but no reptiles, birds, or mammals), another region with dinosaurs, primitive birds, and small rodent-like mammals (but not modern birds or mammals), etc. (Figure 4A).

First, that’s just not how ecology works. If you go just about anywhere in the world you can find birds, mammals, reptiles, and amphibians (as well as fossils of extinct groups, like dinosaurs). The only exceptions are extremely cold areas and a handful of remote islands (also, most amphibians don’t like the ocean). However, even in those cold areas, you can still find birds and mammals, which is really important because they are the two groups that are the most restricted in the fossil record (with regards to the depth of the layers that they occur in). Indeed, you can find birds pretty much everywhere. It doesn’t matter if you are on the coast, in the desert, in a rainforest, in Antarctica, or in the middle of the ocean, there are birds. So this notion that there were tons of massive areas that had absolutely no birds is utterly absurd. Similarly, how does one get tons of marine environments that are totally free of fish? That’s just not something that happens. Additionally, why would there have been tons of aquatic environments with amphibians but no reptiles? I could go on, but hopefully you get my point: the type of taxonomic separations that creationists are proposing here are ludicrous and conflict with everything that we know about ecology.

To be clear, different species and groups of animals certainly have geographic and ecological restrictions. For example, you can’t find kangaroos anywhere but Australia, but that’s not what we are talking about here. We aren’t talking about limits to the ranges of species or families, rather we are talking about massive taxonomic units. In other words, we aren’t talking about never finding a kangaroo with a dinosaur, rather we are talking about never finding any modern mammal with a dinosaur. To give another example, there are families of birds that only live in rainforests, families that only live in deserts, etc. but the problem isn’t that we never find one specific family of birds in the early layers. Rather, the problem is that we never find any birds in the early layers, and that is a huge problem for creationism, because although certain families have restricted ranges, you can find some sort of bird anywhere in the world. In other words, creationists generally argue that these groups existed because of ecological differences, but that is nonsense because all of the major groups of animals have representatives in essentially all of the major ecosystems. Further, we can see that variation in the fossil record. Dinosaurs, for example, occupied an extremely diverse range of habitats. Additionally, keep in mind that creationists are proposing not only that animals lived in these bizarre taxonomically constrained geographic groups, but also that these groups just happened to perfectly match the predictions of evolution (that’s quite a coincidence).

Second, creationists are generally adamant that the flood would have been extremely violent, with ridiculously strong currents that carried sediment and debris for hundreds or even thousands of miles. So even if we accept that animals were living in these taxonomic groups, surely some of them would have been swept up by the flood and deposited into a different group. How exactly did a flood as violent as the one that creationists describe preserve such a perfect order?

Third, we still have the problem that the fossils of these groups can be found at the same sites. Again, you can go to a site, dig a hole, and observe the changes as you go down. Indeed, we find fossils of all of the major groups that we have been talking about from all over the world. So we know that they lived in the same areas, because we find their fossils in the same areas.

This brings me to the next major problem. As I explained at the start of this post, we would expect a flood to either form a massive layer that contains most of the fossils (from a flash flood), or many stacked layers (from dead animals slowly settling out), but we know that the fossils weren’t formed from animals slowly settling out because of all the footprints, nests, etc. that could not have formed underwater. This leaves us with the flash flood model, but that model shouldn’t have formed stacks of fossils. So how do we get the stacks?

Figure 4: An illustration of the absurd way in which some creationists think that the fossils formed. (note: some creationists argue that the single-cell layers pre-date the flood, which differs slightly from what I have depicted).


This is where creationists seem to split into two camps. One argument, as put forth by Andrew Snelling on Answers in Genesis, argues that the layers are simply the order that things were buried in. In other words, as the sea level rose and covered the land in water, the amphibians lived closest to the shore and were the first land animals to be buried, then the population of reptiles was hit next, etc. (Figure 4A). There are several problems with this (beyond the ones that I’ve already pointed out). First, for this to work, not only do we need to have all of the animals living in distinct and precise taxonomic groups, but those groups have to be carefully spaced in increasing order of complexity, and of course, this order and grouping has to be consistent every single place on the planet (that’s hardly a reasonable assumption). Second, this still doesn’t explain how you get layers. If animals were spaced like this, then you should still have one giant fossil layer, it should just vary from one region to the next (Figure 4B), and, in fact, there should be a lot of spill-over from one layer to the next as the flood would almost certainly have carried some carcasses with it, rather than instantly burying them. As far as I can tell, creationists are envisioning a situation like Figure 4C-G, where the flood forms layers as it goes, and magically scoops up each successive layer and adds it onto the massive stack of layers that it is carrying (while still carefully preserving footprints, nests, burrows, etc.). Last time I checked, floods don’t do that.

The second school of thought tries to solve this problem by basically arguing that everything initially formed one fossil layer, but then that layer got broken up and shifted on top of itself multiple times, resulting in the layers that we see now. There are several things to note here. First, although layers can shift around (see #1 above), they don’t do so in the type of massive, full scale way that creationists are proposing. They simply fold on top of each other or slide along fault lines, but creationists are proposing that everywhere in the world, hundreds of layers managed to neatly slide on top of each other, without leaving behind any evidence of having done so, and while perfectly preserving the order predicted by evolution. Just the idea that pretty much every layer everywhere in the world was shifting over or under multiple other layers is pretty far-fetched. Given that we find layers pretty much everywhere, it seems like that would cause a huge deficit of layers (e.g., if the Grand Canyon formed from hundreds of layers shifting on top of each other, then surely there would be massive lowlands all around it where those layers used to be). Also, realize that we aren’t talking about small shifts. In cases like the Grand Canyon, we have hundreds of layers, each of which stretches for hundreds of miles.

Additionally, and perhaps most importantly, this still hasn’t explained the order of the fossil record. This argument expects me to believe not only that animals lived in these ridiculous taxonomic groups that are completely unlike anything that we have ever actually observed, but also that every single time that the layers shifted, they did so in precisely the order that evolution predicted. Think about it for a second. Even if you believe everything up to the point of the shifting layers, how likely do you actually think it is that there was never a case of a mammal or bird layer shifting under a dinosaur layer, a reptile layer shifting under an amphibian layer, an amphibian layer shifting under a fish layer, etc. (again, see #1 for a qualifier)? This argument is nothing more than a series of utterly ridiculous assumptions. You might as well say that aliens did it, because that argument would be just as plausible.

Finally, let’s apply our tests for ad hoc fallacies to these arguments. Is there any reason to think that animals lived in these amazingly consistent taxonomic groups other than a desire to believe in the flood? No! There is absolutely no scientific basis for this claim. Similarly, is there any reason to think that all of the fossil layers shifted over top of each other in exactly the order predicted by evolution other than a desire to believe in the flood? Again, no! The only way that anyone would ever believe either of those claims is if they were already convinced that the flood was true. These arguments are perfect examples of ad hoc fallacies. They are logically invalid and must be rejected.


Creationist counterarguments
Not to be outdone, creationists try to fall back on “problems” with the fossil record, or “inconsistencies” with the predictions of evolution. So let’s briefly look at some of those.

1. There are sea shells on the highest mountains
It is true that you can find fossils of marine organisms on many of the highest mountains, and creationists would have you believe that this is evidence that those fossils were put there by the flood. In reality, however, our understanding of the history of planet earth suggests that many of our current continents were once submerged beneath giant oceans, and mountains rose as the earth’s plates moved and pushed together. Thus, we expect those fossils to be there.

The argument that creationists are making here is actually an example of a logical fallacy known as affirming the consequent. It takes the form:

  • If A then B
  • B therefore A

The problem is that B can have multiple causes. In this case, creationists are saying:

  • If the flood is true (A) there should be shells on mountains (B)
  • There are shells on mountains (B) therefore the flood is true (A)

The problem is that scientists could make the following identical argument:

  • If our understanding of earth’s history is true (A) there should be shells on mountains (B)
  • There are shells on mountains (B) therefore our understanding of earth’s history is true (A)

Note: affirming the consequent fallacies can be avoided by making exclusive predictions. For example, evolution’s prediction that we should see the gradual progression of fossils is an exclusive prediction, because it is not something that we expect to see under any circumstance other than evolution. Thus, the argument takes the form:

  • If and only if A then B
  • B therefore A is the most likely solution

As such, no fallacy is committed.


2. Many fossils look like they were formed by a flood
It is true that many fossils appear to have been formed in floods, but that doesn’t mean that they were formed in one massive flood (regional floods are certainly a real thing). Indeed, if you understand even the basics of how fossils form, then you should realize that floods are one of the best ways to form fossils. So we expect that many fossils will have been formed during floods, because that’s how fossils form! To put this another way, Ken Ham is very fond of saying that if the flood was true, we should see “millions of dead things buried in rock layers laid down by water all over the earth” and we see “millions of dead things buried in rock layers laid down by water all over the earth.” Therefore, the flood is true. However, if evolution was true, then we should also see “millions of dead things buried in rock layers laid down by water all over the earth.” Can you spot the logical fallacy that creationists are committing? (hint: it’s the same fallacy that was in #1 above).


3. But we have fossils of human footprints next to dinosaurs
No we don’t. All of those have either been hoaxes or misidentifications. Even major creationist organizations admit this.


4. But what about living fossils that are in the fossil record from “millions” of years ago but are still alive today?
First, just to get definitions straight, a living fossil is an organism that is present today and is almost identical to ancient fossils (the exact age required isn’t precisely defined). We actually have multiple examples of these (horseshoe crabs and coelacanths are probably the two most famous), but I don’t understand why creationists think that these are a problem for evolution. There is nothing in evolutionary theory that states that an organism can’t evolve a very good body plan, then maintain that plan for millions of years. To be clear, if a huge portion of today’s animals were living fossils, that would be different from the expectations of evolution, but it still wouldn’t refute it, and the few dozen living fossils that we have simply aren’t a problem for evolution. Quite simply, they don’t contradict evolution in any way, shape, or form, because the theory of evolution never states that species can’t exist for hundreds of millions of years. So this is a strawman fallacy.

Additionally, I would argue that the relative scarcity of living fossils is an enormous problem for creationism. Think about it, if animals can’t evolve beyond their “kind,” and all “kinds” were both present before the flood and were carried on the ark, then shouldn’t almost all of our animals be living fossils? In other words, if almost the entire fossil record formed during the flood, and all of the “kinds” of land animals in that record were on the ark, then most of our current animals should be living fossils. Sure, some will have gone extinct since the flood, but only a very tiny portion of our current species are living fossils, which is exactly the opposite of what we would expect if creationism was true.


5. But we have found fossils that are out of place
Creationists often like to cite examples of fossils that were found outside of their usual layers, but there are several important things to note about that claim. First, many of the examples that they cite are incorrect. For example, they often claim that pollen can be found in fossils that date to time periods before pollen-producing plants existed. Although technically true, an examination of those fossils makes it really obvious that the original fossils were exposed to the pollen long after they had formed. In other words, the fossils were contaminated with pollen from a later time period. Even many creationists admit this. So whenever you hear these claims, you should carefully fact check them to make sure that they add up.

Second, the claims that do actually stack up are always off by just a few million years, which once again, isn’t a problem for evolution. For example, finding out that an animal actually evolved ten million years earlier than we had previously thought it did is not a problem for evolution. That just means that our dates were slightly off (and yes, when we are dealing with timescales of billions of years, a few million is “slightly off”).

The only way that these fossils would be a serious problem for evolution is if they were actually out of order. In other words, you need something like a fully developed bird before the first proto-bird fossils (no, Confuciusornis is not an example of that, it had fossil predecessors). You need a turtle that was swimming with the first amphibians or a modern mammal that was chilling with the dinosaurs. Once again, all it would take is one Precambrian rabbit to bring our understanding of evolution to its knees, but no fossils like that have ever been found.

Just to be 100% clear here, this isn’t an argument from ignorance fallacy. Rather, if the flood were true, then finding these fossils mixed together should be the norm, not the exception. In other words, if all of these organisms lived and died at the same time, then we shouldn’t find a pattern that perfectly matches evolution with only a handful of exceptions. Rather, we should find a complete and total lack of a pattern. There should be no correlation between the predictions of evolution and our observations in the fossil record. So the fact that we have never once found a fossil that was significantly out of order (despite having millions of fossils from thousands of fossil beds all over the world) provides extremely clear evidence that the flood is false. Think again about the fact that all of the pre-reptile layers are completely free of birds and mammals, and even within the reptile layers, there is a clear gradual progression. That makes absolutely no sense if the flood is true.


6. What about the cases where a single fossil stretches between several layers? How is that possible if one of those layers is millions of years older than the other?
This argument refers to “polystrate fossils,” which are fossils that are found upright, sticking through several layers. Usually these are trees, and there are several important things to note about them. First, the layers containing these trees aren’t actually millions of years apart. Second, we have understood how they form for well over 100 years. For the sake of time, I won’t go into the details, but the short version is that these trees were rapidly buried by local floods (regional floods obviously do happen).

Additionally, when we look at these trees, we can see that one of the layers is simply the clay layer that the roots are sticking into (i.e., the layer the tree was growing on), and the next layer up is often a coal layer (which could have formed when the plants around the tree were buried). The third layer is then simply the layer that buried the tree and surrounding vegetation. Finally, despite creationists’ claims, for some tree species, having the trunk partially buried is not a death sentence. Indeed, we have examples where trees were partially buried, but continued to grow for years before being fully buried in a second regional flooding event. We can tell this from the new roots that grew out of what was previously the trunk. So in fact, these fossils are a huge problem for creationists, because if these trees were actually buried rapidly in a global flood (as creationists claim), then how did the trees form that second root layer?

A final example that creationists like to cite is a “kamikaze” ichthyosaur. This argument refers to a single fossil of an ancient marine reptile, whose skull sticks straight down through three layers. You can read the science of how this happened in the original article (Wetzel and Reisdorf 2007), but a big part of what happened was simply that the creature sank nose-first because of its center of gravity, and its pointy nose sank deeply into the soft substrate (bodies of water often have incredibly soft, muddy bottoms that are easy to sink into). Also, many of the claims that creationists make (such as the idea that the layers are millions of years apart) are not in the original article, so they seem to be straw men. Finally, let’s flip this thing around and ask a different question. If the flood were true, and thousands of animals slowly sank and were buried in sediment, then why aren’t fossils like this the norm? Shouldn’t fossilized animals that stick through multiple layers be everywhere? Indeed, if the fossils and sediment layers came from the flood, then it is quite surprising that fossils sticking through multiple layers are so rare.


7. Where are the missing links? If evolution is true, there should be lots of transitional fossils, so where are they?
They are everywhere! We have hundreds of clear transitional fossils; creationists just refuse to accept them as such (see this post for details). Further, these fossils show up exactly where we would expect them to. In other words, we see early fish-like amphibians before the modern amphibians that we have today, we see early dinosaur-birds before finding the modern birds that we have today, we find early mammal-like reptiles before finding fossils of the modern mammals that we see today, etc.


Conclusion: Ad hoc fallacies and Occam’s razor
As I have shown in this post, the fossil record perfectly matches the predictions of evolution. Well over 100 years ago, evolutionary biologists predicted that all over the earth we should see a consistent pattern with the oldest/deepest layers containing single-celled organisms, followed by multicellular marine invertebrates, followed by fish, followed by amphibians, followed by reptiles, and ultimately birds and mammals. That prediction has held remarkably true and provides extremely powerful evidence for evolution. In contrast, creationists are retroactively trying to make the data fit their view, but in order to do that, they have to invent ridiculous solutions like proposing that the major groups of animals were geographically restricted and lived in taxonomic units. Given what we know about these animals today (e.g., birds are found everywhere, even Antarctica), and the fact that we find fossils of all of these groups from all over the world, this assumption is ridiculous. Additionally, they also have to assume that countless layers not only managed to shift over top of one another without leaving behind the typical geological hallmarks of having done so, but also that every single time this happened, the layers shifted such that the order predicted by evolution was maintained. These arguments are classic ad hoc fallacies. They are ridiculous “solutions” that have no evidence to support them and would never be accepted by anyone who wasn’t already convinced that the flood was true. They are nothing more than baseless speculation and wishful thinking. To put this another way, creationists aren’t looking at the evidence and drawing a logical conclusion. Rather, they are starting with the conclusion that the flood is true, then inventing fanciful “solutions” to try to make the evidence fit that conclusion.

In closing, I want to remind everyone of one of the golden principles of both logic and science: Occam’s razor. This states both that you should always limit your assumptions and that the solution with the fewest assumptions is the most likely to be true. In this case, evolution is not making any assumptions. Rather, it made an extraordinary prediction that came true. In contrast, creationism is making a series of increasingly ridiculous assumptions. If you are a creationist, I want you to really think about whether or not these arguments make sense. How likely do you really think it is that not only were all animals living in taxonomic groups that exactly matched the predictions of evolution, but their fossil layers also managed to get sorted into an order that perfectly matched evolution’s predictions? Wouldn’t it make more sense to just admit that evolution is correct?

Related posts

Sinking Noah’s Ark Part 1: Introduction

Citations

Gastaldo. 1992. Regenerative growth in fossil horsetails following burial by alluvium. Historical Biology 6:203–219.

Isaak. 1998. Problems with a global flood, 2nd ed. talkorigins.org. Accessed 12-July-16.

Isaak (ed). 2003. The Talk Origins Archive: Claim CC341. talkorigins.org. Accessed 18-July-16.

MacRae. 1997. “Polystrate” tree fossils. talkorigins.org. Accessed 12-July-16.

Morton. 1996. The geological column and its implications to the flood. noanswersingenesis.org.au. Accessed 11-July-16.

Snelling. 2010. Order in the fossil record. Answers Magazine. answersingenesis.org. Accessed 12-July-16.

Stassen. 2003. A criticism of the ICR’s Grand Canyon dating project. talkorigins.org. Accessed 12-July-16.

Wetzel and Reisdorf. 2007. Ichnofabrics elucidate the accumulation history of a condensed interval containing a vertically emplaced ichthyosaur skull. SEPM 88:241–251.

Woolf. An essay on radiometric dating. answersinscience.org. Accessed 10-July-16.
