“Follow the money”: the finances of global warming, vaccines, and GMOs

One of the most common unifying themes of the anti-science movement is the notion that large corporations and governments are concealing the truth for the sake of monetary gain. These conspiracy theories pervade blogs against GMOs, vaccines, modern medicine, global warming, etc., and they have led to the common trope, “follow the money.” This is a challenge that I repeatedly see anti-scientists make, and the idea is that if we simply follow the money trail, we will find that climate scientists are being bought off, the food supply is being controlled by the evil overlords at Monsanto, vaccine researchers are being paid by pharmaceutical companies, etc. This challenge is designed to quickly silence all opposition by establishing that all opposing research is biased and the scientists are only in it for the money.

There are, however, several obvious problems with this challenge. First, it is an ad hominem fallacy. For example, the fact that a vaccine researcher works for a pharmaceutical company does not automatically mean that they fudge their results. This leads to the second problem: for this argument to work, you need to show that researchers are actually being paid to be dishonest, not simply that their job is doing research. This places the burden of proof on the person telling you to “follow the money.” In other words, they must provide evidence of widespread corruption and dishonesty to support their claim. Finally, this challenge is applied inconsistently, and the people who issue it completely ignore the fact that many of their “experts” have financial conflicts of interest.

So ultimately, this challenge is a bad one because it has a logical fallacy at its core and places the onus on the person issuing the challenge. We have to make decisions based on facts, not based on the people who produced those facts. Nevertheless, for the sake of argument, I want to accept this challenge. I am going to “follow the money” on global warming, GMOs, and vaccines, and I am going to show that if we accept this illegitimate challenge, it actually ends very badly for the anti-scientists. In other words, I intend to beat them at their own game (note: this post is long, so you can follow the hyperlinks to the different sections).

Global Warming
When following the money, it’s important to look at what each group stands to gain from their position (this is a task that anti-scientists tend to utterly fail at). Let’s start with the scientists who think that we are causing the climate to change. Most of the climate deniers that I have talked to think that these scientists are either directly getting paid off or they are just going along with it to get grant money. I’ve talked about the problems with the grant argument before, but, since I have agreed to play by the anti-scientists’ rules, I will overlook those problems for the time being and say (for the sake of argument only) that it’s a plausible claim. Nevertheless, there is still a substantial problem with this line of reasoning. Namely, where is the money coming from?

Ultimately, if we trace it back far enough, the money for most major grants originates with the government. Indeed, many of the climate change deniers that I know think that corrupt politicians are behind this supposed hoax, but this raises an important question: why would politicians fake climate change? There’s no obvious answer to this quandary. At least in the US, climate change has generally been an unpopular political position, so saying that they are faking it to get votes is just silly. Therefore, most people say that they created this hoax to get money. The problem here should be obvious: most politicians don’t get any money for supporting actions to prevent climate change. Yes, climate change initiatives do often involve taxes, but that tax money doesn’t go straight into politicians’ pockets. Further, let’s not forget that the government gives numerous tax breaks for installing renewable energy sources. Also, keep in mind that we began the journey down this money trail with the government giving billions of dollars to researchers to study climate change. How exactly are the politicians gaining money from taxes while simultaneously spending billions on climate research? Additionally, one of the most common arguments against trying to prevent climate change is that it will cost the government money and destroy the economy. I’ve talked to people who will, in a single breath, tell me that taking action against climate change will bankrupt the government and that the government is paying off scientists in order to make money. These two views are clearly incompatible with one another. Finally, we always need to keep in mind that it’s not just the US: almost every government in the world has acknowledged climate change and is supporting climate change research.

Given the complete lack of motive for politicians, some people instead say that environmental groups are the source of the funding. The most obvious problem with this is that most environmental groups are non-profits, which by definition means that they aren’t in it for the money. Further, environmental groups exist to deal with serious environmental problems, and there are plenty of real problems to take care of without inventing fictional ones. It makes absolutely no sense for these organizations to fake climate change when there are so many other problems to solve. They have nothing to gain from creating a fake crisis.

Now, having established a complete lack of incentive for starting this supposed conspiracy, I want to flip things around and look at the people who would benefit from denying climate change. This time, we have a very clear and obvious group that benefits enormously from denying global warming. I am, of course, referring to fossil fuel companies. Switching to renewable energy sources seriously hurts their bottom line. Further, unlike those who support action to prevent climate change, we have a clear money trail from fossil fuel companies to climate change deniers. It is undeniable that companies like Koch Industries and ExxonMobil have dumped millions of dollars into climate change denial. Further, the flow of money is not limited to think tanks and public groups. Many of the most prominent global-warming-denying climatologists have financial relationships with oil companies. A recent prominent example is Dr. Soon, who appears not only to have received funding from oil companies, but also to have failed to report a conflict of interest in his publications (that’s a major taboo in science).

I want to be perfectly clear here. I do not personally think that all of these scientists are corrupt (though some of them likely are), nor do I think that we should automatically discredit their papers because they received some funding from oil companies, but I agreed to play by the anti-scientists’ rules, and their rule is “follow the money.” When we do that, we clearly see money going from oil companies to climate change deniers. In contrast, there is no clear financial benefit to creating a climate change hoax. Yes, scientists receive grant funding to study climate change, but there is no reason for governments to give out that money unless climate change is a real thing. Therefore, according to climate change deniers’ own rules, we should reject the “evidence” against climate change because of financial conflicts of interest.

Finally, remember that roughly 97% of climatologists and over 80% of the general scientific community agree that we are causing climate change. That’s an awful lot of people to pay off. So, the question that you really have to ask yourself is this: which actually seems more plausible, that bankrupt governments and cash-strapped environmental groups have paid off 97% of climatologists without any clear motive for doing so, or that enormous and powerful oil companies have paid off 3% of climatologists in order to protect their bottom line?

GMOs
Anytime the topic of GMOs arises, all blame instantly gets placed on Monsanto. This company has been vilified to an utterly absurd degree, and most anti-GMO activists seem to be under the impression that it has a monopoly on our food supply. The reality, however, is that Monsanto isn’t that large. For example, in 2013 Monsanto’s net sales were $14.9 billion. To be clear, that’s a lot of money, but it’s hardly enough to buy a strong scientific consensus or to monopolize the food supply. It’s roughly the same annual net sales as Starbucks.

Perhaps the most telling number, however, is the annual net sales of Whole Foods (one of the largest organic food chains). According to their own reports, Whole Foods had net sales of $12.9 billion in 2013. That’s only two billion less than Monsanto. Are you honestly going to tell me that two billion dollars is enough to pay off thousands of scientists from around the world? How exactly has an extra two billion dollars allowed Monsanto to monopolize the food supply?

Further, many people act as if Monsanto is the only player in the GMO world, but that is far from true. In fact, it’s not even the largest company involved. Cargill is much larger. Now, lest anyone suddenly jump over to saying that Cargill is the one paying off scientists, realize that Monsanto has always been the target of this conspiracy theory, so you can’t just switch over to Cargill in an attempt to save your argument (that would be a logically flawed tactic known as moving the goalposts). Further, Cargill is only slightly larger than Koch Industries, and remember that companies like Koch Industries and ExxonMobil have been totally unable to purchase a scientific consensus, so it would be rather curious if Cargill had succeeded at that endeavor.

In addition to these large companies, there are numerous independent scientists, non-profit organizations, and smaller companies that research and create GMOs. In fact, the Bill and Melinda Gates Foundation has invested millions of dollars in GMOs. One would think that investments by a major humanitarian organization would demonstrate that GMOs aren’t just about evil corporations trying to take over the world, but in the minds of conspiracy theorists, this is nothing more than evidence that Gates is in fact a sinister man trying to depopulate the planet. The problem with that line of reasoning is, of course, that it commits an ad hoc fallacy (or possibly a question begging fallacy depending on how it’s worded). In other words, I would not accept that Gates is trying to commit genocide unless I had already accepted that GMOs are evil.

Finally, I want to look at who benefits from opposing GMOs. I’ve already pointed out that Whole Foods makes almost as much as Monsanto, but Whole Foods is not alone. There are numerous organic companies that make huge profits off of their products, and all of those companies have an enormous financial interest in smearing GMOs. Beyond the actual companies, there are plenty of individuals who make their living off of attacking GMOs and “Big Ag.” Vani Hari (a.k.a. Food Babe) is probably the most prominent of these activists. Her favorite response to critics is simply to call them “shills,” and she argues that we shouldn’t trust scientists because they are paid to do research on GMOs. The problem is that she makes her entire income off of her blog, store, books, talks, etc. So, by her own logic, we shouldn’t trust her since she has a financial conflict of interest.

Again, I do not personally think that we should ignore her because of her finances (we should ignore her because she’s full of crap). I think that she probably does truly believe the nonsense that she promotes, but my point is that if we are going to play this game of following the money, then we must acknowledge that many of the people opposing GMOs have a large financial interest in doing so. Yes, Monsanto makes billions of dollars and is a for-profit company, but Whole Foods makes almost as much, and it is also a for-profit company. Yes, there are plenty of GMO supporters who get paid to do research on GMOs, but there are also plenty of GMO opponents who get paid for opposing GMOs. Therefore, if we are going to follow the money, it is, at best, a stalemate.

Vaccines
It is rare to talk to an anti-vaxxer without them calling someone a “shill,” and perhaps their most common argument is that Big Pharma is covering up the truth about vaccines in order to make money. As I will demonstrate, however, this claim is completely erroneous.

The first problem is simply that vaccines aren’t worth that much to pharmaceutical companies. Skeptical Raptor did a fantastic job of explaining this, but to give a quick summary, vaccines are expensive to produce and cheap to purchase. In fact, many governments require pharmaceutical companies to provide them for free to large numbers of people. So even at a quick pass that doesn’t take into account costs like R&D, vaccines make up less than 2% of pharmaceutical companies’ revenue. Once you actually account for factors like the billions of dollars spent on vaccine research, you end up with around 2.5 billion dollars in annual profit from vaccines. Now, two and a half billion may sound like a lot, but remember that we are dealing with an industry that collectively takes in on the order of a trillion dollars in a single year. With that type of cash flow, 2.5 billion is almost nothing, and it’s certainly not worth creating a massive global conspiracy that involves paying off tens of thousands of scientists and doctors from numerous universities and hospitals in every country in the world. The math just doesn’t add up.
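To see just how small that slice is, here is a quick back-of-the-envelope calculation using the rough figures cited above (these are approximate, illustrative numbers, not audited financial data):

```python
# Rough sanity check of the figures above (approximate numbers from the text,
# not audited financial data).
vaccine_profit = 2.5e9     # ~$2.5 billion in annual industry-wide vaccine profit
industry_revenue = 1.0e12  # global pharmaceutical revenue, on the order of $1 trillion

share = vaccine_profit / industry_revenue
print(f"Vaccine profit as a share of total revenue: {share:.2%}")  # -> 0.25%
```

Even if the profit estimate were off by a factor of two in either direction, vaccine profit would still be a rounding error next to the industry's total cash flow.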

Further, if pharmaceutical companies were really only after money, then they shouldn’t be producing vaccines because it costs far more to treat a disease than to prevent it. For example, one study found that it costs over $10,000 per person to treat measles. Another study estimated that it cost between 2.7 and 5.3 million dollars to treat 107 measles cases. For those playing along at home, that’s roughly 25–50 thousand dollars per case! In contrast, the measles vaccine only costs $19–50. So pharmaceutical companies could make way more money from treating measles than from preventing it (on a side note, most outbreaks are in fact caused by unvaccinated people, and you are far less likely to get a given disease if you have been vaccinated against it).
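The per-case arithmetic from that outbreak study is easy to check yourself; a minimal sketch using the figures quoted above:

```python
# Outbreak arithmetic from the paragraph above: treating 107 measles cases
# cost an estimated $2.7-5.3 million, versus roughly $19-$50 for the vaccine.
low_total, high_total = 2.7e6, 5.3e6
cases = 107

low_per_case = low_total / cases    # roughly $25,000 per case
high_per_case = high_total / cases  # roughly $50,000 per case
print(f"Treatment cost per case: ${low_per_case:,.0f} to ${high_per_case:,.0f}")
```

That is a three-orders-of-magnitude gap between preventing a case and treating one.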

To further drive this point home, consider the fact that prior to the polio vaccine there were entire hospitals devoted to treating that one disease. Think about that for a minute: the vaccine made entire hospitals (complete with doctors, nurses, administrative staff, etc.) totally obsolete. A study of the costs and benefits of the polio vaccine estimated that by 2015, the polio vaccine will have saved $178 billion in the US alone. Please, tell me again how vaccines are worth billions of dollars to pharmaceutical companies. The numbers don’t lie. It’s cheaper to prevent a disease than it is to treat it.

Despite all of this evidence that vaccines aren’t in pharmaceutical companies’ best interests, many anti-vaxxers continue to insist that it’s all about the money, and a common claim is that all of the studies supporting vaccines were paid for by vaccine companies and conducted by scientists who work for those companies. On numerous occasions, I have provided an anti-vaxxer with a peer-reviewed study only to have them instantly reject it with a comment like, “why should I trust a study that was funded by Big Pharma?” To quote an unfortunately popular article by Natural Health Warriors, “vaccine safety trials are paid for by the very people who make the vaccines, so there is no possibility of the information being unbiased or truthful.” That’s about like saying, “the safety trials of Toyotas were conducted by Toyota, therefore there is no possibility that Toyotas are safe.” Nevertheless, looking beyond the patent absurdity of the “no possibility” clause, this claim simply isn’t true. Sure, vaccine companies have been behind some of the safety trials, but there have been plenty of trials conducted by independent scientists working off of grants that did not originate with pharmaceutical companies. Further, all scientific papers list their funding sources, the author affiliations, and any financial conflicts of interest. Half of the time when I see people blindly writing off a study as “biased,” they have completely failed to look at this information. Therefore, I wanted to examine a small sample of the literature to see what type of pharmaceutical influences I could find.

I decided to take a quick look at the literature on vaccines and autism (since this is generally the number one safety concern I see people bring up). So, I chose 10 scientifically sound papers and looked at their funding and author affiliations (note: I only looked at their scientific content when selecting these 10 papers; I did not know anything about their funding or authors until after I had selected all 10. The papers are listed at the end of this post). These 10 papers were authored by 57 different researchers. Only seven authors were involved in more than one paper, and they each authored only two of these papers. These 57 authors were affiliated with 22 different organizations (note: I did not split up departments within organizations; there were, for example, several departments of the CDC). Twelve of the organizations were hospitals/universities (some were hospitals attached to a university, so I lumped those categories together), seven were government organizations like the CDC (multiple countries were represented), and only three were companies. Two of the companies were Abt Associates Inc. and Kaiser Permanente Northern California (the third was a health care company), and as far as I can tell, neither of these companies actually manufactures vaccines. They are certainly involved in vaccine research, but they aren’t pharmaceutical companies that are producing vaccines. Authors from those companies were only involved with two of the studies (Verstraeten et al. 2003 and Price et al. 2010). Not one of the 57 authors was actually employed by a pharmaceutical company.
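Nothing about this tally requires special access: the affiliation lines are printed on every paper and on PubMed. As a minimal sketch of the counting approach (using hypothetical placeholder authors, not the real 57-author dataset):

```python
from collections import Counter

# Hypothetical, abbreviated sample for illustration only; the real tally
# covered all 57 authors across the ten papers listed at the end of the post.
author_affiliations = {
    "Author A": "hospital/university",
    "Author B": "government",
    "Author C": "government",
    "Author D": "hospital/university",
    "Author E": "company",
}

# Count how many authors fall into each affiliation category.
counts = Counter(author_affiliations.values())
for category, n in counts.most_common():
    print(f"{category}: {n}")
```

Repeating this over the real author lists is tedious but entirely mechanical, which is exactly why "go check for yourself" is a fair challenge here.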

Finally, let’s look at the grant agencies involved. I counted 15 granting agencies, which ranged from organizations focused on autism research to massive groups like the WHO and CDC. Of those 15 granting agencies, not one was a pharmaceutical company. Two studies (Smeeth et al. 2004 and Price et al. 2010) did, however, acknowledge potential financial conflicts of interest. Several (but not all) of their authors had previously received funding from vaccine companies for other projects. Nevertheless, those funds should not have affected these papers, and even if they did, that still leaves us with eight solid papers with no financial ties to vaccine manufacturers.

Now, inevitably someone is going to accuse me of having cherry-picked these studies, but here’s the thing: you can test this yourself. You can get on PubMed or Google Scholar and look at the author affiliations and funding sources. You don’t have to take my word for it. Further, even if I had cherry-picked these, that would still mean that we have at least eight good, completely conflict-free papers which found that vaccines were safe.

Additionally, these publications are not what we would expect if pharmaceutical companies were paying off scientists. Remember that there are lots of different companies that compete with each other. It makes absolutely no sense for these companies to pay off scientists to write yet another paper saying that vaccines don’t cause autism. If people haven’t believed the last 100 papers, why should we expect number 101 to make a difference? Rather, if there were a massive conspiracy, it would make sense to target the people who actually care about the scientific literature. In other words, rather than making broad statements about the safety of vaccines, they should claim that the vaccines manufactured by company X are safe, whereas the vaccines made by other companies are dangerous. If vaccine companies had scientists in their pockets, then we should see a war between companies about whose vaccines are safe. Think about the logic here for a minute. Anti-vaxxers already think that vaccines are dangerous, so they aren’t the audience that matters, but those of us who care about the literature are going to be very interested in evidence that some companies are safer than others. If, for example, several studies came out showing that vaccines made by GlaxoSmithKline were dangerous but vaccines made by Merck were safe, I would absolutely demand that my vaccines come from Merck, and so would tons of other scientifically minded people. That is what we would expect a conspiracy to look like. Drug companies should be fighting with each other. Instead, we simply see paper after paper showing over and over again that vaccines are safe, regardless of which company they came from.

Finally, I again want to flip the situation around and look at the finances of the people who oppose vaccines. Unlike many of the scientists doing actual research on vaccines, anti-vaxxers often have clear conflicts of interest. Most famously, Andrew Wakefield (the man who started the myth that vaccines cause autism) has been found guilty of falsifying data and receiving money from lawyers who were intent on suing vaccine companies. Nevertheless, many anti-vaxxers still follow Wakefield and argue that pharmaceutical companies are simply trying to silence him. Consider for a minute how absolutely fantastic this double standard is. Anti-vaxxers consider anyone who opposes them to be a shill, and they repeatedly insist that we have to follow the money, but when we actually follow the money and clearly demonstrate that Wakefield was being paid off, they suddenly ignore their own rules and claim that Wakefield is a hero who is being silenced for telling the truth. It’s as beautiful a case study in ad hoc logic and inconsistent reasoning as I’ve ever seen.

Wakefield is admittedly an extreme example, but many other less extreme cases exist. For example, have you ever stopped to follow the money on the anti-vaccine blogs and web pages that pollute the internet? If you haven’t, you should, because most of the major ones include a store selling the products that you supposedly should use instead of vaccines. GreenMedInfo, Natural News, Mercola, Modern Alternative Mama, etc. all have stores selling their products and books. Similarly, famous anti-vaccine doctors like Sherri Tenpenny make quite a bit of money off of their books, speaking tours, etc. This is true for much more than just vaccines. You find this pattern throughout the amorphous mess that is alternative medicine, and it’s actually a brilliant business strategy when you think about it. First, you scare people about the horrors of vaccines and traditional “western” medicines. Then, you tell them about some amazing “natural” product that Big Pharma doesn’t want you to know about because it can cure everything from measles to infertility. Finally, you direct them to your store which just happens to sell that miracle product. There’s clearly no conflict of interest there (note the immense sarcasm).

My point in all of this is really quite simple: if we accept anti-scientists’ logically invalid challenge to follow the money, things end very badly for the anti-scientists. Decisions need to be made based on facts, not the people who support those facts, but if we agree to simply play by the anti-scientists’ rules, then we find a lack of motive for scientists to falsify data and strong financial motivation for anti-scientists to invent fictional conspiracies and oppose science. If anti-scientists actually followed their own rules, they would avoid most of the pages and blogs that they so dearly love to read and repost.

 

List of papers

Hviid, A., M. Stellfeld, J. Wohlfahrt, and M. Melbye. 2003. Association between thimerosal-containing vaccine and autism. JAMA 290:1763–1766.

Andrews, N., E. Miller, A. Grant, J. Stowe, V. Osborne, and B. Taylor. 2004. Thimerosal exposure in infants and developmental disorders: a retrospective cohort study in the United Kingdom does not support a causal association. Pediatrics 114:584–591.

DeStefano, F., T.K. Bhasin, W.W. Thompson, M. Yeargin-Allsopp, and C. Boyle. 2004. Age at first measles-mumps-rubella vaccination in children with autism and school-matched control subjects: a population-based study in metropolitan Atlanta. Pediatrics 113:259–266.

Madsen, K.M., A. Hviid, M. Vestergaard, D. Schendel, J. Wohlfahrt, P. Thorsen, J. Olsen, and M. Melbye. 2002. A population-based study of measles, mumps, and rubella vaccination and autism. New England Journal of Medicine 347:1477–1482.

Price, C.S., W.W. Thompson, B. Goodson, E.S. Weintraub, L.A. Croen, V.L. Hinrichsen, M. Marcy, A. Robertson, E. Eriksen, E. Lewis, P. Bernal, D. Shay, R.L. Davis, and F. DeStefano. 2010. Prenatal and infant exposure to thimerosal from vaccines and immunoglobulins and risk of autism. Pediatrics 126:656–664.

Smeeth, L., C. Cook, E. Fombonne, L. Heavey, L.C. Rodrigues, P.G. Smith, and A.J. Hall. 2004. MMR vaccination and pervasive developmental disorders: a case-control study. Lancet 364:963–969.

Taylor, B., E. Miller, C.P. Farrington, M.C. Petropoulos, I. Favot-Mayaud, J. Li, and P.A. Waight. 1999. Autism and measles, mumps, and rubella vaccine: no epidemiological evidence for a causal association. Lancet 353: 2026–2029.

Taylor, L.E., A.L. Swerdfeger, and G.D. Eslick. 2014. Vaccines are not associated with autism: an evidence-based meta-analysis of case-control and cohort studies. Vaccine 32:3623–3629.

Uchiyama, T., M. Kurosawa, and Y. Inaba. 2007. MMR-vaccine and regression in autism spectrum disorders: negative results presented from Japan. Journal of Autism and Developmental Disorders 37:210–217.

Verstraeten, T., R.L. Davis, F. DeStefano, T.A. Lieu, P.H. Rhodes, S.B. Black, H. Shinefield, and R.T. Chen. 2003. Safety of Thimerosal-Containing Vaccines: A two-phased study of computerized health maintenance organization databases. Pediatrics 112:1039–1048.

List of grant agencies

  • America’s Health Insurance Plans
  • Autism Speaks
  • Centers for Disease Control and Prevention
  • Danish National Research Foundation
  • Harvard Medical School
  • Health Resources and Service Administration
  • Kaiser Permanente Northern California
  • Medicines Control Agency
  • National Alliance for Autism Research
  • National Institute of Mental Health
  • National Vaccine Program Office and National Immunization Program
  • Statens Serum Institute, Copenhagen, Denmark
  • UK Medical Research Council
  • University of California Los Angeles
  • World Health Organization

 

List of author affiliations

  • Abt Associates Inc.
  • Centers for Disease Control and Prevention
  • Danish Epidemiology Science Center
  • Department of Statistics, Open University
  • Group Health Cooperative of Puget Sound, Seattle, Washington
  • Harvard Pilgrim Health Care Institute, Harvard Medical School
  • Health Protection Agency, Communicable Disease Surveillance Centre
  • Immunization Division, Public Health Laboratory Service Communicable Disease Surveillance Center
  • Immunization Safety Office
  • Institute of Psychiatry, Kings College, London
  • Juntendo University School of Medicine
  • Kaiser Permanente Northern California, Oakland, California
  • London School of Hygiene and Tropical Medicine, London, UK.
  • McGill University, Montreal Children’s Hospital, Canada
  • Morbidity and Health Care Team, Office for National Statistics, London, United Kingdom.
  • National Center for Chronic Disease Prevention and Health Promotion
  • Otsuma Women’s University
  • Royal Free Campus, Royal Free and University College Medical School, University College London
  • Statens Serum Institute, Copenhagen, Denmark
  • University of Washington
  • Whiteley-Martin Research Centre, Discipline of Surgery, The University of Sydney, Nepean Hospital
  • Yokohama Psycho-Developmental Clinic

Winning the war with the “Food Babe Army”

Over the past week, something truly amazing has happened on the internet. The skeptic community has risen up and unanimously decried the quackery and fear mongering that constantly spews from Vani Hari’s mouth (and website). Vani Hari, better known as the “Food Babe,” has been scrutinized by scientists and skeptics since her initial internet debut, and, truth be told, it’s not hard to find fault with her arguments. Her entire world-view centers on the notion that chemicals with long, scary-sounding names must be bad for you, and she at least claims to live by the credos that if you can’t pronounce something you shouldn’t eat it and, “there’s just no acceptable level of any chemical to ingest, ever.” The reality is, of course, that such statements ignore high school level chemistry and are nothing more than appeal to nature/appeal to emotion fallacies. All matter is made of chemicals, and the length of a chemical’s name has nothing to do with whether or not it’s toxic. For example, everyone should regularly consume retinol, cyanocobalamin, ascorbic acid, and cholecalciferol. Those names are hard to pronounce and sound frightening (after all, one of them is an acid, so it must be toxic!), but those of us with scientific knowledge (or Google) know that those are simply the chemical names for vitamins A, B12, C, and D.

Nevertheless, skeptics have devoted a considerable amount of effort to debunking her nonsense. Blogs like Science Based Medicine, Neurologicablog, and many others have done a phenomenal job of thoroughly shredding her claims that there’s antifreeze in beer, microwaves destroy the nutritional value of food, Subway is feeding you yoga mats, etc. Meanwhile, the blog Bad Science Debunked has been busy pointing out the fact that she sells numerous products that contain the very same chemicals that she decries!

The problem is that, despite the obvious absurdity of her claims (let’s not forget that she was concerned by the fact that airplanes don’t contain 100% oxygen even though the air you normally breathe is mostly nitrogen), people have flocked to her message of scientific illiteracy. She has amassed a “Food Babe Army” which wages war against any chemical which she finds unappealing. This would all be laughable if it wasn’t for the fact that her army actually accomplishes many of its goals! Most famously, they succeeded in getting both Kraft and Subway to remove chemicals that the Food Babe, armed with her degree from the University of Google, had decided were toxic.

This has concerned me for some time now. Despite the fact that a high schooler could defeat her “science,” her army was growing, and the rational voices of the skeptic community seemed to be falling on deaf ears. All of that may have changed, however, on April 7, 2015. That is the day which, I hope, will be remembered as the turning point in the war with the Food Babe Army, because that is the day when the “Science Babe” took down the “Food Babe” in arguably the most epic rant to date. I am, of course, referring to the Gawker article which quickly went viral. There was a lot of build up to that article (such as the New York Times article which was less than kind to Hari), but (at least in my observations) the Gawker article seems to have been a tipping point. In the few days since that article, I have seen numerous skeptic blogs writing additional takedowns, heaping wood onto the growing fire that is decimating her position. In addition to posts about her scientific illiteracy, others have taken issue with her debate tactics, such as the fact that she dismisses many critics by simply accusing them of being sexist. Other people have been resurrecting some of Hari’s old Twitter posts, such as the patently absurd claim that the flu vaccine was used to carry out genocides.

The Food Babe has, of course, responded to this new wave of criticism, but in many ways her response only weakens her position. Rather than actually deal with the fact that her views aren’t grounded in science, she has resorted to her normal brand of ad hominem name calling and shill accusations. To any unbiased observer, it’s pretty obvious that the Science Babe won this round.

With all of these new posts appearing in my newsfeed, I wondered just how much of an impact all of this was actually having. So, I turned to Google Trends and searched for “Food Babe.” The result was the figure below. That massive spike at the end is entirely from the first 10 days of April. The internet is lighting up with searches about Vani Hari.

This is a screen shot of Google Trends showing searches for “Food Babe.” That last peak is entirely from the first 10 days of April (this image was taken on April 10, 2015)

It has been said that “any press is good press,” but I tend to disagree. I think (or at least hope) that this massive push against Food Babe’s nonsense will have a positive impact. Encouragingly, her Facebook following has actually decreased this week! Granted, it has only been a decrease of 0.2%, but still, a decrease is a decrease, and it’s an encouraging early sign. Nevertheless, I have accepted that we will probably never convince most people to defect from the Food Babe Army. Most of those poor souls have been trapped by Hari’s spell; however, I have great hope for those who have not yet converted to her pseudoscience, and I think that this massive assault on her arguments may go a long way towards stopping the growth of her empire. Her army is never going to go away, and her nonsense will always clutter the internet, but just maybe, we can stop her following from growing, or at the very least, slow it down. To quote one of my heroes (Captain Jean-Luc Picard), “The line must be drawn here. This far, no further!”

Facebook allows page moderators to see basic stats of other pages (such as their percent growth from week to week). This is a screen shot of Food Babe’s percent growth from the previous week, taken on April 11, 2015. You’ll notice that the growth rate is negative (indicated by the color red and the downward arrow). This is the first time that I have ever checked a page’s stats and seen a negative “growth.”

To be clear, I am not encouraging anyone to use ad hominem assaults against Vani Hari. Sexist comments and personal attacks against her are unnecessary and inappropriate. Rather, I am simply trying to express my undying gratitude to the blogging warriors who have done battle with the Food Babe Army for the past several years. Thanks to their efforts, we have finally arrived at a point where I am optimistic that we might actually have a legitimate chance of stopping the spread of her quackery. So, I encourage anyone reading this not to let up. We need to keep pointing out her many errors until everyone comes to the realization that her arguments are based on fear, not facts. This is not a war about what’s in our food, it’s a war about whether we are going to accept science or fall prey to fear mongering.

Addendum (April 12, 2015): Based on comments that people have been making in various places on the internet, I think I should clarify the point that I am trying to make. I am not suggesting that the war has been won or even that we are definitely winning the war at the moment. Rather, I am saying that we have a chance to make this the turning point. Right now, there is widespread interest in Vani Hari, which means that it is more important to point out her errors now than it ever has been before. As I said, we are never going to get all of Hari’s current followers to abandon ship, but I think that we have a reasonable chance of stopping a lot of people from joining her cause. We will never make her go away, but we can stop her influence from spreading beyond her current following.


Evolution doesn’t give organisms what they need

Natural selection can only act on the genetic materials that are available to it, and this rarely results in the trait that is actually needed. Image via Berkeley.edu

When I am teaching evolution, I often like to ask my students to think about why a particular feature or trait evolved. I might, for example, ask them why flying squirrels evolved a flap of skin between their legs that allows them to glide between trees, and when I ask questions like this, I frequently get responses along the lines of, “they needed that trait.” This is a very common misconception about evolution. Although most people realize that there isn’t a conscious entity controlling evolution, they nevertheless think that natural selection provides organisms with the traits that they need. This misconception can even lead to fallacious arguments such as, “if evolution is true, why hasn’t a perfect organism evolved?” The reality is that natural selection is constrained by the genetic variation that is currently in a population. Thus, it can only select among the available traits, rather than giving an organism what it truly needs. Therefore, we do not expect evolution to ever produce a perfect organism.

Note: in this post I am using the term “evolution” to refer specifically to evolution via natural selection.

There are several important points for understanding why evolution doesn’t give an organism what it needs. First, we have to realize that evolution is blind. I explained this in detail in a previous post, but to put it simply, evolution has no foresight or goal that it is working towards. It cannot predict the traits that will be useful to an organism in the future. Thus, even if evolution could give an organism what it needs right now, that would not necessarily be what it needs several generations from now.

The second important point is simply that natural selection is limited to the variation that it has available to it. Let’s think about the flying squirrel example for a second. We can think of lots of traits that would be very useful to that animal. For example, it would surely be useful for the squirrels to be able to propel themselves at rocket speeds with explosive flatulence, but natural selection cannot make that happen unless the genes for explosive flatulence are available in the population. More realistically (although less amusingly), small, sharp quills in their fur would be very useful for warding off predators. Similarly, a musk gland like a skunk’s would have great benefit, but squirrels don’t have that weaponry because they don’t have the genes for it. Natural selection can’t make a trait out of nothing. It has to work with the genetic building materials that are available to it, and this means that we will probably never see a gas powered squirrel rocketing through the trees (too bad, that would be neat).

Fortunately, there is a mechanism for making new genetic information. I am of course referring to mutations. Natural selection actually removes variation from a population. So if natural selection was allowed to run its course without any interference, you would eventually get a population of clones. Thus, mutations are very important because they add new genes to a population. In other words, they give natural selection new material to work with. So, although squirrels do not currently have the genes for quills or a powerful skunk-like musk gland, they could someday have those features if the right mutation comes along. The catch is that mutations are totally random. Most of them are neutral (no harm or benefit), a few are harmful, and a few are beneficial, but exactly which ones you get is completely random. So, it doesn’t matter how badly you need a new trait. Whether or not you get the particular mutation that you need is pure dumb luck, and the fact that you need a trait has no effect on the likelihood that a mutation will provide that trait for you.

Finally, it’s important to realize that evolution is often a trade-off between traits. It operates on simple cost/benefit ratios, and anytime that the benefit of a trait outweighs the cost, natural selection will favor that trait. Human backs are a phenomenal example of this. The human back is an absolutely horrible design for supporting upright weight. It’s just terrible. That’s part of why so many people develop back problems. So why do we have such horrendous backs? Well, we evolved from quadrupeds, and the basic structure of our back isn’t half bad for walking on all fours. We evolved to be bipedal, however, because being bipedal has lots of advantages. Most notably, having hands rocks! By walking upright, our ancestors could use their hands while they were moving. They could pick fruit on the go, carry large prey back to camp, use weapons, collect building materials for making shelter, etc. All of these abilities were extremely beneficial to our ancestors. Therefore, the benefits of being bipedal outweighed the costs of having a bad back. Also, notice that evolution couldn’t give us the back that we needed because it was constrained to the building materials available to it. All it could do was modify a back that had evolved for being a quadruped. It couldn’t make an entirely new back from scratch.

Now, some people may still wonder why evolution hasn’t made a better back than what we currently have, and there are several answers to that. First, we again have to consider the limitations of evolution. Natural selection can only improve our backs if it has the genetic material necessary to do so. So we are at the whim of random mutations. Also, selection pressure has a big role here. Natural selection only favors a trait when it affects reproductive ability, but back problems often arise after the key child bearing years. This would give it a fairly low selection pressure because nature doesn’t care how long you live once you’ve had all of your children. In other words, longevity is selected for only if it increases the number of genes that you get into the next generation. Finally, we have only relatively recently expanded the average human life expectancy beyond around 35, and many back problems don’t arise until later than that. So if most people die before back problems develop, then there is nothing for nature to select. Also, because humans are social, even in a pre-industrialized society many people with back problems would still be able to live and have children. So, again, there wouldn’t be much for nature to act on.

In summary, evolution doesn’t grant species what they need. It has no foresight, it can only act on the genetic material that is available to it, and it relies entirely on random mutations to provide new building materials. This results in modifications to existing traits, which often involves some form of a trade-off, and rarely produces the trait that is truly needed.


Basic Statistics Part 3: The Dangers of Large Data Sets: A Tale of P values, Error Rates, and Bonferroni Corrections

In previous posts, I have explained the importance of having lots of data, but what I failed to mention were the dangers of analyzing these large data sets. You see, all real data have variation in them, and when you have a very large data set, you can usually subset it enough that eventually you find a subset that, just by chance, fits your preconceived view. Sometimes, these erroneous results arise as a deliberate form of academic dishonesty, but other times they come from honest mistakes. Regardless of their origin, they present a very serious problem because to the untrained eye (and sometimes even to the trained eye), they seem to show scientific evidence for truly absurd positions, and an enormous number of the studies and “facts” that anti-scientists cite are actually the result of this illegitimate sub-setting of large data sets. Therefore, I want to explain how and why these erroneous results arise, how scientists deal with them, and how you can watch for them so that you are not duped by research which appears to carry all the hallmarks of good science.

Note: I have split this post up into two sections. The first just explains the phenomenon without going into the actual math or the gritty details of what is going on. The second half, called “Technical Notes,” explains the math behind this problem. I encourage everyone to read both sections, but you can get the basic idea just by reading the first section.

I want to illustrate how this works using a totally fictitious data set. To set up this example, let’s suppose that I was working on a particular species of frog, and I wanted to know how body size affected clutch size (i.e., the number of eggs that a female lays). I examined this by measuring female frogs and their clutch sizes at 30 populations (assume that I used proper controls so that any correlation would actually show causation). Now, remember that this is a fictional data set, so to actually generate it, I used the statistical program R to generate 30 sets of random data. Each set contained the body size and clutch size for 50 individuals. For body size, the computer randomly selected a number between 1 and 1,000 (inclusive), and for clutch size it selected a random number between 1 and 500 (inclusive). So I got “measurements” from a total of 1,500 “individuals.” When I put all of those individuals together, I got the figure below.

Figure 1: A comparison of body size and clutch size for my fictional data set (data were randomly selected). As expected, there is no relationship between our variables. You can see this in the flat trend line.

As we would expect, there is no relationship between body size and the number of eggs that a female laid. This is what we should find since the data came from a random number generator. In this case, it is pretty obvious to just look at the trend line and see that there is no relationship, but scientists like to actually test things statistically because statistics give us an objective way to determine whether or not there is a relationship. For these data, the appropriate test is what is known as a Spearman rank test. Running this test produces a P value of 0.7615. I explain P values more in the Technical Notes section, but for now, just realize that for most scientific studies, anything less than 0.05 is considered significant. So when you see a scientific study state that it found a “significant difference” or “significant relationship” it usually means that the P value was less than 0.05.
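If you want to play along at home, the setup above can be sketched in Python (the original data were generated in R, but numpy and scipy reproduce the same idea; the seed here is arbitrary, so your exact numbers will differ from mine):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(seed=42)  # fixed seed so the sketch is reproducible

# 30 populations of 50 frogs each: body size 1-1000, clutch size 1-500,
# all drawn completely at random (note: integers() excludes the upper bound)
n_pops, n_per_pop = 30, 50
body = rng.integers(1, 1001, size=(n_pops, n_per_pop))
clutch = rng.integers(1, 501, size=(n_pops, n_per_pop))

# Pool all 1,500 "individuals" and run a Spearman rank test on the lot
rho, p = spearmanr(body.ravel(), clutch.ravel())
# With purely random data, rho sits near 0, and p will usually be well
# above 0.05 -- i.e., no significant relationship overall
```

With 1,500 random points, the pooled correlation is essentially guaranteed to be tiny, which is the full-data-set result the figure above illustrates.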

So far, everything is as it should be: there is no significant relationship. However, watch what happens when I subset the data. This time, I’m going to display each population separately.

Figure 2: These are the same data as figure 1, but the data have been subset by population so that each population is now shown separately (note: the data are fictional and were randomly generated).

That’s obviously a bit of a mess to look at, but a few of those lines appear to be showing a relationship, and when we run a Spearman rank test on each subset, we find that populations 2, 19, and 23 all had a significant positive relationship between frog size and clutch size!

Figure 3: This just shows the data for populations 2, 19, and 23. All three of them had significant positive relationships. (note: the data are fictional and were randomly generated)

Table 1: P values for all 30 fictional populations. Anything less than 0.05 was considered statistically significant. The three significant populations have been highlighted.

Unless you already have a good understanding of probability theory, this should surprise, shock, and disturb you. We know that there is not actually a relationship between these two variables because I randomly selected the numbers, yet three of our populations are showing significant relationships! What is going on here? Are all statistics part of a massive lie by the government/big pharma/Monsanto designed to deceive us into allowing them to poison the food supply, fill our children with toxins, neuter our pets, and inject us with mind controlling chemicals!? Hardly.

The reality is that this is a well-known statistical fluke known as a type 1 error. This type of error occurs anytime that you incorrectly conclude that there is either a relationship between variables or a difference between groups (see Technical Notes for details). This is simply an unavoidable byproduct of the law of large numbers. Because of natural variation in data, you will inevitably get erroneous results from time to time. Fortunately, there are ways to control this type of error when you are working with subgroups of large data sets (see Technical Notes). The problem is that sometimes studies that did not use these controls manage to make it through the peer-review process, and the media and general public have a panic over “results” that are actually just statistical outliers.
The breeding habits of frogs rarely make the news, so to illustrate this problem, imagine instead that my data set was looking at the side effects of a vaccine, or, perhaps it was looking for relationships between GMO consumption and neurological problems among different age groups. You can no doubt envision the headlines, “GMOs are dangerous for 10-15 year olds,” “New study proves that GMOs are dangerous,” etc. The problem is that, in this example, we know that the results are spurious. They are just from random chance, but to the untrained eye, they appear to show significant relationships for certain subgroups.
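The per-population analysis can be sketched the same way: test each of the 30 random populations separately and count how many come out “significant” (again a rough Python sketch of what I did in R; which populations cross the line depends on the seed):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(seed=42)

# Run a separate Spearman test on each of 30 purely random populations
pvals = []
for _ in range(30):
    body = rng.integers(1, 1001, size=50)    # body sizes, 1-1000
    clutch = rng.integers(1, 501, size=50)   # clutch sizes, 1-500
    pvals.append(spearmanr(body, clutch).pvalue)

# Even though every population is pure noise, on average about 1-2 of the
# 30 p values will fall below 0.05 just by chance (30 * 0.05 = 1.5 expected)
n_significant = sum(p < 0.05 for p in pvals)
```

This is the whole trick in four lines: run enough tests on noise and some of them will look like discoveries.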

My data set is fictional, but this happens with real data and it causes all manner of problems. A great example of this occurred last year in a highly publicized paper called, “Measles-mumps-rubella vaccination timing and autism among young African American boys: a reanalysis of CDC data.” The headlines from the media and anti-vaccers were predictable, “New Study Finds A 340 Percent Increased Risk Of Autism In Boys Who Receive MMR Vaccine On Time” (from Collective-evolution), “MMR vaccines cause 340% increased risk of autism in African American infants” (from Natural News), etc. When we look at the actual study though, we find that it is fraught with problems. The one that I want to focus on is sub-setting. The authors took a data set of several hundred children, then subset it by race, then subset those subsets by sex, then subset those subsets by age at injection. So we now have lots of subsets, and out of all of those subsets only the group of 36 month old African American males was significant, and the authors failed to control their type 1 error rate. Is the problem with this approach obvious? It should be. This is exactly the same thing that happened with my fictional frog data. Once you subset the data enough, you will eventually find some tiny subset that just by chance appears to support your position.

Further, even if this result wasn’t simply a statistical anomaly, the “news” articles about it are still a clear example of a sharpshooter fallacy, because 36 month old African American boys was the only group that showed a significant relationship. The headlines should have said, “MMR vaccine safe for everyone except 36 month old African American males.” Males of all other races = no relationship. African American males of other age groups = no relationship. African American girls of any age group = no relationship, etc. This one, tiny subgroup is the only group with a significant difference, making this a classic type 1 error. Fortunately, I was not the only one who could spot the problem with this paper, and multiple scientists quickly pointed out its numerous flaws, ultimately resulting in a speedy retraction by the journal.

Now that you understand how this works, you should be able to spot this problem in lots of pseudoscientific papers. For example, many of the papers that I have read on homeopathy used lots of different measurements for the exact same treatment without controlling the type 1 error rate, and out of the 20 types of measurements that they used, one happened to show a significant improvement. When you get results like that, you shouldn’t conclude that homeopathy works. Rather, you must acknowledge that 19 out of 20 measurements showed no improvement, therefore, the one that did show an improvement was probably a statistical outlier.

Before delving into some fun mathematical details, I want to discuss one last case where this occurs. Everything that I have talked about so far has been either a deliberate manipulation of the data, or a mistake by the researchers in which they did not use the correct statistical methods. There is, however, another way that you get these spurious results without any mistakes or deception by the researchers. Let’s go back to my frog populations again, but this time, imagine that instead of studying all 30 populations, I just studied population #2 (assume that I got the same data as I did in my simulation). There was no dishonest or biased reason that I chose that population. It was just the one that was available for me to examine. Studying that population would, however, give me an erroneous, significant result, but because that was the only population that I studied, I would have no way of knowing that the result was incorrect. Even if I did everything correctly, designed a perfect project, and used the appropriate statistics, I would still get and publish a false result without ever knowing that I was doing so.

It’s important to understand this because this problem happens in real research. Papers get published by good, honest researchers saying that a drug works or a treatment is dangerous when, in fact, the results were simply from statistical anomalies. This is one of the key reasons that scientists try to replicate each other’s research. Suppose, for example, that 29 other researchers decided to try to replicate my results by studying the other 29 frog populations. Now, we would have 30 papers, 3 of which say that body size affects clutch size and 27 that say that there is no relationship. This lets us do something called a meta-analysis. This is an extremely powerful tool where you take the data from several studies and run your analyses across the whole data set. You can think of this like taking all the individual populations of frogs (Figure 2) and combining them into one massive population (Figure 1; the math is actually significantly more complicated than that, but that’s the basic idea). This method is great because it lets us see the central trends, rather than the statistical outliers. Also, remember that the law of large numbers tells us that the calculated value of large data sets should be very close to the true value. In other words, large data sets should give us very accurate results.

One of my favorite meta-analyses was published last year, and looked at the relationship between the MMR vaccine and neurological problems like autism. I cite this study a lot on this blog, but that is because it provides such powerful evidence that vaccines are safe. It combined the results of 10 different safety trials, giving it an enormous sample size of over 1.2 million children (feel free to rub that in the face of any anti-vaccer who says that the safety of vaccines hasn’t been tested). With that large of a sample size we expect to see true results, not statistical outliers. So the fact that it found no relationships between vaccines and autism is extremely powerful evidence that vaccines do not cause autism.

You can find these meta-analyses for lots of different topics, and they are crucially important because they have such large sample sizes. You can, for example, find scattered papers from individual trials that found significant improvements using homeopathy, but you can also find plenty of papers showing that homeopathy did not work for many trials, and it can be difficult (especially as a layperson) to wade through that flood of information and determine what is actually going on and which studies you should trust. That’s the beauty of meta-analyses: they comb through the literature for you and give you the central trends of the data. To be clear, I am not advocating that you blindly accept the results of meta-analyses. You still need to carefully and rationally examine them, just like you do every other paper, but they often do a much better job of presenting true results.

In conclusion, I would like to give several pieces of advice to those of you who are interested in truly informing yourself about scientific topics (as I hope you all are). First, take the time to learn at least the basics of statistics. Learn how the tests work, when to use them, and how to interpret their results. I personally recommend studying the Handbook of Biological Statistics. It’s a great website that does a fantastic job of introducing a lot of statistical tests in a way that most people can understand. Second, watch out for these type 1 errors. Learn the methods that we use to control them, and when you’re reading the literature, make sure that the authors used those controls. Finally, look for the meta-analyses. They are one of our most powerful tools, and they can really help you sort through the tangled mess that is academic literature.

Technical Notes
P-values and alphas
In statistics, you are generally testing two hypotheses: the null hypothesis and the alternative hypothesis. The null hypothesis usually states either that there is no association between the variables you are testing or that there are no differences between your groups. The alternative hypothesis states the opposite. It states that there is a significant relationship/difference. The P value is the probability of getting a result at least as extreme as the one you got if the null hypothesis is actually true. So if you are comparing the average value of two groups (group 1 mean = 120, group 2 mean = 130), and you get a P value of 0.8, that means that if there is no real difference between those groups (i.e., the null hypothesis is true), then, just by chance, you should get a difference at least that large 80% of the time. So most likely, your “difference” of 10 is just a result of chance variation in the data. In contrast, if you get a P value of 0.0001, that means that if the null hypothesis is actually true, you should only get a result that extreme 0.01% of the time. This makes it very likely that your difference is a true difference, rather than just a statistical anomaly. Thus, the smaller your P value, the more confidence you have in your results.
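To make this concrete, here is a small hypothetical example using a standard two-sample t-test (the group values are invented for illustration; only the means of 120 and 130 come from the text):

```python
from scipy.stats import ttest_ind

# Two made-up groups with means of 120 and 130, as in the text.
# Here the spread within each group is small, so a 10-point difference
# in means is very unlikely under the null hypothesis.
group1 = [118, 121, 119, 122, 120]   # mean = 120
group2 = [128, 131, 129, 132, 130]   # mean = 130
t_stat, p = ttest_ind(group1, group2)
# p comes out tiny, so we would reject the null hypothesis

# With wide, overlapping groups, the same 10-point difference in means
# gives a large p value, and we would fail to reject the null
noisy1 = [80, 160, 100, 140, 120]    # mean = 120, but very spread out
noisy2 = [90, 170, 110, 150, 130]    # mean = 130, but very spread out
t_stat2, p2 = ttest_ind(noisy1, noisy2)
```

The point of the comparison: the same difference in means can be convincing or meaningless depending on how much chance variation is in the data, and the P value is what quantifies that.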

It’s important to note that you can never prove anything with statistics. You can show that there is only a 0.0000000000000000000000000000000000000000000000000001% chance of getting your result if the null is true, but you can never prove anything with 100% certainty.

To objectively determine whether or not there is a significant relationship/difference, you compare your P value to a predetermined significance value called alpha. Generally speaking, you use an alpha of 0.05, but sometimes other alphas are used, especially 0.01. So, if your calculated P value is less than your alpha, you reject the null hypothesis. In other words, you conclude that your result is statistically significant. Whereas if your P value is equal to or greater than your alpha, you fail to reject the null hypothesis (this is not the same thing as accepting the null, but that’s a conversation for another post). Importantly, your alpha must be determined ahead of time, and you are not allowed to cheat. If your alpha is 0.05 and your P value is 0.051, you cannot claim significance or talk about your results as if they were significant. This is, however, another case where meta-analyses come in handy. Sometimes, there is a significant result, but your sample size was too small to detect it, so by combining your data with the data from other studies, you can boost the sample size and reveal a trend that was not visible just by looking at your data.

Type 1 and type 2 errors
A type 1 error occurs anytime that you reject the null hypothesis when the null hypothesis was actually correct. Remember, P values are the probability of getting a result at least as extreme as yours if the null hypothesis is actually true. So, if you have an alpha of 0.05, then a P value of 0.049 will be significant, but you should get a result at least that extreme just by chance 4.9% of the time. If you think back to my fictional frog data, this is why some of the populations were significant. Because I had 30 populations, I expected that just by chance some of them would happen to have body sizes that produced a P value less than 0.05.
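You can verify this for yourself with a quick simulation: run a large number of tests on pure noise and count how often the P value falls below 0.05 (a rough Python sketch; the seed and trial count are arbitrary):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(seed=0)

# Run 2,000 Spearman tests on pure noise and count the false positives
n_trials = 2000
false_positives = 0
for _ in range(n_trials):
    x = rng.normal(size=50)  # 50 random "measurements" with no relationship
    y = rng.normal(size=50)
    if spearmanr(x, y).pvalue < 0.05:
        false_positives += 1

rate = false_positives / n_trials
# rate lands close to 0.05: with alpha = 0.05, roughly 1 test in 20 comes
# out "significant" even though nothing real is there
```

This is the type 1 error rate in action: it is not a flaw in any one test, it is a built-in property of testing at alpha = 0.05.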

Now, you may be thinking, “well why not just make the alpha really tiny, that way you almost never have a type 1 error.” The problem is that then you get a type 2 error. This occurs when you fail to reject the null hypothesis, but the null hypothesis was actually incorrect. In other words, there was a significant relationship, but you didn’t detect it. So if, for example, we used an alpha of 0.00001, we would have an extremely low type 1 error rate, but we would almost never have any significant results. In other words, almost all of our studies would be type 2 errors. So the alpha level is a balance between type 1 and type 2 errors. If it’s too large, then you have too many type 1 errors, but if it is too small, then you have too many type 2 errors.

Family-wise type 1 error rates
The family-wise type 1 error rate is basically what this whole post has been about. When you are testing the same thing multiple times, your actual type 1 error rate is not the standard alpha. Think about it this way: if you did 100 identical experiments and the null hypothesis was true, we would expect, just by chance, that roughly five of the experiments would have a P value below 0.05. This is, again, what happened with the frog data. So you need a new alpha level that accounts for the fact that you are doing multiple tests on the same question. This new alpha is your family-wise type 1 error rate.

There are several ways to calculate the modified error rates, but one of the most popular is the Bonferroni correction. To do this, you take the alpha that you actually want (usually 0.05) and divide it by the number of tests you are doing. This gives you roughly a 5% chance that at least one of your tests will appear significant if the null hypothesis is correct. So, if we apply this to my fictional frog data, I had 30 populations, so my corrected alpha is 0.05/30 = 0.00167. So, for any of my populations to show a significant relationship, they must have a P value less than 0.00167. My lowest P value was, however, 0.0174. So, now that we have done the correct analysis and properly controlled the family-wise error rate, we can see that there are no significant relationships between body size and clutch size, which is, of course, what we should see since these data were randomly generated. This is why it is so important to understand these statistics: if you don’t control your family-wise error rate, you are going to get erroneous results.
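The correction itself is a one-liner; here is a minimal sketch (the helper function is my own, written for illustration, not from any statistics library):

```python
def bonferroni_significant(pvals, alpha=0.05):
    """Plain Bonferroni: compare every p value to alpha divided by the
    number of tests; return True where a test survives the correction."""
    corrected_alpha = alpha / len(pvals)
    return [p < corrected_alpha for p in pvals]

# The frog example: 30 tests, so the corrected alpha is 0.05/30 = 0.00167.
# The lowest p value in the fictional data was 0.0174, which fails that
# bar, so none of the 30 populations is actually significant.
example_pvals = [0.0174] + [0.5] * 29   # lowest p first, 29 larger p values
results = bonferroni_significant(example_pvals)
```

Notice that without the correction, 0.0174 would have looked like a solid result; with it, the random data correctly come out empty-handed.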

It’s worth mentioning that the standard Bonferroni correction tends to slightly inflate the type 2 error rate, so I (and many other researchers) prefer the sequential Bonferroni correction (also known as the Holm-Bonferroni method). This works the same basic way but with one important difference: rather than comparing all of your P values to alpha/# of tests, you compare the lowest P value to alpha/# of tests, the second lowest P value to alpha/(# of tests – 1), the third lowest P value to alpha/(# of tests – 2), etc. You keep doing this until you reach a P value that is greater than or equal to the alpha you are comparing it to. At that point, you conclude that that test and all subsequent tests are not significant. So, for my frog data, we are done after the first comparison because even my lowest P value was greater than 0.00167, but let’s suppose instead that my four lowest P values were: 0.0001, 0.0002, 0.0016, and 0.003. First, we compare 0.0001 to 0.05/30 = 0.00167. Since 0.0001 is less than 0.00167, we reject the null. Next, we compare 0.0002 to 0.05/29 = 0.00172. Again, reject the null. Next, we compare 0.0016 to 0.05/28 = 0.00179. Again, reject the null. Then, we compare 0.003 to 0.05/27 = 0.00185. Now, our P value is greater than our alpha, so we fail to reject the null for this test and for all 26 other tests that produced larger P values.
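The sequential procedure just described can be sketched in a few lines of Python. The four lowest P values come from the worked example above; the 26 padded values of 0.5 are hypothetical stand-ins for the remaining tests:

```python
def sequential_bonferroni(p_values, alpha=0.05):
    # Holm's sequential procedure: test P values from smallest to largest
    # against alpha/m, alpha/(m-1), ..., stopping at the first failure
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    significant = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] < alpha / (m - rank):
            significant[i] = True
        else:
            break  # this test and every larger P value are non-significant
    return significant

# The four lowest P values from the worked example, padded with 26
# hypothetical larger P values to make up the full 30 tests
p_vals = [0.0001, 0.0002, 0.0016, 0.003] + [0.5] * 26
flags = sequential_bonferroni(p_vals)
print(sum(flags), "of", len(p_vals), "tests significant")
# → 3 of 30 tests significant
```

As in the walkthrough, the first three tests survive their thresholds, the fourth (0.003 vs. 0.00185) fails, and everything after it is declared non-significant without further comparison.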

A final way to deal with this problem of multiple comparisons (and really the best way) is to design your experiment such that you can use a statistical test that incorporates your subsets into the model and accounts for the fact that you are making several comparisons. Entire books have been written on this topic, so I won’t go into details, but to put it simply, some tests (like the ever popular ANOVA) allow you to enter different groups as factors in your analysis. Thus, the test makes the comparisons while controlling the error rate for you. Again, I want to impress upon you that if you want to be able to understand scientific results, you need to at least learn the most common statistical tests and how and when they are used. This is fundamental to understanding modern science. If you don’t understand statistics, you’re not going to be able to understand scientific papers, at least not in a way that lets you objectively assess the authors’ conclusions.
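As a rough illustration of the idea (not the full machinery a real ANOVA provides — an actual analysis would also derive a P value from the F distribution), here is a minimal pure-Python sketch of the one-way ANOVA F statistic, using made-up clutch-size samples from three hypothetical frog populations:

```python
from statistics import mean

def one_way_anova_F(groups):
    # F statistic for a one-way ANOVA: a single omnibus test across
    # all groups, rather than many separate pairwise comparisons
    k = len(groups)                        # number of groups
    n = sum(len(g) for g in groups)        # total number of observations
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)      # between-group mean square, df1 = k - 1
    ms_within = ss_within / (n - k)        # within-group mean square, df2 = n - k
    return ms_between / ms_within

# Made-up clutch-size samples from three hypothetical frog populations
groups = [[4, 5, 6, 5], [7, 8, 9, 8], [5, 6, 7, 6]]
F = one_way_anova_F(groups)
print(f"F = {F:.2f} with df = ({len(groups) - 1}, {sum(map(len, groups)) - len(groups)})")
# → F = 14.00 with df = (2, 9)
```

The larger the F, the stronger the evidence that at least one group mean differs from the others; in practice you would let statistical software compare F against the appropriate F distribution and control the error rate for you, which is exactly the advantage described above.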


Posted in Nature of Science | 2 Comments

What does it mean to be a skeptic?

It is good to be skeptical about everything that you hear and read. In fact, skepticism is one of the defining characteristics of a scientist. Nevertheless, terms like “skeptic” and “open-minded” are often misappropriated by people in the anti-science movement, and many of the most biased people on the planet are under the delusion that they are skeptical. Climate change deniers, for example, often refer to themselves as, “climate change skeptics,” and it is rare to have a conversation with anti-vaccers without them referring to pro-vaccers as “sheeple.” Therefore, I want to briefly examine what it actually means to be a skeptic.

First, I want to clear up a common misconception. Many people seem to be under the impression that being a skeptic means going against the mainstream view. Thus, anti-vaccers consider themselves to be “thinking parents,” while viewing everyone else as “sheeple.” There is, however, nothing in the definition of “skeptic” that requires you to reject a scientific consensus. You are welcome to accept a consensus as long as that consensus formed as a result of strong, scientific evidence.

Having cleared up that misconception, let’s move on to the definition of a skeptic. There are basically two parts to being a skeptic. First, a true skeptic does not accept something or commit to a position unless there is sufficient evidence for that position. In other words, a skeptic questions what he/she is told and doesn’t accept anything until they have carefully studied the issue and examined the available evidence. Importantly, you must use good sources when fact checking. So, for example, reading Natural News does not constitute examining the evidence. Rather, you need to look at the original, peer-reviewed research.

This first requirement of skepticism may sound simple, but it is something that most people struggle with (including people who strongly support science). It is very easy and tempting to quickly latch onto some new study that seems to support your position, but it is crucially important that you avoid this trap. You must always carefully examine the evidence regardless of whether or not it supports your position. This is, in fact, one of the most important things that students of science get taught in graduate school. The peer-review system works well, but it is far from perfect, and sometimes bad research does get published. Therefore, you can never assume that something is true, and you must rigorously and carefully examine everything before accepting or rejecting it.

The second prerequisite for skepticism is being open-minded. This simply means that you are willing to change your position if you are shown evidence to the contrary. The term “open-minded” has, however, been stolen and perverted by the anti-science movement. On numerous occasions, I have had people tell me that I need to “open my mind about alternative medicines.” The reality is that I am completely open to the possibility that alternative medicines work, but I’m not going to accept that they work until they have passed rigorous scientific testing. That’s not being close-minded, that’s being skeptical. Similarly, I have had multiple anti-vaccers tell me that my training in the sciences has made me “close-minded.” When I pressed these people for what they meant, they explained that I was being close-minded by demanding scientific studies and refusing to accept anecdotal evidence. Think about this for a second. According to them, those of us who argue in favor of science are close-minded because we demand scientific evidence for a debate about science. Further, these same people will usually proudly proclaim that nothing will ever change their minds, which is, of course, the very definition of being close-minded.

Accepting something without sufficient evidence is not being open-minded, it’s being gullible. It is not, for example, open-minded to use anecdotal evidence to arrive at the conclusion that vaccines cause autism. Rather, someone who is open-minded would reject those anecdotal reports in favor of the large, carefully controlled studies which clearly show that vaccines do not cause autism. It is important to note, however, that nothing in science is ever 100% certain. Thus, being truly open-minded means that you are always willing to consider the possibility that you might be wrong no matter how clear the data currently seem. So, for example, if in the future a large, well-designed, carefully controlled study is published showing that vaccines do cause autism, and the results of that study are replicated by other researchers, I promise you that skeptics around the world (myself included) will write and prominently display posts admitting that we were wrong about the relationship between vaccines and autism. That is what it truly means to be open-minded. It means that you are willing to change your view when presented with good evidence. It does not mean that you are willing to blindly accept something despite a lack of evidence.

In summary, a skeptic is simply someone who demands good evidence before accepting something and is willing to change their view when it conflicts with the evidence. These requirements are easy to say, but often hard to follow. Nevertheless, everyone should strive to be a skeptic. You should use good sources, question your assumptions, demand evidence, beware of cognitive biases, and above all else, never hold any position so dearly that you are not willing to challenge it.

Posted in Uncategorized | Comments Off on What does it mean to be a skeptic?