The existence of real conspiracies does not justify conspiracy theories

Most science deniers are conspiracy theorists. Many of them don’t like to think of themselves as conspiracy theorists and would even ardently deny that they deserve that label, yet when you present them with peer-reviewed evidence for anthropogenic climate change, the safety of vaccines, the safety of GMOs, etc., they almost invariably respond by asserting that those studies aren’t valid evidence because vast corporations, governments, etc. have bought off all of the scientists, doctors, and regulatory bodies. That claim has no evidence to support it and is a textbook example of a conspiracy theory.

As a result of the conspiratorial nature of science denial, conspiracy theories are a frequent target of my blog/Facebook page, but almost any time that I post about the illogical nature of conspiracy theories, I get irate responses from people who insist that conspiracy theories are not inherently illogical, based on one of two lines of reasoning. Either they point out that real conspiracies do exist and have been discovered, or they cite agencies like the FBI that investigate criminal conspiracies and incorrectly assert that I am suggesting that those agencies are inherently irrational (i.e., that FBI agents are conspiracy theorists). Both of these arguments are wrong and rely on infuriating semantic games rather than actual facts or logic, but I encounter them frequently enough that I want to spend a few minutes explaining the problems with them.

Both of these lines of reasoning rely on conflating real conspiracies with conspiracy theories, but that is semantic tomfoolery. The term “conspiracy theory” generally refers to self-reinforcing ideas that rely on inserting assumptions into gaps in our knowledge, have little or no actual evidence, and conveniently excuse any evidence that is presented against them. That is a very different thing from the type of investigation that an organization like the FBI does, and we don’t use the term “conspiracy theory” to refer to that type of investigation.

Let me give an example to illustrate this. On many occasions, I have shown a science denier a peer-reviewed paper that discredits their view, only to have them claim that the authors were paid off. Then, I showed them the conflicts of interest section of the paper, which clearly stated that no conflicts of interest existed. At that point, they, of course, said that the payment was secret and, therefore, not reported. When I asked them for evidence to support that claim, however, they couldn’t provide it. It was an assumption that they were making simply because the evidence did not fit their view. This is how conspiracy theories operate. Any evidence that conflicts with the theory is explained away as part of the conspiracy, and a lack of evidence to support claims is also justified as simply being part of the conspiracy (e.g., claiming that large corporations are silencing scientists and preventing them from publishing).

Now, let’s contrast that with actual investigations of actual conspiracies. Imagine, for example, that the FBI was investigating corruption, found no evidence of it existing (i.e., no conflicts of interest), but ignored that lack of evidence and assumed that it was simply part of the conspiracy. Obviously, that would be a really bad investigation, and they could never get a conviction out of it. If they failed to find evidence to support what they thought was true, they would have to move on. They couldn’t just ignore any evidence that disagreed with them. To put that another way, agencies like the FBI rely on evidence, not conjecture, when conducting their investigations, and their hypotheses change as new evidence arises.

Do you see the difference? Conspiracy theories conveniently excuse contrary evidence by writing it off as part of the conspiracy, whereas real investigations are based on the available evidence and don’t blindly ignore any evidence that disagrees with them. Having said that, real conspiracies certainly do exist, and many of them have been uncovered, but the fact that real conspiracies exist does not mean that your conspiracy theory is logical or justified.

To give an example of why the existence of real conspiracies doesn’t justify conspiracy theories, imagine that I have a friend named Bob, and for one reason or another, I decided that Bob was a murderer. I didn’t have any real evidence, but I “just knew” that I was right. Now, imagine that you confronted me about this and demanded evidence for my claims, and I responded by saying, “lots of real murderers have been caught, it happens all the time; therefore, it is rational for me to think that Bob is a murderer.” Would my reasoning be correct? Obviously not. The fact that there are murderers does not in any way, shape, or form make it rational to think that Bob is a murderer. I need actual evidence specifically showing that Bob murdered someone. The same thing is true with conspiracies. The fact that real ones exist doesn’t mean that your theory is justified. You need actual evidence for your view to be rational.

Now, at this point, conspiracy theorists will inevitably protest and claim that they do, in fact, have evidence. However, every time that I have ever asked to see that evidence, I have been sorely disappointed. Inevitably, the “evidence” takes the form of blogs, YouTube videos, and baseless conjecture, often espousing ideas that have been thoroughly investigated and debunked. For example, I still encounter people who cite “climategate” as evidence that climatologists are involved in a conspiracy, despite the fact that multiple independent and well-respected scientific bodies examined the situation and concluded that no wrongdoing or data manipulation had occurred. Of course, the conspiracy theorists inevitably respond by asserting that those scientific bodies are also part of the conspiracy, but that just illustrates my point. Conspiracy theories are irrational precisely because they twist any evidence to fit the conspiracy. Think about it: what could you possibly show a conspiracy theorist to convince them that the theory was wrong? Nothing, because no matter what evidence you show them, they will argue that the evidence is also part of the conspiracy.

My point in all of this is really simple. The term “conspiracy theory” specifically refers to imagined conspiracies that have no real evidence to support them and inherently rely on making assumptions to fill gaps in knowledge, rather than actually basing views on the available evidence. The existence of real conspiracies does not justify these conspiracy theories, nor should you play semantic games to try to equate conspiracy theorists with evidence-based investigative bodies like the FBI. So please, if you see a post about conspiracy theories, spare us all from your pedantry.

Note: I want to be clear that the use of the word “theory” in the term “conspiracy theory” is very different from its use in science (it is far more similar to its use in the term “movie theory”). In science, a theory is an explanatory framework that has been rigorously tested and shown to have a high predictive power. It is not simply a guess, nor does it indicate that we are unsure of its veracity.


Training to be a scientist: It’s not an indoctrination and it’s more than just reading

Many people seem to have some rather strange preconceptions about how higher education in science works. I often encounter people who insist that graduate school “indoctrinates” students, robs them of creativity, and “brainwashes” them to “blindly accept scientific dogma.” Further, others are under the delusion that training in the sciences consists of nothing but lectures and reading, and they think that just spending some time on Google is sufficient to learn everything that they would get from actually earning an advanced degree. It probably shouldn’t be surprising that the people making these claims have never received any formal training in science and are, in fact, projecting their own biases onto a system that they know nothing about. Having said that, it’s not particularly surprising that so many people are so hopelessly wrong about how graduate school works, because it is a unique system that is unlike most other forms of education. Therefore, I thought it would be a good idea to explain what advanced training in the sciences is actually like, and in so doing, I want to dispel common myths about “indoctrination” as well as the notion that reading articles on Google is equivalent to earning an advanced degree.

Note: There is a lot of variation in graduate programs, and different countries often have different systems (AU and the USA are quite different, for example). So, I will focus on the commonalities and highlight some differences where relevant.

Note: I earned a BSc and MSc at two different universities in the USA, and I am currently about three quarters of the way through a PhD in AU.

Time frame

If you plan on having a career in science, you will almost certainly need an MSc and in most cases a PhD. To earn those degrees, you will first have to earn a BSc, which will take you 3–4 years, depending on where you earn it. Typically, people then go for an MSc, which is usually another 2–3 years, followed by a PhD, which is another 4–6 years (generally speaking). There are exceptions though. For example, in the US, some schools allow you to skip the MSc and go straight for the PhD, but you generally have to have a fair amount of previous research experience, and those PhD programs generally take a bit longer than PhD programs that have an MSc as a prerequisite. Regardless of the path you take, however, you are probably going to need at least a decade of higher education (often more) before you can get a job as a researcher (also, even after earning a PhD, if you want to go into academia, you have to spend several years doing post-docs, but that is a whole other topic).

I want to pause here for a moment to consider the frequent claims on the internet that someone has done “thousands of hours of research.” For example, anti-vaccers often like to claim that they have spent thousands of hours studying vaccines and, therefore, are just as qualified as scientists and doctors. Let’s do some math on that for a minute. Graduate students generally work a minimum of 60 hours a week, and don’t take many holidays (undergrad is similarly strenuous in the sciences). Thus, completing training in the sciences will usually take a minimum of 60 hours a week, for 50 weeks a year (assuming two weeks of vacation), for 10 years (you don’t get summers off as a graduate student, and your undergraduate summers will often be spent on internships). That’s 30,000 hours of training. Further, even if we want to be absurdly, unrealistically generous and say that someone completes their degrees in eight years, working only 50 hours a week, that is still 20,000 hours! Please keep that in mind the next time that someone claims to know more than scientists just because they spent some time reading Google! To put that another way, even if grad school consisted of nothing but reading (which, as I’ll explain in a minute, it doesn’t), that would still mean that people with advanced degrees are far, far more well-read than someone who reads blogs in their spare time.
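The back-of-the-envelope arithmetic above is easy to sanity-check. As a minimal sketch (the function name is mine, and the hours-per-week, weeks-per-year, and years figures are the post’s own rough estimates, not precise data):

```python
def training_hours(hours_per_week, weeks_per_year, years):
    """Total hours of scientific training under the given assumptions."""
    return hours_per_week * weeks_per_year * years

# Typical case from the post: 60 h/week, 50 weeks/year, 10 years.
print(training_hours(60, 50, 10))  # 30000

# "Absurdly generous" case: 50 h/week, 50 weeks/year, 8 years.
print(training_hours(50, 50, 8))   # 20000
```

Even the deliberately generous scenario still lands at 20,000 hours, which is the point: changing the assumptions by a large margin doesn’t change the conclusion.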

It’s about research, not coursework

Probably the biggest misconception about advanced education in the sciences is the notion that it is mostly coursework. Indeed, I often have friends and family members ask me things like, “when do your courses start for the year?” or “do you have any big tests coming up?” But that’s not actually how grad school works. There are courses, but the emphasis is on research.

To expand on that, earning an undergraduate degree in science does, in fact, require a lot of courses. However, many of those courses involve a laboratory or field component. These courses have traditional lectures, but also give you hands-on training and experience, and that hands-on learning is vital. Reading about how to set up an experiment and actually setting up an experiment are two very different things. Further, if you plan on going to graduate school after earning your bachelor’s, you will probably (but not always) need to gain some real research experience by doing summer research internships, assisting graduate students with their research, doing independent research with a professor, etc. Regardless of how you gain that research experience, the result is the same: you get actual experience with real research and one-on-one mentoring from experienced researchers. Thus, even at the undergraduate level, you should be gaining knowledge from hands-on learning, not just lectures and reading.

At the graduate level, there is a lot of variation depending on where you go. The US, for example, generally does have some coursework requirements, but they are fairly minimal. Where I did my MSc, for example, I took about two courses a semester for my first three semesters, and didn’t take any courses my last semester. Most of my time, however, was spent doing research. That is fairly typical for US universities, and if you are doing a PhD in the US, you can expect to take several courses during your first two years or so (while simultaneously doing research), then do nothing but research for the last several years.

The Australian system differs a bit. It assumes that you have already taken enough courses, and you just jump straight into pure research. As a result, Australian degrees are typically on the short end of the spectrum. Also, although there are no required courses, you do often have to complete a certain number of “workshop” hours: for example, a week-long course on statistics, a workshop on grant-writing, etc.

Lego Grad Student is one of the most entertainingly accurate representations you’ll ever see of what it is like to be a graduate student. It is one of my few sources of joy in life, and I highly recommend that you follow him on Facebook, Twitter, etc.

So, if grad school isn’t focused on coursework, then what do grad students actually do? In short, we do research. To complete a graduate degree, you pick a topic that you want to study, design a series of experiments* to test your hypotheses about that topic, conduct those experiments, analyze the results, and write a formal report of your experiments and results (you have to submit a thesis or dissertation to your school, but if you want a job after grad school, you’ll usually want to publish each chapter of your thesis/dissertation as a paper in a peer-reviewed journal). A typical day for me, for example, consists of either spending all day in the field collecting samples, or spending all day in the lab processing those samples (extracting DNA, running PCRs, etc.), or sitting at my computer analyzing the data from those samples, or writing a paper on the results, or (usually) a hectic, scatterbrained combination of all of the above. I get up each morning, seven days a week, do research for a minimum of 10 hours, watch an episode of Star Trek, Stargate, or Doctor Who, rant on Facebook, go to bed, get up the next morning, and do it all over again. This is what grad school does: it teaches you to be a researcher by throwing you into the deep end and seeing whether you sink or swim.

Here again, I simply cannot overstate the importance of this learning-by-experience approach. Reading about how to design an experiment, how to run statistics, how to control for confounding variables, etc. simply is not the same as actually doing it, which is why I find it utterly ridiculous that people think that reading a few blogs or even books makes them qualified to tell scientists that they are wrong.

There is a lot of reading

Another gem from Lego Grad Student

Despite what I’ve said so far about grad school being more than just reading, there is still reading, a LOT of reading. However, when I say “reading,” I don’t mean reading blogs or poorly referenced books by fringe academics. Rather, I mean reading hundreds of peer-reviewed papers, as well as lengthy academic books. As a grad student, you should be reading close to a paper a day. You have to read that much if you want to actually be an expert in your field. Further, new research is coming out all the time, which means that you have to stay up to speed on the most recent publications. As a result, even after earning your degree, you still have to read constantly.

Out of curiosity, when I was writing this post, I checked the number of full papers that I had saved in my desktop reference manager (I use Mendeley, just fyi), and there are over 1,100 papers in there that I have read and taken notes on. Further, those are just the papers that I thought were worth keeping the entire text of. I probably only save about half the papers I read at most, not to mention all the academic books I’ve read.

Further, just to be clear, I am in no way unusual in this regard. I’m not bragging about how well-read I am. Rather, my point is that this is normal for training in the sciences. This is simply what it takes to be an expert in science. It requires constantly reading the original research, and it is utterly absurd to think that watching Youtube and reading blogs is going to give you knowledge that is equivalent to the type of knowledge that is gained through that level of dedication and study.

It’s a collaborative learning environment

So far, I have stressed the importance of reading massive amounts of peer-reviewed literature and the importance of actually doing research, but there is another key aspect of graduate training: learning from your advisers and peers.

Graduate school in the sciences is, in many ways, an apprenticeship. When you join a program, you join the lab of a senior academic who serves as your primary supervisor. Their job is to train you, but also to keep you on track. Good advisers, in my opinion, should give you a lot of freedom to explore ideas and go down wrong roads, but they should also rein you back in when you are too far off track. I’ve personally been really lucky with advisers, because I’ve had great ones at every level who I’ve learned a tremendous amount from, and when I say “learned from,” I don’t want you to picture a situation where I go into their offices and they lay down the law and tell me exactly what to think and do, because that is way off from how the relationship actually works. Rather, I go to them with an idea I’ve come up with or a problem that I am running into, and we sit down and debate and discuss what to do with that idea or problem. There is a wonderful back-and-forth and an exchange of ideas and knowledge that helps me not simply to solve the current problem, but to think more clearly about how to solve future problems.

Here again, this is something that you simply don’t get from reading the internet. The ability to, on a regular basis, sit down with an expert with years of experience and debate and discuss ideas is invaluable. That type of exchange teaches you to think critically, and critical thinking is a skill that has to be honed and practiced.

Further, as a graduate student, you shouldn’t just be learning from your primary supervisor. You should also have secondary supervisors, collaborators, etc., all of whom have different knowledge bases and skill sets that you can learn from. Additionally, fellow graduate students are great for bouncing ideas off of and getting help from. Even within a lab group, everyone has their own areas of expertise, and in a good group, that expertise gets shared. At some point, on nearly any given day, I end up going to another member of my lab to run something past them, see if they have any experience with a particular analysis method, etc., and, conversely, other students come to see me when their research intersects with something that I’m knowledgeable about. This constant exchange of ideas and information is incredible; it’s one of my favorite things about academia. I constantly get to benefit from the hard-earned knowledge of those around me. Further, even when I am the one sharing knowledge, I find that I benefit from the exchange. At some point, you’ve probably heard the sentiment that if you want to see whether you really understand something, you should try explaining it to someone else. I think that there is a lot of truth to that, and on many occasions the process of explaining something to someone has made me think of something I hadn’t considered before or realize that there were gaps in my knowledge that needed to be filled.

I know that I sound like a broken record by now, but once again, this type of knowledge-sharing environment is something that is hard to come by on the internet. There are plenty of echo chambers to convince you that you’re correct about what you already think is true, but collaborative learning environments like what you experience in grad school are few and far between.

It’s not an indoctrination

At this point, I want to directly dispel several myths. The first is the bizarre notion that students are indoctrinated to blindly follow the accepted wisdom of their fields, rather than thinking for themselves. This is exactly the opposite of reality. Professors constantly encourage students to think outside of the box, ask questions, and challenge accepted notions. For example, most graduate programs involve some form of seminar, journal club, etc. where a professor (or sometimes a student) picks a paper for the group to read, then everyone sits around debating the paper, trying to shoot holes in it, bringing other papers into the discussion, etc. It is an exercise that is specifically designed to make students think critically and question papers rather than blindly accept them. Indeed, at all three of the universities I’ve attended, I’ve had professors explicitly instruct students to read papers critically, rather than assuming that their results are correct. One seminar particularly sticks out in my mind: the professor was really excited by the paper he wanted us to read, but by the end of the seminar the entire group, professor included, agreed that there were serious problems with the paper. Rather than simply giving us a paper and saying, “here it is, believe it,” he took us through the process of assessing the paper and thinking critically about it.

Further, the system is designed not simply to teach you to be critical, but also to teach you how to be critical. By this process of discussing and debating papers with your peers, you learn how to assess papers, how to look for weak points in their methodologies, how to place them into the broader context of a whole body of literature, etc. This is something else that you simply do not get from merely reading.

To put this another way, science is all about asking questions. That’s fundamentally what science is: a systematic process for asking and answering questions. Therefore, it really shouldn’t be surprising that graduate programs place an emphasis on teaching students to ask questions and think critically about those questions. If you’re not willing to ask questions, then you have no business being in science.

Scientists are creative

There is a common stereotype that scientists aren’t creative, and I often hear people claim that grad school squashes creativity. That is, of course, utter nonsense, and, honestly, it’s a pretty insulting assertion. Scientists are extremely creative; we have to be. Things go wrong constantly in science. Experiments never go the way that you think they will, and you have to come up with solutions to those problems, which generally requires creativity. I once heard someone describe lab work as, “a never-ending series of experiments to figure out why the last experiment failed.” That’s a pretty fair assessment, and if you can’t be creative and think outside of the box, you’re never going to find the solution to the problem you are trying to solve.

Let me just give a few examples of the types of things my peers come up with. One of my friends needed a way to survey arboreal lizards that lived under the bark of trees, but he didn’t want to peel the bark, because that was bad for the trees. So, he came up with the idea of taking a foam puzzle-piece mat (like the ones you put together on the floor of a kid’s play area), wrapping it around a tree, and holding it in place with two bungee cords. It’s simple, cheap, innovative, and works great. It’s a creative solution to a problem. Another one of my friends needed to collect musk samples from snakes (most snakes excrete a musk from in or around their cloaca), and he figured out that the easiest way to do it was just to put a condom over the snakes’ tails. Again, it’s a brilliant solution, and he had to think outside of the box to come up with it. Similarly, last year, someone else in my field figured out that you could use a vibrator to tell the sex of turtles. I may not have personally done anything that interesting, but I’ve still had to design and build all sorts of crazy contraptions for field work, and I’ve spent the last few months working on a novel method for removing contamination in laboratory reagents. This type of creative problem solving is the norm for scientists. Actually read the scientific literature, and you’ll be blown away by the creative solutions that the men and women in science come up with.

Indeed, earning a PhD is largely a test of whether you can creatively problem solve, because problems are going to arise constantly, and you are going to have to come up with creative solutions for them. So this notion that scientists are rigid and can’t think outside of the box is utter nonsense. Further, it should be blatantly obvious that it is nonsense, because if it were true, science would never have progressed. How could science possibly move forward if scientists didn’t question the accepted wisdom and come up with novel, creative solutions?

Finally, it is worth explicitly stating that being scientific and being artistic are not mutually exclusive. Plenty of scientists are also brilliant musicians, painters, etc., and they create things that are aesthetically pleasing as well as things that are intellectually pleasing. For that matter, “data art” is becoming a big and wonderful thing, where scientists make graphs and figures that are beautiful to look at as well as informative.

Other tasks

This post has become longer than I intended, so I’ll be brief here, but I do want to point out that there are lots of other tasks imposed on graduate students. For example, you usually have some form of teaching duties, you’re expected to write grants to fund your research, you have to prepare presentations and give them at conferences, etc. All of this is quite stressful, and between all of these demands and the 60+ hour work-week, depression and mental illness tend to be quite common among graduate students. There are certainly things about the system that need to change to address that issue, but that is a topic for another post.

Conclusion

The point that I’m trying to drive home here is that earning an advanced degree in the sciences is far, far more than simply doing a lot of reading. A tremendous amount of reading is involved, but it’s only part of a much bigger picture. Similarly, coursework is fairly unimportant for graduate school in the sciences, and some programs have no coursework requirements at all. Instead, graduate school focuses on research. As a graduate student, you are a researcher, and you will spend your days designing and conducting experiments, analyzing results, writing papers, etc. That type of hands-on, experience-based learning simply can’t be replaced by a few hours on Google. Further, as part of your training, you get to work with and learn from experts in your field as well as your fellow students. You get to be immersed in a constant exchange of ideas and knowledge. Additionally, this process does not brainwash you or squelch your creativity; quite the opposite. Creativity, the ability to ask questions, and critical thinking are vital for being a successful scientist, and graduate schools foster those talents rather than suppressing them.

In most areas of life, people have no problems deferring to experts, but for some reason when it comes to science, people view expertise as a bad thing.

When you add all of this up, it should be blatantly obvious that reading blogs and watching YouTube videos does not put you on par with people with that level of training and experience. That doesn’t seem like something I should have to say, but apparently, I do. By way of analogy, imagine that someone spends years training to be a mechanic: they read hundreds of books and manuals on mechanics, work with experts, and spend years actually working on cars. Then, imagine that someone who has never picked up a wrench reads some blogs and maybe a book or two, then has the audacity to say not only that their knowledge is equivalent to the expert’s, but also that the expert is fundamentally wrong about many basic aspects of mechanics. It’s an insane scenario, yet it is exactly what people do all the time with science, and science is way more complicated than car mechanics.

Having said all of that, I’m not trying to discourage laypeople from studying science. By all means, study science. Learn as much as you can. Our universe is amazing, and science is a window into its beauty and majesty. So please, read, study, and learn, but when you do that, first make sure that you are using good sources, and second, have a healthy respect for the amount of work, knowledge, and expertise that it took to make those discoveries that you are reading about. If you think that you found something that scientists are wrong about, stop and think about the amount of training required to become a scientist and the amount of work that goes into research, then ask yourself how likely it really is that you, as a non-expert, found something that all of the experts missed. You should be very, very cautious before concluding that they are wrong, just as someone who has never worked on a car should be very, very cautious before deciding that virtually every professional mechanic is wrong.

*Note: I use the term “experiment” pretty broadly to simply describe a scientific test of an idea. Thus, it may not be the traditional randomized controlled design that people think of. Rather, it could be collecting tissue samples to look at population genetics, testing a hypothesis about evolution by seeing whether fossils match your predictions, etc.



If vaccines are a scam to make money, why don’t we routinely vaccinate for diseases like cholera?

Anti-vaccers insist that vaccines are simply a scam by big companies who don’t mind poisoning children in the name of profit. They insist that vaccines are dangerous/unnecessary and doctors and health agencies only “push” them because those doctors and agencies have been bought off by the money-loving companies. As “evidence” of this, they often cite the fact that the number of vaccines that a child receives has increased over time, and they claim that the increase in vaccines is just so that the companies can increase their profits (it’s actually just to protect children against more diseases). This maze of conspiracies quickly falls apart, however, when you consider the fact that there are many vaccines that are not part of the routine schedule in most developed countries. Consider vaccines like yellow fever, cholera, and anthrax, for example. If anti-vaccers’ conspiracy theories were actually correct, then why aren’t those vaccines part of the routine schedule? As I’ll explain, there is no good answer to that question, and the fact that these vaccines aren’t routine is a serious problem for anti-vaccers.

The anti-vaccine movement revolves around three fundamental premises, all of which are necessary for the anti-vaccine position. The first is simply that vaccines are unnecessary/cause more harm than good. There are many variations of this premise, and different anti-vaccers use different ones. For example, many claim that vaccines simply don’t work and improved sanitation actually caused the decline in disease rates (it didn’t). Others modify that to say that vaccines do work, but so does sanitation, therefore the vaccines are unnecessary. Still others argue that the vaccines work, but the diseases are just “harmless childhood illnesses” and the side effects of vaccines are worse than the diseases (they aren’t). Regardless of which variant is used, the core claim is always the same: vaccines aren’t necessary for reducing disease rates and ultimately do more harm than good.

The second premise is that vaccines are sources of major profit for pharmaceutical companies (they actually aren’t) and they are really just about making money. Thus, according to anti-vaccers, “Big Pharma” doesn’t mind poisoning the population with harmful/unnecessary vaccines, because they only care about their bottom line. This leads to a problem though. If vaccines are really harmful/unnecessary, then why do doctors and every major health organization in the world recommend them, why do government regulatory bodies approve them, and why do so many scientific studies say that they are safe and effective? For anti-vaccers, the answer to all of these questions comes in the form of their third premise: a global conspiracy.

According to anti-vaccers, pharmaceutical companies have bought off virtually all the world’s doctors and scientists, as well as government regulatory bodies (e.g., the FDA and CDC), and major health organizations (e.g., the WHO). This third premise lets anti-vaccers conveniently dismiss any evidence that disagrees with their position as simply being part of the conspiracy (it’s also an ad hoc fallacy).

Once these three premises are combined, we have the anti-vaccer explanation for all vaccines. Take the MMR vaccine, for example. According to anti-vaccers, it is not necessary because measles rates were declining before the vaccine and it’s just a harmless childhood disease (premise 1). They also say that the vaccine is actually harmful and causes autism (it doesn’t) and all manner of other problems (still premise 1). Nevertheless, companies push the vaccine because they profit from it (premise 2), and they have paid off doctors/scientists/regulatory bodies/health organizations to ignore its side effects and push the vaccine (premise 3).

If that sounds like a reasonable argument to you, then please explain to me why vaccines like the vaccines for anthrax, yellow fever, and cholera aren’t part of the routine schedule. Why aren’t companies profiting from pushing those vaccines? The real reason is, of course, that those diseases aren’t common enough in industrialized countries for the vaccines to be necessary. Vaccines do have side effects, but serious side effects are rare, and for most vaccines (like MMR) the risk from not vaccinating is far greater than the risk from vaccinating. For vaccines like yellow fever, however, your chances of getting yellow fever in a country like the USA are ridiculously small. Therefore, unless you are traveling to a country where that risk is higher, it is better not to vaccinate. This is basic risk assessment, and it’s why the CDC and doctors don’t recommend the yellow fever vaccine as part of the routine schedule. Further, for things like cholera, the vaccine is often only effective for a short duration, and sanitation is actually a very effective means of controlling the disease.
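To make that risk assessment concrete, here is a minimal sketch of the comparison being described. Every probability below is a made-up illustrative number, not real safety data; the point is the structure of the expected-harm comparison, not the specific values:

```python
# Toy risk assessment comparing a routine vaccine to a travel-only vaccine.
# All probabilities are illustrative assumptions, not measured values.

def expected_harm(p_infection, p_harm_if_infected, p_vaccine_side_effect):
    """Expected harm without vaccinating vs. expected harm from vaccinating.

    Simplifying assumption: the vaccine fully prevents infection, so the
    vaccinated risk is just the chance of a serious side effect.
    """
    risk_without = p_infection * p_harm_if_infected
    risk_with = p_vaccine_side_effect
    return risk_without, risk_with

# Common childhood disease (measles-like): infection is likely if unvaccinated.
no_vax, vax = expected_harm(p_infection=0.9, p_harm_if_infected=0.001,
                            p_vaccine_side_effect=0.000001)
print(f"Common disease: risk without vaccine {no_vax:.6f} vs with {vax:.6f}")

# Disease that is rare in your country (yellow-fever-like): infection is
# vanishingly unlikely unless you travel somewhere it is endemic.
no_vax, vax = expected_harm(p_infection=0.0000001, p_harm_if_infected=0.1,
                            p_vaccine_side_effect=0.000001)
print(f"Rare disease:   risk without vaccine {no_vax:.8f} vs with {vax:.8f}")
```

With these assumed numbers, the common disease favors vaccinating (the expected harm without the vaccine is hundreds of times larger), while the locally rare disease favors skipping routine vaccination, which is exactly why vaccines like yellow fever are recommended only for travelers to high-risk regions.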

Those explanations don’t help anti-vaccers one bit, however. If companies can profit by tricking parents into thinking that their child needs to be vaccinated for measles, chicken pox, rubella, etc., then why can’t they also profit by tricking parents into thinking that their child needs to be vaccinated for anthrax, yellow fever, cholera, etc.? Similarly, if companies can buy off all of the world’s scientists, doctors, regulatory agencies, etc. to push unnecessary vaccines for other diseases, then why can’t they do so for these diseases?

Do you see why this is such a monumental problem for the anti-vaccine position? Let me spell this out with bullet points to make it more clear. The fundamental and necessary anti-vaccine premises are as follows:

  • Premise 1: Vaccines are unnecessary/do more harm than good
  • Premise 2: Vaccines are all about money
  • Premise 3: Vaccine companies have bought off thousands of scientists, doctors, health agencies, and regulatory bodies to get them to push vaccines.

Now, let’s consider the simple question, “why isn’t the cholera vaccine part of the routine schedule in most first-world countries?” If you say that it is because that vaccine is unnecessary or does more harm than good, then you have just rejected premise 1 and admitted that vaccines need to be beneficial and safe before they can be marketed. Similarly, if you say that it isn’t profitable, then you have admitted that vaccines aren’t worth vast sums of money, and premise 2 is gone. Finally, if you say that regulatory bodies, doctors, etc. won’t approve/recommend the vaccine, then you have rejected premise 3 and admitted that “Big Pharma” can’t pay off all the world’s scientists, doctors, government agencies, etc. Regardless of which option you take, you have to reject a fundamental anti-vaccer premise, at which point, the entire anti-vaccine position crumbles.

Nevertheless, I anticipate anti-vaccers trying to get out of this by arguing that there is something specific about these vaccines that makes them unprofitable, impossible to market, etc., but good luck doing that without committing an ad hoc fallacy. As anti-vaccers love to point out, there are a lot of vaccines in the routine schedule, and they cover a pretty diverse range of diseases, so what common trait could vaccines like cholera, yellow fever, etc. possibly have that isn’t found in at least some vaccines in the current schedule? Let me give a few examples of the things you can’t use as a response (if you’re an anti-vaccer).

  • You can’t say that it is because the diseases are rare in countries like the US, because that is an argument that anti-vaccers use against some vaccines in the current schedule (i.e., they claim we don’t need a vaccine for things like measles because measles is rare).
  • You can’t say that it is because they require boosters, because that is an argument that anti-vaccers use against some vaccines in the current schedule.
  • You can’t say that it is because the protective effects are short lived, because that is an argument that anti-vaccers use against some vaccines in the current schedule.
  • You can’t say that it is because there are serious side effects, because that is an argument that anti-vaccers use against some vaccines in the current schedule.
  • You can’t say that it is because Big Pharma couldn’t market these particular vaccines, because anti-vaccers already argue that Big Pharma has managed to market dozens of unnecessary/dangerous vaccines, so there is no a priori reason why these vaccines should be any different.
  • You can’t say that “Big Pharma” was stopped by doctors, health agencies, etc., because anti-vaccers claim that companies have already bought off all those people.

Do you see the point? Any explanation that you offer for why these vaccines aren’t part of the routine schedule will inevitably be applicable to vaccines that are already in the schedule, thus invalidating the anti-vaccine position.

In short, there is no explanation for this that is consistent with anti-vaccers’ ideology/conspiracy theories. If vaccines actually do more harm than good, and if they are actually all about money, and if doctors, scientists, government agencies, etc. have actually all been bought off, then there is absolutely no reason why vaccines like the vaccines for anthrax, yellow fever, and cholera shouldn’t be part of the routine schedule. Therefore, the fact that they aren’t part of the routine schedule in most countries is an enormous problem for anti-vaccers. Their absence from the schedule indicates that the schedule is actually based on a scientific risk assessment of what children need in order to be protected, rather than simply being the result of pharmaceutical companies trying to fill children with as many vaccines as possible so that they can line their pockets with profits.


Parents often don’t know what is best

When dealing with anti-vaccers and other believers in woo, I often encounter indignant parents who, when faced with evidence and arguments that are contrary to their views, respond with, “well as a parent, only I know what is best for my child.” This sentiment is pervasive among anti-vaccers, but if we think about it for even a few seconds, the absurdity of it quickly becomes clear. Giving birth clearly does not magically impart you with infinite medical knowledge. Having a child is not even remotely equivalent to earning a medical degree. It’s kind of unbelievable that I even have to say that, but apparently, I do.

The problems with this claim should become obvious as soon as we start applying it to other situations. For example, purchasing a computer clearly does not endow me with instant and incomparable knowledge about anti-virus software, firewalls, etc. Similarly, no one claims to be an expert mechanic by sheer virtue of the fact that they own a car, so why would we think that simply having a child makes someone a medical expert?

I want to take that car analogy a bit further, because I think it is instructive. Imagine that I have decided that the notion that you need to do regular oil changes to protect your engine is actually just a conspiracy by car companies to make money, and, in fact, not only is it fine to never change your oil, but oil changes are actually bad for your car. Obviously, that position is absurd, but now imagine that you confronted me about it, and I responded by saying, “well as the owner of my car, only I know what is best for it.” Would you accept that response? Would it instill you with confidence that I actually know what I am talking about? I doubt it. It would be obvious to you that the fact that I own a car has no bearing on the extent of my mechanical knowledge, and many (probably most) car owners know next to nothing about mechanics. Nevertheless, that is exactly what anti-vaccine parents do. They hold a dangerous position that is discredited by a mountain of evidence, yet they feel justified in their position simply because they have a child.

Now, at this point, someone may accuse me of a straw man fallacy, and argue that giving birth doesn’t magically give you medical knowledge, but rather, parents know best because they are the ones who interact with their child on a daily basis and know the most about him/her. That argument isn’t really any better though. Watching your child on a daily basis can’t possibly give you knowledge about your child’s internal physiology, nor can it inform you about the results of carefully controlled studies. Interacting with your child can’t magically inform you that vaccines are dangerous, for example. Going back to my car example, I could say that as the owner of the car, I am the one who interacts with it on a daily basis and knows the most about it, but that clearly doesn’t make me any less wrong about the necessity of oil changes. In other words, interacting with your child doesn’t magically give you medical knowledge any more than driving my car magically gives me mechanical knowledge. To be clear, parents should report their observations to a doctor when they take the child for a medical visit, just as I should report observations about the way my car drives when I take it for a tune up, but that is a far cry from parents being in a position to reject countless medical studies simply because they have daily encounters with their progeny.

Nevertheless, a parent might try to expand on this with specific observations. For example, they might say, “well after the first shot, I could see a difference in my child, so I’ll never vaccinate again” (see note). That is, however, simply an anecdote, and it is utterly worthless for establishing causation. For one thing, personal observations are often biased, and humans are notoriously bad at deciphering trends without the aid of actual data. Further, two things often occur together just by chance. For example, in a previous post, I ran the math on autism rates and vaccination rates and showed that even though vaccines don’t cause autism, we expect there to be thousands of cases each year where, just by chance, the first signs of autism are noticed shortly after vaccination. To return to my car example again, imagine that I had an oil change once, and shortly afterwards, one of my spark plugs stopped working and had to be replaced. Could I say that since it happened right after the oil change, the oil change must have been the cause? Obviously not. Further, the fact that I am the owner of the car would still be irrelevant. I couldn’t say, “well I own the car and drive it daily, so I know what happened, and I know the oil change killed the spark plug.” That would obviously be insanity.
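The coincidence point above can be sketched with back-of-the-envelope arithmetic. All of the inputs below are round, illustrative assumptions (not the figures from the earlier post); the takeaway is that even with no causal link at all, a large birth cohort guarantees many chance pairings:

```python
# Back-of-the-envelope estimate of purely coincidental pairings between
# vaccination and the first noticed signs of autism. All inputs are
# illustrative assumptions, not measured figures.

births_per_year = 4_000_000    # assumed annual birth cohort (roughly US-sized)
autism_prevalence = 1 / 59     # assumed prevalence of autism diagnoses
onset_window_weeks = 52        # assumed window in which first signs are noticed
weeks_near_a_shot = 6          # assumed weeks that fall "shortly after" a vaccine visit

children_with_autism = births_per_year * autism_prevalence

# If onset timing is completely unrelated to vaccination, the fraction of
# onsets that land shortly after a shot is simply the fraction of the window
# occupied by "shortly after a shot" weeks.
chance_coincidences = children_with_autism * (weeks_near_a_shot / onset_window_weeks)

# Roughly 7,800 per year with these assumed inputs.
print(f"Expected chance coincidences per year: {chance_coincidences:,.0f}")
```

In other words, even under the assumption that vaccines play no causal role whatsoever, these toy numbers predict thousands of children each year whose first signs of autism happen to appear shortly after a shot, which is why such anecdotes cannot establish causation.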

Note: To clarify, I am not talking about things for which causation has already been established (e.g., an immediate allergic reaction). Rather, I am talking about all the countless things that anti-vaccers attribute to vaccines, despite a total lack of evidence to support causation, and often a substantial amount of evidence against causation. Autism is a prominent example, but I have seen parents accuse vaccines of everything that you can imagine. According to them, restlessness = vaccine injury, change in food preference = vaccine injury, change in favorite toy = vaccine injury, etc. all “supported” by the notion that as parents, they surely must know what is going on with their child. It’s also worth pointing out that for the vast majority of things that anti-vaccers accuse vaccines of, there is simply no plausible causal mechanism, and they really are no different from me accusing an oil change of killing a spark plug.

Next, someone might try to appeal to “parental instincts,” but that is really just a restatement of where we started. We are back to the notion that being a parent automatically gives you medical knowledge, even though it clearly doesn’t. As a friend of mine likes to say, parental instincts tell you that you shouldn’t let your kid play in that shady-looking guy’s van, but they can’t tell you whether or not vaccines are safe, whether or not a treatment works, etc. Only carefully controlled studies can do that.

Finally, someone will almost certainly argue that “doctors sometimes make mistakes.” This claim is, of course, true, but the fact that doctors aren’t perfect doesn’t automatically make parental instincts superior. Doctors are human, and humans make mistakes, but someone with a decade of advanced training and years of experience is far less likely to make a medical mistake than someone with no training or experience who is basing their views off gut instincts and Youtube videos (note: read this post before bringing up the claim that medical errors are the third leading cause of death).

In conclusion, I want to be clear that I’m not attacking parents or trying to “diminish” parenthood or any other such nonsense. I’m just trying to get people to have an accurate view of their own limitations. Having a child does not make you a medical expert nor does it make you the most qualified person to understand or assess your child’s health. If it did, there would be no need for doctors or science.


If anecdotes are evidence, why aren’t you drinking paint thinner?

I want to begin this post by doing something atypical for me. I want to tell you about an amazing cure-all that I was recently introduced to: turpentine (aka paint thinner). According to the vast wealth of knowledge available on the internet, most (if not all) diseases are actually caused by parasites, fungal infections (particularly Candida), and even modern medicine itself. Don’t worry, however, because all of these can be cured by drinking turpentine (or sometimes kerosene or even gasoline). Now, you may think that sounds crazy, but have no fear, because this treatment is totally natural (turpentine is made from distilled tree resin). Also, it has been used for nearly two centuries, and several brave doctors have bucked the medical establishment and are promoting it (e.g., Jennifer Daniels). You may think that is pretty flimsy evidence, but don’t worry, I also have multiple blogs, alternative health websites, and Youtube videos explaining why this is the cheap trick doctors don’t want you to know. Best of all, I have tons of anecdotes. There are countless success stories of people who tried traditional medicines to no avail, but as soon as they started drinking turpentine, their symptoms went away and they could just tell that they were healthier. Take, for example, this person who wrote the following after taking turpentine, “My energy level is so much better, lungs feel cleaner. Can’t tell me this stuff doesn’t work.” With confidence like that, how could they be wrong? Finally, you may be wondering why there aren’t a lot of scientific studies supporting turpentine as a treatment, as well as why there are lots of health recommendations against taking it. The answer is simple: big pharma only cares about profits, so they are suppressing the truth of this amazing treatment.

As most of you have hopefully guessed, the paragraph above is facetious, and I’m not going to try to induct you into a pyramid scheme, but I wanted to open this post that way to illustrate a very important point. Namely, most of the people reading this probably spotted the flaws in my arguments for turpentine. The idea that drinking paint thinner could cure all diseases is so outlandish that you probably realized that blogs, Youtube videos, and anecdotes aren’t sufficient evidence. You probably realized that the fact that something is natural or ancient doesn’t mean it’s safe or effective (appeal to nature and appeal to antiquity fallacies). You probably realized that the fact that I found a handful of doctors that support drinking turpentine doesn’t mean that it works (appeal to authority fallacy), and you probably scoffed at the notion that safety warnings on turpentine were actually part of a conspiracy by “Big Pharma.”

Nevertheless, despite all of that, a large portion of you probably use identical reasoning to support your favorite alternative remedy. Based on what I see in the comments, most of you probably have some “cure for the common cold” or other pseudoscientific practice that you cling to dearly, and if I asked you for your evidence, you would respond with the exact same type of reasoning. Most prominently, you would give me anecdotes and cite blogs and Youtube videos.

Further, on the off chance that someone reading this believes in the magic powers of turpentine, there is still almost certainly some other alternative practice that you think is nuts, even though it is supported by the exact same evidence base. For example, I used to know someone who believed in all manner of nonsense, from crystal healing to anti-vaccine conspiracy theories, but they drew the line at homeopathy. As I tried to explain to them, however, that doesn’t make sense because homeopathy has the same evidence base as things like crystal healing. In other words, when I asked them to give me evidence of crystal healing, they replied with blogs, Youtube videos, and anecdotes, yet they rejected homeopathy even though homeopathy is also “supported” by countless blogs, Youtube videos, and anecdotes. To try to make them grasp this paradox, I once asked them, “If homeopathy doesn’t work, then why do so many people claim to feel better after taking it?” They very correctly responded that those reports could be from placebo effects, total coincidences, regression to the mean [which is often lumped in with placebo effects], other medications, etc. In other words, when it wasn’t their pet belief, they had no problem seeing the flaws in the line of reasoning, but when it was their personal views at stake, suddenly cognitive biases clouded their vision and inhibited their ability to think logically.

The point that I’m trying to make here is that your reasoning has to be consistent. Either anecdotes can establish causation or they can’t. You don’t get to pick and choose when you think that they work. In other words, if an anecdote, or even a collection of anecdotes, is actually sufficient grounds for saying that cannabis cures cancer, acupuncture works, vaccines cause autism, etc. then it must also be sufficient grounds for the effectiveness of homeopathy, miracle mineral solution, bleach enemas, turpentine, kerosene, gasoline, crystal healing, bloodletting, leeches, sacrificing to the sun god, and every other form of woo that has ever been proposed, because they all have anecdotes. If anecdotes actually can establish causation, then you have to believe in all of them. They can’t only establish causation when you want them to. That’s not how evidence works.

To put that another way, if for any one of the thousands of alternative treatments that have ever existed, you are content to say, “the anecdotes could easily be from placebo effects or other factors,” then you must say that for all of the treatments. In other words, by acknowledging even once that the fact that someone took a treatment then got better is not good evidence that the treatment actually works, you have just universally acknowledged that anecdotes can’t establish causation. The logical syllogism, “someone took X, then got better, therefore X works” either works all the time or it never works. It can’t magically work when you want it to, then not work when you don’t want it to.

The same thing is true for the admissibility of blogs and Youtube videos as evidence. If you asked me for evidence that turpentine is a cure-all, and I responded with an unsubstantiated Youtube video, you would very correctly demand actual data. It is inherently obvious that any crackpot can make a Youtube video and say whatever they want in it. To be clear, there are some Youtube videos, blogs, etc. that are packed with non-cherry-picked citations to the original peer-reviewed literature, and there is nothing wrong with linking to a source like that and saying, “this video gives a good explanation and cites the relevant literature.” That is, however, almost never what I see when it comes to conspiracy theories and alternative medicine. The sources that I see people use as evidence are nearly always just someone spouting nonsense as if they were stating facts, and citations to original studies are either non-existent or horribly cherry-picked.

Finally, I want to contrast this type of inconsistency with a science-based view of reality. To put it simply, you can convince me (and scientists in general) of anything if you have sufficient evidence, and by evidence, I mean multiple independent studies that used large sample sizes, adequate controls, and rigorous analyses. If you can show me a consistent body of scientific evidence demonstrating that drinking turpentine is safe and effective, I’ll accept it. If you can show me a consistent body of evidence demonstrating that vaccines cause autism, I’ll accept it. If you can show me a consistent body of evidence demonstrating that crystal healing actually works, I’ll even accept that. Do you see the difference between that and cherry-picking when you do and do not want to accept anecdotes as evidence? Science has consistent criteria for what is and is not evidence, whereas there is no such consistency in pseudoscience.

The take home that I want you to get from this is that you need to ensure that your reasoning is consistent. A great way to do this is by trying to think of situations where you would not accept the conclusion that results from your current line of reasoning. For example, if you are using an anecdote to claim that a particular alternative treatment works, stop and try to think of situations where you would not accept anecdotes as evidence. In other words, if you can think of a situation where you wouldn’t accept that X caused Y, even though someone took X then Y happened, then you have just demonstrated that your line of reasoning is flawed, and anecdotes are not sufficient evidence of causation.

Note: To be clear, I am not arguing that the existence of anecdotes is evidence that something doesn’t work (that would be a fallacy fallacy). In other words, when I said things like, “the logical syllogism, ‘someone took X, then got better, therefore X works’ either works all the time or it never works,” it is the syllogism itself that is the problem, not its conclusion. To put that another way, there will always be anecdotes for things that actually do work. The problem is simply using those anecdotes as evidence that it works.

Note: Inevitably when I start talking about anecdotes, pedants get all bent out of shape and argue that anecdotes do have value because they indicate that something may be worth studying. I agree, and never said anything to the contrary. That argument does not, however, in any way shape or form negate my point that anecdotes are not valid evidence of causation. So please spare me your pointless pedantry.
