The existence of real conspiracies does not justify conspiracy theories

Most science deniers are conspiracy theorists. Many of them don’t like to think of themselves as conspiracy theorists and would ardently deny that they deserve that label, yet when you present them with peer-reviewed evidence for anthropogenic climate change, the safety of vaccines, the safety of GMOs, etc., they almost invariably respond by asserting that those studies aren’t valid evidence because vast corporations, governments, etc. have bought off all of the scientists, doctors, and regulatory bodies. That claim has no evidence to support it and is a textbook example of a conspiracy theory.

Because of the conspiratorial nature of science deniers, conspiracy theories are a frequent target of my blog/Facebook page, but almost any time I post about the illogical nature of conspiracy theories, I get irate responses from people who insist that conspiracy theories are not inherently illogical, based on one of two lines of reasoning. Either they point out that real conspiracies do exist and have been discovered, or they cite agencies like the FBI that investigate criminal conspiracies and incorrectly assert that I am claiming those agencies are inherently irrational (i.e., that FBI agents are conspiracy theorists). Both of these arguments are wrong and rely on infuriating semantic games rather than actual facts or logic, but I encounter them frequently enough that I want to spend a few minutes explaining the problems with them.

Both of these lines of reasoning rely on conflating real conspiracies with conspiracy theories, but that is semantic tomfoolery. The term “conspiracy theory” generally refers to self-reinforcing ideas that rely on inserting assumptions into gaps in our knowledge, have little or no actual evidence, and conveniently excuse any evidence that is presented against them. That is a very different thing from the type of investigation that an organization like the FBI does, and we don’t use the term “conspiracy theory” to refer to that type of investigation.

Let me give an example to illustrate this. On many occasions, I have shown a science denier a peer-reviewed paper that discredits their view, only to have them claim that the authors were paid off. Then, I showed them the conflicts of interest section of the paper, which clearly stated that no conflicts of interest existed. At that point, they, of course, said that the payment was secret and, therefore, not reported. When I asked them for evidence to support that claim, however, they couldn’t provide it. It was an assumption that they were making simply because the evidence did not fit their view. This is how conspiracy theories operate. Any evidence that conflicts with the theory is explained away as part of the conspiracy, and a lack of evidence to support claims is also justified as simply being part of the conspiracy (e.g., claiming that large corporations are silencing scientists and preventing them from publishing).

Now, let’s contrast that with actual investigations of actual conspiracies. Imagine, for example, that the FBI was investigating corruption, found no evidence of it existing (i.e., no conflicts of interest) but they ignored that lack of evidence and assumed that it was simply part of the conspiracy. Obviously, that would be a really bad investigation, and they could never get a conviction out of it. If they failed to find evidence to support what they thought was true, they would have to move on. They couldn’t just ignore any evidence that disagreed with them. To put that another way, agencies like the FBI rely on evidence, not conjecture when conducting their investigations, and their hypotheses change as new evidence arises.

Do you see the difference? Conspiracy theories conveniently excuse contrary evidence by writing it off as part of the conspiracy, whereas real investigations are based on the available evidence and don’t blindly ignore evidence that disagrees with them. Having said that, real conspiracies certainly do exist, and many have been uncovered, but the fact that real conspiracies exist does not mean that your conspiracy theory is logical or justified.

To give an example of why the existence of real conspiracies doesn’t justify conspiracy theories, imagine that I have a friend named Bob, and for one reason or another, I decided that Bob was a murderer. I didn’t have any real evidence, but I “just knew” that I was right. Now, imagine that you confronted me about this and demanded evidence for my claims, and I responded by saying, “lots of real murderers have been caught; it happens all the time; therefore, it is rational for me to think that Bob is a murderer.” Would my reasoning be correct? Obviously not. The fact that there are murderers does not in any way, shape, or form make it rational to think that Bob is a murderer. I need actual evidence specifically showing that Bob murdered someone. The same thing is true with conspiracies. The fact that real ones exist doesn’t mean that your theory is justified. You need actual evidence for your view to be rational.

Now, at this point, conspiracy theorists will inevitably protest and claim that they do, in fact, have evidence. However, every time that I have asked to see that evidence, I have been sorely disappointed. Inevitably, the “evidence” takes the form of blogs, YouTube videos, and baseless conjecture, often espousing ideas that have been thoroughly investigated and debunked. For example, I still encounter people who cite “Climategate” as evidence that climatologists are involved in a conspiracy, despite the fact that multiple independent and well-respected scientific bodies examined the situation and concluded that no wrongdoing or data manipulation had occurred. Of course, the conspiracy theorists inevitably respond by asserting that those scientific bodies are also part of the conspiracy, but that just illustrates my point. Conspiracy theories are irrational precisely because they twist any evidence to fit the conspiracy. Think about it: what could you possibly show a conspiracy theorist to convince them that the theory was wrong? Nothing, because no matter what evidence you show them, they will argue that the evidence is also part of the conspiracy.

My point in all of this is really simple. The term “conspiracy theory” specifically refers to imagined conspiracies that have no real evidence to support them and inherently rely on making assumptions to fill gaps in knowledge, rather than actually basing views on the available evidence. The existence of real conspiracies does not justify these conspiracy theories, nor should you play semantic games to try to equate conspiracy theorists with evidence-based investigative bodies like the FBI. So please, if you see a post about conspiracy theories, spare us all from your pedantry.

Note: I want to be clear that the use of the word “theory” in the term “conspiracy theory” is very different from its use in science (it is far closer to the colloquial sense, as in a fan “theory” about a movie). In science, a theory is an explanatory framework that has been rigorously tested and shown to have high predictive power. It is not simply a guess, nor does it indicate that we are unsure of its veracity.


Training to be a scientist: It’s not an indoctrination and it’s more than just reading

Many people seem to have some rather strange preconceptions about how higher education in science works. I often encounter people who insist that graduate school “indoctrinates” students, robs them of creativity, and “brainwashes” them to “blindly accept scientific dogma.” Further, others are under the delusion that training in the sciences consists of nothing but lectures and reading, and they think that just spending some time on Google is sufficient to learn everything that they would get from actually earning an advanced degree. It probably shouldn’t be surprising that the people making these claims have never received any formal training in science and are, in fact, projecting their own biases onto a system that they know nothing about. Having said that, it’s not particularly surprising that so many people are so hopelessly wrong about how graduate school works, because it is a system quite unlike most other forms of education. Therefore, I thought it would be a good idea to explain what advanced training in the sciences is actually like, and in so doing, I want to dispel common myths about “indoctrination” as well as the notion that reading articles on Google is equivalent to earning an advanced degree.

Note: There is a lot of variation in graduate programs, and different countries often have different systems (AU and the USA are quite different, for example). So, I will focus on the commonalities and highlight some differences where relevant.

Note: I earned a BSc and MSc at two different universities in the USA, and I am currently about three quarters of the way through a PhD in AU.

Time frame

If you plan on having a career in science, you will almost certainly need an MSc and, in most cases, a PhD. To earn those degrees, you will first have to earn a BSc, which will take you 3–4 years, depending on where you earn it. Typically, people then go for an MSc, which is usually another 2–3 years, followed by a PhD, which is another 4–6 years (generally speaking). There are exceptions, though. For example, in the US, some schools allow you to skip the MSc and go straight for the PhD, but you generally have to have a fair amount of previous research experience, and those PhD programs generally take a bit longer than PhD programs that have an MSc as a prerequisite. Regardless of the path you take, however, you are probably going to need at least a decade of higher education (often more) before you can get a job as a researcher (also, even after earning a PhD, if you want to go into academia, you have to spend several years doing post-docs, but that is a whole other topic).

I want to pause here for a moment to consider the frequent claims on the internet that someone has done “thousands of hours of research.” For example, anti-vaccers often like to claim that they have spent thousands of hours studying vaccines and, therefore, are just as qualified as scientists and doctors. Let’s do some math on that for a minute. Graduate students generally work a minimum of 60 hours a week and don’t take many holidays (undergrad is similarly strenuous in the sciences). Thus, completing training in the sciences will usually take a minimum of 60 hours a week, for 50 weeks a year (assuming two weeks of vacation), for 10 years (you don’t get summers off as a graduate student, and your undergraduate summers will often be spent on internships). That’s 30,000 hours of training. Further, even if we want to be absurdly, unrealistically generous and say that someone completes their degrees in eight years, working only 50 hours a week, that is still 20,000 hours! Please keep that in mind the next time that someone claims to know more than scientists just because they spent some time reading Google! To put that another way, even if grad school consisted of nothing but reading (which, as I’ll explain in a minute, it doesn’t), that would still mean that people with advanced degrees are far, far better-read than someone who reads blogs in their spare time.
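For anyone who wants to check that arithmetic, here is a minimal sketch. The figures are the rough estimates from the paragraph above (workload, vacation time, and years of study), not exact measurements:

```python
# Rough estimate of hours spent in formal scientific training,
# using the ballpark figures from the paragraph above.

hours_per_week = 60   # typical graduate workload
weeks_per_year = 50   # assuming two weeks of vacation
years = 10            # BSc + MSc + PhD, roughly

total_hours = hours_per_week * weeks_per_year * years
print(total_hours)  # 30000

# Even with deliberately generous assumptions
# (50 hours/week, degrees finished in 8 years):
conservative_hours = 50 * 50 * 8
print(conservative_hours)  # 20000
```

Either way you run the numbers, "thousands of hours" on Google does not come close to the time invested in formal training.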

It’s about research, not coursework

Probably the biggest misconception about advanced education in the sciences is the notion that it is mostly coursework. Indeed, I often have friends and family members ask me things like, “when do your courses start for the year?” or “do you have any big tests coming up?” But that’s not actually how grad school works. There are courses, but the emphasis is on research.

To expand on that, earning an undergraduate degree in science does, in fact, require a lot of courses. However, many of those courses involve a laboratory or field component. These courses have traditional lectures, but they also give you hands-on training and experience, and that hands-on learning is vital. Reading about how to set up an experiment and actually setting up an experiment are two very different things. Further, if you plan on going to graduate school after earning your bachelor’s, you will probably (though not always) need to gain some real research experience by doing summer research internships, assisting graduate students with their research, doing independent research with a professor, etc. Regardless of how you gain that research experience, the result is the same: you get actual experience with real research and one-on-one mentoring from experienced researchers. Thus, even at the undergraduate level, you should be gaining knowledge from hands-on learning, not just lectures and reading.

At the graduate level, there is a lot of variation depending on where you go. The US, for example, generally does have some coursework requirements, but they are fairly minimal. Where I did my MSc, for example, I took about two courses a semester for my first three semesters, and didn’t take any courses my last semester. Most of my time, however, was spent doing research. That is fairly typical for US universities, and if you are doing a PhD in the US, you can expect to take several courses during your first two years or so (while simultaneously doing research), then do nothing but research for the last several years.

The Australian system differs a bit. It assumes that you have already taken enough courses, and you jump straight into pure research. As a result, Australian degrees are typically on the short end of the spectrum. Also, although there are no required courses, you do often have to complete a certain number of “workshop” hours, for example, by taking a week-long course on statistics, a workshop on grant-writing, etc.

Lego Grad Student is one of the most entertainingly accurate representations you’ll ever see of what it is like to be a graduate student. It is one of my few sources of joy in life, and I highly recommend that you follow him on Facebook, Twitter, etc.

So, if grad school isn’t focused on coursework, then what do grad students actually do? In short, we do research. To complete a graduate degree, you pick a topic that you want to study, design a series of experiments* to test your hypotheses about that topic, conduct those experiments, analyze the results, and write a formal report of your experiments and results (you have to submit a thesis or dissertation to your school, but if you want a job after grad school, you’ll usually want to publish each chapter of your thesis/dissertation as a paper in a peer-reviewed journal). A typical day for me, for example, consists of either spending all day in the field collecting samples, or spending all day in the lab processing those samples (extracting DNA, running PCRs, etc.), or sitting at my computer analyzing the data from those samples, or writing a paper on the results, or (usually) a hectic, scatterbrained combination of all of the above. I get up each morning, seven days a week, do research for a minimum of 10 hours, watch an episode of Star Trek or Doctor Who, rant on Facebook, go to bed, get up the next morning and do it all over again. This is what grad school does: it teaches you to be a researcher by throwing you into the deep end and seeing whether you sink or swim.

Here again, I simply cannot overstate the importance of this learning-by-experience approach. Reading about how to design an experiment, how to run statistics, how to control for confounding variables, etc. simply is not the same as actually doing it, which is why I find it utterly ridiculous that people think that reading a few blogs or even books makes them qualified to tell scientists that they are wrong.

There is a lot of reading

Another gem from Lego Grad Student

Despite what I’ve said so far about grad school being more than just reading, there is still reading, a LOT of reading. However, when I say “reading,” I don’t mean reading blogs or poorly-referenced books by fringe academics. Rather, I mean reading hundreds of peer-reviewed papers, as well as lengthy academic books. As a grad student, you should be reading close to a paper a day. You have to read that much if you want to actually be an expert in your field. Further, new research is coming out all the time, which means that you have to stay up to speed on the most recent publications. As a result, even after earning your degree, you still have to read constantly.

Out of curiosity, when I was writing this post, I checked the number of full papers that I had saved in my desktop reference manager (I use Mendeley, just fyi), and there are over 1,100 papers in there that I have read and taken notes on. Further, those are just the papers that I thought were worth keeping the entire text of. I probably only save about half the papers I read at most, not to mention all the academic books I’ve read.

Further, just to be clear, I am in no way unusual in this regard. I’m not bragging about how well-read I am. Rather, my point is that this is normal for training in the sciences. This is simply what it takes to be an expert in science. It requires constantly reading the original research, and it is utterly absurd to think that watching YouTube and reading blogs is going to give you knowledge that is equivalent to the type of knowledge that is gained through that level of dedication and study.

It’s a collaborative learning environment

So far, I have stressed the importance of reading massive amounts of peer-reviewed literature and the importance of actually doing research, but there is another key aspect of graduate training: learning from your advisers and peers.

Graduate school in the sciences is, in many ways, an apprenticeship. When you join a program, you join the lab of a senior academic who serves as your primary supervisor. Their job is to train you, but also to keep you on track. Good advisers, in my opinion, should give you a lot of freedom to explore ideas and go down wrong roads, but they should also rein you back in when you are too far off track. I’ve personally been really lucky with advisers, because I’ve had great ones at every level who I’ve learned a tremendous amount from, and when I say “learned from,” I don’t want you to picture a situation where I go into their offices and they lay down the law and tell me exactly what to think and do, because that is way off from how the relationship actually works. Rather, I go to them with an idea I’ve come up with or a problem that I am running into, and we sit down and debate and discuss what to do with that idea or problem. There is a wonderful back-and-forth and an exchange of ideas and knowledge that helps me to not simply solve the current problem, but to think more clearly about how to solve future problems.

Here again, this is something that you simply don’t get from reading the internet. The ability to, on a regular basis, sit down with an expert with years of experience and debate and discuss ideas is invaluable. That type of exchange teaches you to think critically, and critical thinking is a skill that has to be honed and practiced.

Further, as a graduate student, you shouldn’t just be learning from your primary supervisor. You should also have secondary supervisors, collaborators, etc., all of whom have different knowledge bases and skill sets that you can learn from. Additionally, fellow graduate students are great for bouncing ideas off of and getting help from. Even within a lab group, everyone has their own areas of expertise, and in a good group, that expertise gets shared. At some point, on nearly any given day, I end up going to another member of my lab to run something past them, see if they have any experience with a particular analysis method, etc., and, conversely, other students come to see me when their research intersects with something that I’m knowledgeable about. This constant exchange of ideas and information is incredible; it’s one of my favorite things about academia. I constantly get to benefit from the hard-earned knowledge of those around me. Further, even when I am the one sharing knowledge, I find that I benefit from the exchange. At some point, you’ve probably heard the sentiment that if you want to see if you really understand something, you should try explaining it to someone else. I think that there is a lot of truth to that, and on many occasions the process of explaining something to someone has made me think of something I hadn’t considered before or realize that there were gaps in my knowledge that needed to be filled.

I know that I sound like a broken record by now, but once again, this type of knowledge-sharing environment is something that is hard to come by on the internet. There are plenty of echo chambers to convince you that you’re correct about what you already think is true, but collaborative learning environments like what you experience in grad school are few and far between.

It’s not an indoctrination

At this point, I want to directly dispel several myths. The first is the bizarre notion that students are indoctrinated to blindly follow the accepted wisdom of their fields, rather than thinking for themselves. This is exactly the opposite of reality. Professors constantly encourage students to think outside of the box, ask questions, and challenge accepted notions. For example, most graduate programs involve some form of seminar, journal club, etc. where a professor (or sometimes a student) picks a paper for the group to read, then everyone sits around debating the paper, trying to shoot holes in it, bringing other papers into the discussion, etc. It is an exercise that is specifically designed to make students think critically and question papers rather than blindly accept them. Indeed, at all three of the universities I’ve attended, I’ve had professors explicitly instruct students to read papers critically, rather than assuming that their results are correct. One seminar particularly sticks out in my mind: the professor was really excited by the paper he wanted us to read, but by the end of the seminar the entire group, professor included, agreed that there were serious problems with the paper. Rather than simply giving us a paper and saying, “here it is, believe it,” he took us through the process of assessing the paper and thinking critically about it.

Further, the system is designed not simply to teach you to be critical, but also to teach you how to be critical. By this process of discussing and debating papers with your peers, you learn how to assess papers, how to look for weak points in their methodologies, how to place them into the broader context of a whole body of literature, etc. This is something else that you simply do not get from merely reading.

To put this another way, science is all about asking questions. That’s fundamentally what science is: a systematic process for asking and answering questions. Therefore, it really shouldn’t be surprising that graduate programs place an emphasis on teaching students to ask questions and think critically about those questions. If you’re not willing to ask questions then you have no business being in science.

Scientists are creative

There is a common stereotype that scientists aren’t creative, and I often hear people claim that grad school squashes creativity. That is, of course, utter nonsense, and, honestly, it’s a pretty insulting assertion. Scientists are extremely creative; we have to be. Things go wrong constantly in science. Experiments never go the way that you think they will, and you have to come up with solutions to those problems, which generally requires creativity. I once heard someone describe lab work as, “a never-ending series of experiments to figure out why the last experiment failed.” That’s a pretty fair assessment, and if you can’t be creative and think outside of the box, you’re never going to find the solution to the problem you are trying to solve.

Let me just give a few examples of the types of things my peers come up with. One of my friends needed a way to survey arboreal lizards that lived under the bark of trees, but he didn’t want to peel the bark, because that was bad for the trees. So, he came up with the idea of taking a foam puzzle-piece mat (like the ones you put together on the floor of a kid’s play area), wrapping it around a tree, and holding it in place with two bungee cords. It’s simple, cheap, innovative, and works great. It’s a creative solution to a problem. Another one of my friends needed to collect musk samples from snakes (most snakes excrete a musk from in or around their cloaca), and he figured out that the easiest way to do it was just to put a condom over the snakes’ tails. Again, it’s a brilliant solution, and he had to think outside of the box to come up with it. Similarly, last year, someone else in my field figured out that you could use a vibrator to tell the sex of turtles. I may not have personally done anything that interesting, but I’ve still had to design and build all sorts of crazy contraptions for field work, and I’ve spent the last few months working on a novel method for removing contamination in laboratory reagents. This type of creative problem solving is the norm for scientists. Actually read the scientific literature, and you’ll be blown away by the creative solutions that the men and women in science come up with.

Indeed, earning a PhD is largely a test of whether you can creatively problem solve, because problems are going to arise constantly, and you are going to have to come up with creative solutions for them. So this notion that scientists are rigid and can’t think outside of the box is utter nonsense. Further, it should be blatantly obvious that it is nonsense, because if it were true, science would never have progressed. How could science possibly move forward if scientists didn’t question the accepted wisdom and come up with novel, creative solutions?

Finally, it is worth explicitly stating that being scientific and being artistic are not mutually exclusive. Plenty of scientists are also brilliant musicians, painters, etc., and they create things that are aesthetically pleasing as well as things that are intellectually pleasing. For that matter, “data art” is becoming a big and wonderful thing, where scientists make graphs and figures that are beautiful to look at as well as informative.

Other tasks

This post has become longer than I intended, so I’ll be brief here, but I do want to point out that there are lots of other tasks imposed on graduate students. For example, you usually have some form of teaching duties, you’re expected to write grants to fund your research, you have to prepare presentations and give them at conferences, etc. All of this is quite stressful, and between all of these demands and the 60+ hour work-week, depression and mental illness tend to be quite common among graduate students. There are certainly things about the system that need to change to address that issue, but that is a topic for another post.

Conclusion

The point that I’m trying to drive home here is that earning an advanced degree in the sciences is far, far more than simply doing a lot of reading. A tremendous amount of reading is involved, but it’s only part of a much bigger picture. Similarly, coursework is fairly unimportant for graduate school in the sciences, and some programs have no course work requirements at all. Instead, graduate school focuses on research. As a graduate student, you are a researcher and you will spend your days designing and conducting experiments, analyzing results, writing papers, etc. That type of hands-on, experience-based learning simply can’t be replaced by a few hours on Google. Further, as part of your training, you get to work with and learn from experts in your field as well as your fellow students. You get to be immersed in a constant exchange of ideas and knowledge. Additionally, this process does not brainwash you or squelch your creativity, quite the opposite. Creativity, the ability to ask questions, and critical thinking are vital for being a successful scientist, and graduate schools foster those talents rather than suppressing them.

In most areas of life, people have no problems deferring to experts, but for some reason when it comes to science, people view expertise as a bad thing.

When you add all of this up, it should be blatantly obvious that reading blogs and watching YouTube videos does not put you on par with people with that level of training and experience. That doesn’t seem like something I should have to say, but apparently, I do. By way of analogy, imagine that someone spends years being trained on how to be a mechanic: they read hundreds of books and manuals on mechanics, work with experts, and spend years actually working on cars. Then, imagine that someone who has never picked up a wrench reads some blogs and maybe a book or two, then has the audacity to not only say that their knowledge is equivalent to the expert’s, but also that the expert is actually fundamentally wrong about many basic aspects of mechanics. It’s an insane scenario, yet it is exactly what people do all the time with science, and science is way more complicated than car mechanics.

Having said all of that, I’m not trying to discourage laypeople from studying science. By all means, study science. Learn as much as you can. Our universe is amazing, and science is a window into its beauty and majesty. So please, read, study, and learn, but when you do that, first make sure that you are using good sources, and second, have a healthy respect for the amount of work, knowledge, and expertise that it took to make those discoveries that you are reading about. If you think that you found something that scientists are wrong about, stop and think about the amount of training required to become a scientist and the amount of work that goes into research, then ask yourself how likely it really is that you, as a non-expert, found something that all of the experts missed. You should be very, very cautious before concluding that they are wrong, just as someone who has never worked on a car should be very, very cautious before deciding that virtually every professional mechanic is wrong.

*Note: I use the term “experiment” pretty broadly to simply describe a scientific test of an idea. Thus, it may not be the traditional randomized controlled design that people think of. Rather, it could be collecting tissue samples to look at population genetics, testing a hypothesis about evolution by seeing whether fossils match your predictions, etc.



If vaccines are a scam to make money, why don’t we routinely vaccinate for diseases like cholera?

Anti-vaccers insist that vaccines are simply a scam by big companies who don’t mind poisoning children in the name of profit. They insist that vaccines are dangerous/unnecessary and that doctors and health agencies only “push” them because those doctors and agencies have been bought off by the money-loving companies. As “evidence” of this, they often cite the fact that the number of vaccines that a child receives has increased over time, and they claim that the increase in vaccines is just so that the companies can increase their profits (it’s actually just to protect children against more diseases). This maze of conspiracies quickly falls apart, however, when you consider the fact that there are many vaccines that are not part of the routine schedule in most developed countries. Consider vaccines like yellow fever, cholera, and anthrax, for example. If anti-vaccers’ conspiracy theories were actually correct, then why aren’t those vaccines part of the routine schedule? As I’ll explain, there is no good answer to that question, and the fact that these vaccines aren’t routine is a serious problem for anti-vaccers.

The anti-vaccine movement revolves around three fundamental premises, all of which are necessary for the anti-vaccine position. The first is simply that vaccines are unnecessary/cause more harm than good. There are many variations of this premise, and different anti-vaccers use different ones. For example, many claim that vaccines simply don’t work and improved sanitation actually caused the decline in disease rates (it didn’t). Others modify that to say that vaccines do work, but so does sanitation, therefore the vaccines are unnecessary. Still others argue that the vaccines work, but the diseases are just “harmless childhood illnesses” and the side effects of vaccines are worse than the diseases (they aren’t). Regardless of which variant is used, the core claim is always the same: vaccines aren’t necessary for reducing disease rates and ultimately do more harm than good.

The second premise is that vaccines are sources of major profit for pharmaceutical companies (they actually aren’t) and they are really just about making money. Thus, according to anti-vaccers, “Big Pharma” doesn’t mind poisoning the population with harmful/unnecessary vaccines, because they only care about their bottom line. This leads to a problem though. If vaccines are really harmful/unnecessary, then why do doctors and every major health organization in the world recommend them, why do government regulatory bodies approve them, and why do so many scientific studies say that they are safe and effective? For anti-vaccers, the answer to all of these questions comes in the form of their third premise: a global conspiracy.

According to anti-vaccers, pharmaceutical companies have bought off virtually all the world’s doctors and scientists, as well as government regulatory bodies (e.g., the FDA and CDC), and major health organizations (e.g., the WHO). This third premise lets anti-vaccers conveniently dismiss any evidence that disagrees with their position as simply being part of the conspiracy (it’s also an ad hoc fallacy).

Once these three premises are combined, we have the anti-vaccer explanation for all vaccines. Take the MMR vaccine, for example. According to anti-vaccers, it is not necessary because measles rates were declining before the vaccine and it’s just a harmless childhood disease (premise 1). They also say that the vaccine is actually harmful and causes autism (it doesn’t) and all manner of other problems (still premise 1). Nevertheless, companies push the vaccine because they profit from it (premise 2), and they have paid off doctors/scientists/regulatory bodies/health organizations to ignore its side effects and push the vaccine (premise 3).

If that sounds like a reasonable argument to you, then please explain to me why the vaccines for anthrax, yellow fever, and cholera aren’t part of the routine schedule. Why aren’t companies profiting from pushing those vaccines? The real reason is, of course, that those diseases aren’t common enough in industrialized countries for the vaccines to be necessary. Vaccines do have side effects, but serious side effects are rare, and for most vaccines (like MMR) the risk from not vaccinating is far greater than the risk of vaccinating. For a vaccine like yellow fever, however, your chances of getting yellow fever in a country like the USA are ridiculously small. Therefore, unless you are traveling to a country where that risk is higher, it is better not to vaccinate. This is basic risk assessment, and it’s why the CDC and doctors don’t recommend the yellow fever vaccine as part of the routine schedule. Further, for a disease like cholera, the vaccine is often only effective for a short duration, and sanitation is actually a very effective means of controlling the disease.
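That risk assessment can be sketched with a toy calculation. To be clear, every probability below is a number I made up purely for illustration, not a real epidemiological figure; the point is only the logic: a vaccine makes sense for routine use when the expected harm it prevents exceeds the expected harm of its rare side effects.

```python
# Toy expected-harm comparison. All probabilities are made up for
# illustration; they are NOT real epidemiological figures.

def expected_harm(p_disease, p_serious_if_infected, p_serious_side_effect):
    """Return expected serious outcomes per person (unvaccinated, vaccinated).

    Simplifying assumptions: the vaccine is fully protective, and its only
    cost is a rare serious side effect.
    """
    unvaccinated = p_disease * p_serious_if_infected
    vaccinated = p_serious_side_effect
    return unvaccinated, vaccinated

# A common, measles-like disease: the risk of not vaccinating dwarfs the
# risk of vaccinating, so routine vaccination wins.
common = expected_harm(0.9, 0.001, 0.000001)

# A yellow-fever-like disease in a country where it is essentially absent:
# even a tiny side-effect risk outweighs the near-zero disease risk.
absent = expected_harm(0.0000001, 0.1, 0.000001)

print(common, absent)
```

With hypothetical numbers like these, the first vaccine belongs on a routine schedule and the second doesn’t, which is the same kind of weighing (done with real data) that health agencies perform for each vaccine.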

Those explanations don’t help anti-vaccers one bit, however. If companies can profit by tricking parents into thinking that their child needs to be vaccinated for measles, chicken pox, rubella, etc., then why can’t they also profit by tricking parents into thinking that their child needs to be vaccinated for anthrax, yellow fever, cholera, etc.? Similarly, if companies can buy off all of the world’s scientists, doctors, regulatory agencies, etc. to push unnecessary vaccines for other diseases, then why can’t they do so for these diseases?

Do you see why this is such a monumental problem for the anti-vaccine position? Let me spell this out with bullet points to make it clearer. The fundamental and necessary anti-vaccine premises are as follows:

  • Premise 1: Vaccines are unnecessary/do more harm than good
  • Premise 2: Vaccines are all about money
  • Premise 3: Vaccine companies have bought off thousands of scientists, doctors, health agencies, and regulatory bodies to get them to push vaccines.

Now, let’s consider the simple question, “why isn’t the cholera vaccine part of the routine schedule in most first-world countries?” If you say that it is because that vaccine is unnecessary or does more harm than good, then you have just rejected premise 1 and admitted that vaccines need to be beneficial and safe before they can be marketed. Similarly, if you say that it isn’t profitable, then you have admitted that vaccines aren’t worth vast sums of money, and premise 2 is gone. Finally, if you say that regulatory bodies, doctors, etc. won’t approve/recommend the vaccine, then you have rejected premise 3 and admitted that “Big Pharma” can’t pay off all the world’s scientists, doctors, government agencies, etc. Regardless of which option you take, you have to reject a fundamental anti-vaccer premise, at which point, the entire anti-vaccine position crumbles.

Nevertheless, I anticipate anti-vaccers trying to get out of this by arguing that there is something specific about these vaccines that makes them unprofitable, impossible to market, etc., but good luck doing that without committing an ad hoc fallacy. As anti-vaccers love to point out, there are a lot of vaccines in the routine schedule, and they cover a pretty diverse range of diseases, so what common trait could vaccines like cholera, yellow fever, etc. possibly have that isn’t found in at least some vaccines in the current schedule? Let me give a few examples of the things you can’t use as a response (if you’re an anti-vaccer). You can’t say that it is because the diseases are rare in countries like the US, because that is an argument that anti-vaccers use against some vaccines in the current schedule. You can’t say that it is because they require boosters, because that is an argument that anti-vaccers use against some vaccines in the current schedule. You can’t say that the protective effects are short-lived, because that is an argument that anti-vaccers use against some vaccines in the current schedule. You can’t say that it is because there are serious side effects, because that is an argument that anti-vaccers use against some vaccines in the current schedule. You can’t say that it is because Big Pharma couldn’t market these particular vaccines, because anti-vaccers already argue that Big Pharma has managed to market dozens of unnecessary/dangerous vaccines, so there is no a priori reason why these vaccines should be any different. You can’t say that “Big Pharma” was stopped by doctors, health agencies, etc., because anti-vaccers claim that companies have already bought off all those people. Do you see the point? Any explanation that you offer for why these vaccines aren’t part of the routine schedule will inevitably be applicable to vaccines that are already in the schedule, thus invalidating the anti-vaccine position.

In short, there is no explanation for this that is consistent with anti-vaccers’ ideology/conspiracy theories. If vaccines actually do more harm than good, and if they are actually all about money, and if doctors, scientists, government agencies, etc. have actually all been bought off, then there is absolutely no reason why vaccines like those for anthrax, yellow fever, and cholera shouldn’t be part of the routine schedule. Therefore, the fact that they aren’t part of the routine schedule in most countries is an enormous problem for anti-vaccers. Their absence from the schedule indicates that the schedule is actually based on a scientific risk assessment of what children need in order to be protected, rather than simply being the result of pharmaceutical companies trying to fill children with as many vaccines as possible so that they can line their pockets with profits.


Parents often don’t know what is best

When dealing with anti-vaccers and other believers in woo, I often encounter indignant parents who, when faced with evidence and arguments that are contrary to their views, respond with, “well as a parent, only I know what is best for my child.” This sentiment is pervasive among anti-vaccers, but if we think about it for even a few seconds, the absurdity of it quickly becomes clear. Giving birth clearly does not magically impart you with infinite medical knowledge. Having a child is not even remotely equivalent to earning a medical degree. It’s kind of unbelievable that I even have to say that, but apparently, I do.

The problems with this claim should become obvious as soon as we start applying it to other situations. For example, purchasing a computer clearly does not endow me with instant and incomparable knowledge about anti-virus software, firewalls, etc. Similarly, no one claims to be an expert mechanic by sheer virtue of the fact that they own a car, so why would we think that simply having a child makes someone a medical expert?

I want to take that car analogy a bit further, because I think it is instructive. Imagine that I have decided that the notion that you need to do regular oil changes to protect your engine is actually just a conspiracy by car companies to make money, and, in fact, not only is it fine to never change your oil, but oil changes are actually bad for your car. Obviously, that position is absurd, but now imagine that you confronted me about it, and I responded by saying, “well as the owner of my car, only I know what is best for it.” Would you accept that response? Would it instill you with confidence that I actually know what I am talking about? I doubt it. It would be obvious to you that the fact that I own a car has no bearing on the extent of my mechanical knowledge, and plenty of car owners (probably most) know next to nothing about mechanics. Nevertheless, that is exactly what anti-vaccine parents do. They hold a dangerous position that is discredited by a mountain of evidence, yet they feel justified in their position simply because they have a child.

Now, at this point, someone may accuse me of a straw man fallacy, and argue that giving birth doesn’t magically give you medical knowledge, but rather, parents know best because they are the ones who interact with their child on a daily basis and know the most about him/her. That argument isn’t really any better though. Watching your child on a daily basis can’t possibly give you knowledge about your child’s internal physiology, nor can it inform you about the results of carefully controlled studies. Interacting with your child can’t magically inform you that vaccines are dangerous, for example. Going back to my car example, I could say that as the owner of the car, I am the one who interacts with it on a daily basis and knows the most about it, but that clearly doesn’t make me any less wrong about the necessity of oil changes. In other words, interacting with your child doesn’t magically give you medical knowledge any more than driving my car magically gives me mechanical knowledge. To be clear, parents should report their observations to a doctor when they take the child for a medical visit, just as I should report observations about the way my car drives when I take it for a tune up, but that is a far cry from parents being in a position to reject countless medical studies simply because they have daily encounters with their progeny.

Nevertheless, a parent might try to expand on this with specific observations. For example, they might say, “well after the first shot, I could see a difference in my child, so I’ll never vaccinate again” (see note). That is, however, simply an anecdote, and it is utterly worthless for establishing causation. For one thing, personal observations are often biased, and humans are notoriously bad at deciphering trends without the aid of actual data. Further, two things often occur together just by chance. For example, in a previous post, I ran the math on autism rates and vaccination rates and showed that even though vaccines don’t cause autism, we expect there to be thousands of cases each year where, just by chance, the first signs of autism are noticed shortly after vaccination. To return to my car example again, imagine that I had an oil change once, and shortly afterwards, one of my spark plugs stopped working and had to be replaced. Could I say that since it happened right after the oil change, the oil change must have been the cause? Obviously not. Further, the fact that I am the owner of the car would still be irrelevant. I couldn’t say, “well I own the car and drive it daily, so I know what happened, and I know the oil change killed the spark plug.” That would obviously be insanity.

Note: To clarify, I am not talking about things for which causation has already been established (e.g., an immediate allergic reaction). Rather, I am talking about all the countless things that anti-vaccers attribute to vaccines, despite a total lack of evidence to support causation, and often a substantial amount of evidence against causation. Autism is a prominent example, but I have seen parents accuse vaccines of everything that you can imagine. According to them, restlessness = vaccine injury, change in food preference = vaccine injury, change in favorite toy = vaccine injury, etc., all “supported” by the notion that as parents, they surely must know what is going on with their child. It’s also worth pointing out that for the vast majority of things that anti-vaccers accuse vaccines of, there is simply no plausible causal mechanism, and they really are no different from me accusing an oil change of killing a spark plug.
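The kind of back-of-the-envelope math mentioned above can be sketched as follows. Every figure in this snippet (birth cohort size, prevalence, number of vaccine visits, time windows) is a rough illustrative assumption of mine, not a number taken from the post, but the punchline is the same: chance alone should produce thousands of “the signs appeared right after a shot” stories every year.

```python
# Back-of-the-envelope estimate of chance "vaccine, then autism signs"
# coincidences. Every number here is a rough illustrative assumption.

births_per_year = 4_000_000   # approximate size of a US birth cohort
autism_prevalence = 1 / 59    # roughly the prevalence the CDC has reported
vaccine_visits = 10           # shot-containing visits during the age range
                              # when autism signs typically first appear
window_days = 14              # "shortly after a shot" = within two weeks
period_days = 730             # first signs noticed sometime in ~2 years

autistic_children = births_per_year * autism_prevalence

# Probability that a randomly timed first observation of symptoms happens
# to fall within `window_days` after one of the visits (ignoring overlap).
p_coincidence = min(1.0, vaccine_visits * window_days / period_days)

expected_coincidences = autistic_children * p_coincidence
print(round(expected_coincidences))  # roughly 13,000 with these assumptions
```

Even if you quibble with any individual assumption, the product stays in the thousands, which is why “it happened right after the shot” anecdotes are exactly what we would expect in a world where vaccines play no causal role at all.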

Next, someone might try to appeal to “parental instincts,” but that is really just a restatement of where we started. We are back to the notion that being a parent automatically gives you medical knowledge, even though it clearly doesn’t. As a friend of mine likes to say, parental instincts tell you that you shouldn’t let your kid play in that shady-looking guy’s van, but they can’t tell you whether or not vaccines are safe, whether or not a treatment works, etc. Only carefully controlled studies can do that.

Finally, someone will almost certainly argue that “doctors sometimes make mistakes.” This claim is, of course, true, but the fact that doctors aren’t perfect doesn’t automatically make parental instincts superior. Doctors are human, and humans make mistakes, but someone with a decade of advanced training and years of experience is far less likely to make a medical mistake than someone with no training or experience who is basing their views off gut instincts and Youtube videos (note: read this post before bringing up the claim that medical errors are the third leading cause of death).

In conclusion, I want to be clear that I’m not attacking parents or trying to “diminish” parenthood or any other such nonsense. I’m just trying to get people to have an accurate view of their own limitations. Having a child does not make you a medical expert nor does it make you the most qualified person to understand or assess your child’s health. If it did, there would be no need for doctors or science.


If anecdotes are evidence, why aren’t you drinking paint thinner?

I want to begin this post by doing something atypical for me. I want to tell you about an amazing cure-all that I was recently introduced to: turpentine (aka paint thinner). According to the vast wealth of knowledge available on the internet, most (if not all) diseases are actually caused by parasites, fungal infections (particularly Candida), and even modern medicine itself. Don’t worry, however, because all of these can be cured by drinking turpentine (or sometimes kerosene or even gasoline). Now, you may think that sounds crazy, but have no fear, because this treatment is totally natural (turpentine is made from distilled tree resin). Also, it has been used for nearly two centuries, and several brave doctors have bucked the medical establishment and are promoting it (e.g., Jennifer Daniels). You may think that is pretty flimsy evidence, but don’t worry, I also have multiple blogs, alternative health websites, and Youtube videos explaining why this is the cheap trick doctors don’t want you to know. Best of all, I have tons of anecdotes. There are countless success stories of people who tried traditional medicines to no avail, but as soon as they started drinking turpentine, their symptoms went away and they could just tell that they were healthier. Take, for example, this person who wrote the following after taking turpentine, “My energy level is so much better, lungs feel cleaner. Can’t tell me this stuff doesn’t work.” With confidence like that, how could they be wrong? Finally, you may be wondering why there aren’t a lot of scientific studies supporting turpentine as a treatment, as well as why there are lots of health recommendations against taking it. The answer is simple: big pharma only cares about profits, so they are suppressing the truth of this amazing treatment.

As most of you have hopefully guessed, the paragraph above is facetious, and I’m not going to try to induct you into a pyramid scheme, but I wanted to open this post that way to illustrate a very important point. Namely, most of the people reading this probably spotted the flaws in my arguments for turpentine. The idea that drinking paint thinner could cure all diseases is so outlandish that you probably realized that blogs, Youtube videos, and anecdotes aren’t sufficient evidence. You probably realized that the fact that something is natural or ancient doesn’t mean it’s safe or effective (appeal to nature and appeal to antiquity fallacies). You probably realized that the fact that I found a handful of doctors that support drinking turpentine doesn’t mean that it works (appeal to authority fallacy), and you probably scoffed at the notion that safety warnings on turpentine were actually part of a conspiracy by “Big Pharma.”

Nevertheless, despite all of that, a large portion of you probably use identical reasoning to support your favorite alternative remedy. Based on what I see in the comments, most of you probably have some “cure for the common cold” or other pseudoscientific practice that you cling to dearly, and if I asked you for your evidence, you would respond with the exact same type of reasoning. Most prominently, you would give me anecdotes and cite blogs and Youtube videos.

Further, on the off chance that someone reading this believes in the magic powers of turpentine, there is still almost certainly some other alternative practice that you think is nuts, even though it is supported by the exact same evidence base. For example, I used to know someone who believed in all manner of nonsense, from crystal healing to anti-vaccine conspiracy theories, but they drew the line at homeopathy. As I tried to explain to them, however, that doesn’t make sense because homeopathy has the same evidence base as things like crystal healing. In other words, when I asked them to give me evidence of crystal healing, they replied with blogs, Youtube videos, and anecdotes, yet they rejected homeopathy even though homeopathy is also “supported” by countless blogs, Youtube videos, and anecdotes. To try to make them grasp this paradox, I once asked them, “If homeopathy doesn’t work, then why do so many people claim to feel better after taking it?” They very correctly responded that those reports could be from placebo effects, total coincidences, regression to the mean, other medications, etc. In other words, when it wasn’t their pet belief, they had no problem seeing the flaws in the line of reasoning, but when it was their personal views at stake, suddenly cognitive biases clouded their vision and inhibited their ability to think logically.

The point that I’m trying to make here is that your reasoning has to be consistent. Either anecdotes can establish causation or they can’t. You don’t get to pick and choose when you think that they work. In other words, if an anecdote, or even a collection of anecdotes, is actually sufficient grounds for saying that cannabis cures cancer, acupuncture works, vaccines cause autism, etc., then it must also be sufficient grounds for the effectiveness of homeopathy, miracle mineral solution, bleach enemas, turpentine, kerosene, gasoline, crystal healing, bloodletting, leeches, sacrificing to the sun god, and every other form of woo that has ever been proposed, because they all have anecdotes. If anecdotes actually can establish causation, then you have to believe in all of them. They can’t only establish causation when you want them to. That’s not how evidence works.

To put that another way, if for any one of the thousands of alternative treatments that have ever existed, you are content to say, “the anecdotes could easily be from placebo effects or other factors,” then you must say that for all of the treatments. In other words, by acknowledging even once that the fact that someone took a treatment then got better is not good evidence that the treatment actually works, you have just universally acknowledged that anecdotes can’t establish causation. Put simply, the logical syllogism, “someone took X, then got better, therefore X works” either works all the time or it never works. It can’t magically work when you want it to, then not work when you don’t want it to.

The same thing is true for the admissibility of blogs and Youtube videos as evidence. If you asked me for evidence that turpentine is a cure-all, and I responded with an unsubstantiated Youtube video, you would very correctly demand actual data. It is inherently obvious that any crackpot can make a Youtube video and say whatever they want in it. To be clear, there are some Youtube videos, blogs, etc. that are packed with non-cherry-picked citations to the original peer-reviewed literature, and there is nothing wrong with linking to a source like that and saying, “this video gives a good explanation and cites the relevant literature.” That is, however, almost never what I see when it comes to conspiracy theories and alternative medicine. The sources that I see people use as evidence are nearly always just someone spouting nonsense as if they were stating facts, and citations to original studies are either non-existent or horribly cherry-picked.

Finally, I want to contrast this type of inconsistency with a science-based view of reality. To put it simply, you can convince me (and scientists in general) of anything if you have sufficient evidence, and by evidence, I mean multiple independent studies that used large sample sizes, adequate controls, and rigorous analyses. If you can show me a consistent body of scientific evidence demonstrating that drinking turpentine is safe and effective, I’ll accept it. If you can show me a consistent body of evidence demonstrating that vaccines cause autism, I’ll accept it. If you can show me a consistent body of evidence demonstrating that crystal healing actually works, I’ll even accept that. Do you see the difference between that and cherry-picking when you do and do not want to accept anecdotes as evidence? Science has consistent criteria for what is and is not evidence, whereas there is no consistency in pseudoscience.

The take home that I want you to get from this is that you need to ensure that your reasoning is consistent. A great way to do this is by trying to think of situations where you would not accept the conclusion that results from your current line of reasoning. For example, if you are using an anecdote to claim that a particular alternative treatment works, stop and try to think of situations where you would not accept anecdotes as evidence. In other words, if you can think of a situation where you wouldn’t accept that X caused Y, even though someone took X then Y happened, then you have just demonstrated that your line of reasoning is flawed, and anecdotes are not sufficient evidence of causation.

Note: To be clear, I am not arguing that the existence of anecdotes is evidence that something doesn’t work (that would be a fallacy fallacy). In other words, when I said things like, “the logical syllogism, ‘someone took X, then got better, therefore X works’ either works all the time or it never works,” it is the syllogism itself that is the problem, not its conclusion. To put that another way, there will always be anecdotes for things that actually do work. The problem is simply using those anecdotes as evidence that it works.

Note: Inevitably when I start talking about anecdotes, pedants get all bent out of shape and argue that anecdotes do have value because they indicate that something may be worth studying. I agree, and never said anything to the contrary. That argument does not, however, in any way, shape, or form negate my point that anecdotes are not valid evidence of causation. So please spare me your pointless pedantry.



Facts aren’t political (or religious)

On this blog/Facebook page, I try very hard to stick to scientific facts and avoid discussing politics. Nevertheless, I am frequently accused of being political, even when I am simply reporting a fact. For example, I often post facts about climate change, such as the fact that 2014, 2015, and 2016 all set new records for the warmest year (on average), and when I do that, I nearly always receive comments accusing me of “liberal propaganda” or “pushing a liberal agenda,” but that’s not how facts work. It is demonstrably true that all of those years were the warmest on record, and politics has absolutely nothing to do with it. In other words, facts are inherently not political. They are simply statements of reality. To be clear, facts can, of course, be used to make political arguments and to try to persuade people of a particular political position, but the facts themselves are not political, and that distinction is important.

Let me give an example that I recently encountered that will hopefully demonstrate how this plays out and why it is problematic. This example is from a historical topic, not a scientific one, but I think it illustrates the situation nicely. A few days ago, I saw a clip from Adam Ruins Everything where Adam discussed the history of racism in the US housing markets as well as the lingering effects of that racism. At no point in the video did he make any political statements or arguments. He simply explained the facts (the things that are demonstrably true), and he cited his sources. In other words, he simply made claims like, “bill X was passed which did Y,” and at no point did he give a call to action, advocate for a piece of legislation, etc. He even went out of his way in the video to say that he is simply reporting the history, not trying to guilt white home owners.

As you have probably guessed, however, many of the comments on this video were simply amazing. In the various places that I have seen this video show up, I have seen tons of comments complaining about “liberal propaganda,” “white guilt,” etc. For example, one commenter wrote, “This is just anti-white propaganda.” To be clear, these comments weren’t citing sources showing that the facts in the video were wrong, rather they were simply accusing the video of being political nonsense (in this case, liberal nonsense).

This general reaction can be summed up best with the comment below.

[screenshot of a reader’s comment, not reproduced here]

Again, that’s not how facts work. Simply stating historical facts is not the same thing as making a political argument. Arguing for an action based on those facts would be political, but simply teaching people the facts is not political. Nevertheless, people often respond to facts as if they are political, and the great irony is that this reaction often occurs precisely because of political biases. In other words, someone sees a fact that causes some problem for their political ideology, so instead of dealing with the fact, they accuse the fact itself of being politically motivated. This is extremely problematic for obvious reasons. We can’t hope to have a rational discussion about a topic (whether it be scientific, historical, political, etc.) if people don’t accept the basic facts. Any position that ignores facts is doomed to fail.

Note: To be clear, I am not making any general statements about the accuracy of Adam Ruins Everything, but in this particular case, he was simply reporting historical facts.

This situation plays itself out all the time on scientific topics, but climate change is probably where it occurs the most frequently, and I don’t fully understand why, because if I make a statement like, “the earth is a spheroid,” no one accuses me of making a political claim. Everyone realizes that I am simply reporting a fact; however, when I state something like, “we have greatly increased the CO2 in the atmosphere, and that CO2 is causing the planet to warm,” suddenly people accuse me of pushing a liberal agenda and my inbox is flooded with comments to the effect of, “I thought this page was about science but it is just liberal nonsense.” Do you see the problem there? It is a fact that our CO2 is causing the climate to change, and I always back that fact up with my sources (detailed in previous posts such as this one and this one), but people mistake that fact for a political argument. To be clear, people can certainly use that fact to make a political argument, and when someone says, “we are causing the planet to warm, therefore we should do X,” at that point they are making a political argument, but the fact itself is not political. In other words, whether or not we should do something about climate change is going to depend on moral values, views on economics, etc., but none of that is relevant to the simple fact that we are causing the climate to change.

Note: I realize that both of my examples so far have involved groups who are generally politically conservative (by the US definition), but just to be clear, liberals do this all the time as well.

Propaganda

So far, I have been focusing on the accusation that a fact is political, but this same general problem occurs in other ways as well. For example, when I state a fact about the effectiveness or safety of vaccines, I am often met with accusations of spreading “pro-vaccine propaganda,” but, as I have stated several times now, that’s not how facts work. Simply stating a fact is not propaganda. To put that another way, the definition of propaganda is not, “a fact that I don’t like.”

Here again, I want to be clear that facts can be used as propaganda if they are reported in a misleading or biased way to push some agenda. For example, anti-vaccers like to cite the fact that in some disease outbreaks, most of the people who became infected were vaccinated. That fact is technically true (in some cases), but using it as an argument against vaccination is misleading and leaves out critical information. Namely, it ignores the fact that the rates of disease are consistently higher among the unvaccinated. In cases like that, you could argue that the argument is propaganda, but even there, it is not the fact itself that is the problem. Rather, the problem is the misleading way in which it is presented. Also, generally when I see people making blind accusations that something is propaganda, the fact wasn’t being misrepresented in any way. Rather, people were making the accusation simply because they didn’t like the fact.

Religion

Finally, the same basic problem occurs for topics that have any sort of religious implications. For example, when I talk about the facts of evolution such as the existence of transitional fossils or the fact that evolution predicted genetic patterns, I’m nearly always met with creationists who accuse me of pushing an atheist agenda. You can see an example of this if you look at the meme on the right. I made the original meme (green), and a creationist group tried to “correct” it (including calling it propaganda), and I corrected it back. Here again, I simply stated a scientific fact that has been repeatedly demonstrated by fossils, genetics, biogeography, etc. It is in no way cherry-picked or misleading. To be clear, that fact certainly does present serious problems for young earth creationists (but not theistic evolutionists), but that doesn’t make it propaganda or religious. The fact itself is simply a statement of reality. To put that another way, the fact is relevant to the topic of religion, but religion is not relevant to the fact.

Conclusion

In short, facts are not political, religious, or propaganda. They can be used to make political or religious arguments, and they can even be misrepresented and used as propaganda, but the facts themselves are neutral statements of reality. So, when someone says something like, “vaccines save thousands of lives each year” they are stating a fact, not propaganda. Similarly, when they say, “numerous studies have tested the natural drivers of climate change and found that they cannot explain the current warming,” that is not a political argument, it is a simple, demonstrable fact, and politics have no bearing whatsoever on it.

It is very easy to dismiss information that you don’t like as propaganda, but doing so is intellectually dishonest and you do yourself a disservice by giving into that type of cognitive pitfall. To be clear, you should fact check and make sure that a claim is true, the fact wasn’t cherry-picked or misrepresented, etc., but don’t fall into the trap of blindly asserting that anything that disagrees with you is politically motivated or agenda driven.

Note: Although I have no interest in debating the Adam Ruins Everything video, I will note that many commenters harped on the use of the phrase “insufferably white” at the beginning, and argued that it meant that the video was propaganda. I would respond to that first by reminding everyone that the show is supposed to be comedic and that was meant to be a joke. Second, that was the only part of the video that was even remotely political. Everything else was simple statements of facts. Again, he listed his sources, so it is easy to fact check him.

Posted in Uncategorized | 17 Comments

Evolution doesn’t require all species to change all the time

we did not evolve from apes but we share a common ancestor with them

In this post, I want to deal with what is arguably one of the most common misconceptions about the theory of evolution. Namely, the notion that it requires all species and populations to constantly be undergoing radical changes. You can see this misconception play out in many creationist arguments. For example, creationists often cite living fossils (i.e., organisms that appear essentially the same today as they did in the fossil record) as evidence that evolution is wrong. Indeed, there are entire Facebook pages devoted to presenting examples of organisms that, at least superficially, don’t appear to have changed over millions of years. Similarly, this faulty line of reasoning is on full display in the well-worn creationist trope, “if we evolved from apes, why are there still apes?” The reality is that these arguments are straw men, and the theory of evolution does not require or predict that all populations of all species will constantly be undergoing massive changes. Indeed, there are many well-known reasons why some populations remain stable for long periods of time, and I want to spend this post talking about them.

Several evolutionary mechanisms

At the outset, we need to clarify our terms and specify exactly what we are talking about. Evolution itself is simply a change in the allele frequencies (i.e., genetic makeup) of a population from one generation to the next, but there are several different mechanisms that can cause that change. I previously devoted a whole series of posts to these mechanisms, so I will be brief here.

First, we have mutations. These randomly produce new genetic information for the other evolutionary mechanisms to act on. Usually they are neutral, but sometimes they are harmful (in which case selection removes them) and sometimes they are beneficial (in which case selection increases their frequency).

Next, we have genetic drift. This mechanism acts on the existing variation (mutations) in a population, but it is random (i.e., it randomly increases or decreases the frequency of a genetic trait). As a result, it can be harmful because it can remove beneficial traits. In very small populations, it can even swamp selection and cause harmful traits to rise to prominence.
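To make the interplay between drift and selection concrete, here is a minimal toy simulation of my own (not from any textbook model the post cites): a Wright-Fisher-style model in which a beneficial allele starts at 10% frequency with a hypothetical 5% fitness edge. All of the numbers (population sizes, advantage, seed) are illustrative assumptions.

```python
import random

def wright_fisher(pop_size, p0, advantage, generations, rng):
    """Wright-Fisher model with selection: each generation, pop_size
    offspring are sampled from the gene pool, with the beneficial
    allele weighted up by its fitness advantage."""
    p = p0
    for _ in range(generations):
        weight = p * (1 + advantage)
        prob = weight / (weight + (1 - p))  # chance each offspring carries the allele
        p = sum(rng.random() < prob for _ in range(pop_size)) / pop_size
        if p in (0.0, 1.0):  # allele lost or fixed; nothing more can change
            break
    return p

rng = random.Random(42)  # fixed seed so the run is repeatable
trials = 50
# beneficial allele: 10% starting frequency, 5% fitness advantage (hypothetical)
lost_small = sum(wright_fisher(20, 0.1, 0.05, 300, rng) == 0.0 for _ in range(trials))
lost_large = sum(wright_fisher(400, 0.1, 0.05, 300, rng) == 0.0 for _ in range(trials))
print(f"beneficial allele lost in {lost_small}/{trials} small populations")
print(f"beneficial allele lost in {lost_large}/{trials} large populations")
```

In the tiny population, drift wipes out the beneficial allele most of the time despite its advantage; in the larger population, it almost never does. This is a deliberately stripped-down sketch (haploid, non-overlapping generations), but it captures the point that drift can swamp selection when populations are small.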

Gene flow simply alters the genetic frequencies of a population by bringing genetic material in from a neighboring population. This is often good, because it can provide new genetic material to a population, but it can also be bad, because it can bring in traits that are not adaptive for the local environment. Like genetic drift, high rates of gene flow combined with small populations can even swamp selection.

Finally, we have natural selection. All of these mechanisms are important, but selection tends to be the major one that drives dramatic changes. It is simply a mathematical inevitability of two conditions:

  1. There is heritable (genetic) variation for traits.
  2. Those traits affect organisms’ genetic fitness (i.e., their ability to get genetic material into the next generation).

Any time that those two conditions are met, selection will occur and the population will evolve. In other words, if some individuals have a genetic trait that lets them produce more offspring than individuals who don’t have that trait, the individuals with the trait will produce more offspring, pass the trait on to their offspring, and, as a result, that trait will be more common in the next generation. That’s all that natural selection is (sexual selection is best thought of as a special case of natural selection).
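That two-condition recipe can be written down directly. The sketch below uses the standard haploid selection recursion with hypothetical fitness values (1.10 vs. 1.00, i.e., a 10% reproductive advantage): each generation, the advantageous allele’s frequency is simply re-weighted by its relative fitness.

```python
# Haploid selection sketch with hypothetical fitness values: allele A has
# relative fitness 1.10, allele a has 1.00 (a 10% reproductive advantage).
def next_freq(p, w_A=1.10, w_a=1.00):
    mean_fitness = p * w_A + (1 - p) * w_a
    return p * w_A / mean_fitness  # A's share of the next generation's gene pool

p = 0.01  # the advantageous allele starts rare: 1% of the gene pool
history = [p]
for _ in range(100):
    p = next_freq(p)
    history.append(p)

# the frequency rises every generation, from 1% to roughly 99% after 100 generations
print(f"generation 100: {history[-1]:.3f}")
```

Nothing is “trying” to do anything here: the trait becomes more common purely because its carriers contribute disproportionately to each new generation, which is all that selection is.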

Now that you understand the mechanisms that drive evolution, you should be able to easily think of situations in which evolution won’t occur, or, at least, won’t cause substantial changes. Imagine, for example, a large, isolated population (thus limited genetic drift and no gene flow) that is at equilibrium with the environment (thus no selection). Mutations will still occur, but most of them will be neutral, and if the population is already well adapted, strongly beneficial ones are unlikely. Thus, lo and behold, we have a population that undergoes very little evolution. I realize that probably isn’t very convincing to many people, so let’s flesh this out further.

Selection adapts to the current environment

It is crucially important to understand that selection simply adapts organisms to their current environment. It doesn’t give them what they “need,” it’s not working towards some ultimate endpoint, it doesn’t have foresight, and it’s not trying to perfect organisms. It simply acts on the current variation in a population and adapts it to the current environment. We often sum this up with the simple phrase, “evolution is blind.”

This concept is important, because it means that once a population is well adapted to its current environment, there is little left for selection to do. In other words, selection is limited to the available genetic material, so unless a new mutation arises that makes organisms even better suited to the environment, it has nothing to act on. Thus, it simply maintains the traits that are currently beneficial (via stabilizing selection) rather than evolving the population in a new direction.

To be clear, it’s certainly possible for a beneficial mutation to arise, but keep in mind that mutations are random and are not influenced by what would help an organism. Further, most of them are neutral, and many of them get lost to genetic drift before selection can act on them. Similarly, new genetic material could come from neighboring populations, but populations are often isolated.

A population like this would be described as being in a state of stasis or equilibrium with its environment, and for populations in stasis, only a relatively small amount of evolution occurs. There will pretty much always be some selection occurring just as there will always be some low level of genetic drift and mutations, but populations that have reached an equilibrium like this can persist largely unchanged for millions of years and basically just wobble around a mean value rather than moving in a consistent direction. In other words, some small changes will constantly be taking place, but they tend not to accumulate or form the type of grandiose changes that would be obvious in fossils.
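That “wobble around a mean value” behavior can be illustrated with a toy model of stabilizing selection (again, my own sketch with made-up numbers): random mutation and drift nudge the population’s mean trait a little each generation, and selection pulls it back toward the environmental optimum.

```python
import random

rng = random.Random(7)
optimum = 10.0     # trait value favored by the environment (arbitrary units)
mean_trait = 10.0  # the population starts already well adapted
pull = 0.3         # fraction of any deviation removed by stabilizing selection

trajectory = []
for _ in range(10_000):                          # many generations
    mean_trait += rng.gauss(0, 0.05)             # drift/mutation nudge the mean...
    mean_trait -= pull * (mean_trait - optimum)  # ...selection pulls it back
    trajectory.append(mean_trait)

band = max(trajectory) - min(trajectory)
print(f"over 10,000 generations the mean trait wandered within {band:.2f} units")
```

Some change happens every single generation, yet over ten thousand generations the trait never trends anywhere: the small changes don’t accumulate, which is exactly what stasis looks like.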

Indeed, this is well supported in the fossil record, with species often persisting largely unchanged for millions of years. Nevertheless, various factors can shift a population out of stasis and cause it to undergo rapid change. For example, if there is a dramatic change in the environment, or if a population colonizes a new environment, then selection can act again, because the population will no longer be adapted to the local environment. Thus, a change in the environment can cause rapid evolution, whereas a stable environment can keep a population in stasis (there are lots of other factors that affect whether populations stay in stasis, but for sake of simplicity, I’ll leave it there).

What I have been describing here is the concept known as punctuated equilibrium (proposed by Eldredge and Gould), and it is a favorite creationist straw man, so let me briefly set a couple of points straight. First, creationists sometimes portray this as the “hopeful monster hypothesis,” where rapid changes happen essentially overnight. Indeed, I have seen children’s books by Answers in Genesis with silly cartoons, such as a drawing of a pair of puzzled-looking T. rexes staring at a hatched egg that has a chicken poking out of it. That is not, however, at all what punctuated equilibrium actually states, and if a creationist presents it to you in that manner, they are either ignorant about basic evolutionary concepts or they are deliberately lying. Either way, you shouldn’t be getting information from them. In reality, when a species shifts out of stasis, selection still goes through its normal steps, with each generation gradually accumulating more and more differences from the original one, and the process still takes thousands or even millions of years to produce dramatic changes. So, the evolution is only “rapid” when you put it in the context of the grand geological time scale of the entirety of earth’s history.

Second, creationists often present punctuated equilibrium as a problem for evolution and claim that Darwin was fundamentally wrong, but that is another straw man. Darwin wasn’t really wrong, he was just incomplete. He was absolutely correct about the mechanisms that drive evolution, he just didn’t realize that there are situations in which those mechanisms don’t occur (or, more correctly, don’t accumulate changes). This is very analogous to Newtonian physics vs. special relativity. Newton wasn’t wrong, he was just incomplete. His math was spot on and is still taught in every physics course around the world. He simply didn’t realize that there are special cases where his math doesn’t directly apply and other math is needed. Indeed, it would obviously be insane to say that relativity is a problem for physics and discredits the whole field, yet that is exactly what creationists do when they present punctuated equilibrium as a problem for evolution. It is in no way, shape, or form a problem for the theory of evolution. We understand what causes large evolutionary changes to occur, and if those causes aren’t happening, then of course large evolutionary changes won’t occur. Indeed, punctuated equilibrium does not say that evolution by natural selection doesn’t occur, nor does it say that evolution by natural selection isn’t the primary cause of the diversity of life on planet earth. All that it says is that there are periods of stasis in which little evolution occurs, and those periods of stasis end abruptly when things like habitat changes or invasion into a new area cause rapid, large-scale evolution. That is simply an expansion of our understanding of evolution, not a refutation of it.

To simplify that, Darwin was right about how and why evolution takes place, he was just incomplete regarding its rate. In other words, there are lots of periods where changes accumulate gradually just like Darwin proposed, but there are also periods where few changes accumulate, which is the piece that he was missing.

Low diversity

Reaching a point of equilibrium with the environment likely accounts for most of the long periods with seemingly little evolution, but there are other things that can limit evolution as well. For example, low genetic diversity can seriously limit a population’s ability to adapt. Remember, selection and genetic drift simply act on the existing genetic variation, but if there is very little genetic variation, then there is very little for them to act on. Indeed, this is one of the key reasons why conservation efforts for threatened and endangered species often focus on maintaining high genetic diversity. Species with low diversity can’t adapt to environmental changes, new predators, etc. because there is no diversity for selection to act on. Again, this is not a problem for the theory of evolution, because the theory stipulates that selection occurs when there is heritable variation. From that, it also follows that selection will not occur when there is no variation.
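One way to quantify this point is the breeder’s equation from quantitative genetics, R = h²S: the response to selection (R) equals the heritability of the trait (h², the genetic fraction of trait variance) times the selection differential (S, how much the selected survivors differ from the population mean). The numbers below are purely hypothetical.

```python
# Breeder's equation: R = h^2 * S. The numbers are made up for illustration.
def selection_response(heritability, selection_differential):
    return heritability * selection_differential

S = 2.0  # survivors of some new pressure are 2 units above the old mean
print(selection_response(0.4, S))  # ample genetic variation -> 0.8-unit shift
print(selection_response(0.0, S))  # no genetic variation -> 0.0, no evolution
```

However strong the selection pressure (however large S gets), the response is zero whenever the heritable variation is zero, which is exactly why low-diversity populations can’t adapt.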

Some populations can evolve while others don’t

The final point that I want to make is that not all populations of a species have to evolve simultaneously. Remember, selection acts on populations, and it adapts them to their current environment. Thus, two populations can both remain stable if they are in similar environments, or one can adapt while the other remains stable if one is in a changing environment and the other is in a stable environment, or they can both adapt in different directions if they are both in environments that are changing in different directions, etc. Thus, there is absolutely no reason why the evolution of a new species requires the loss of the original species.

Let me give you an example. Imagine that we have a population of butterflies living on the coast, and one day, a large storm blows a bunch of them out to an offshore island that has a very different environment from the mainland. That population on the island will quickly adapt to the island, and if it continues to be isolated from the mainland, it will eventually undergo speciation (i.e., it will split off from the original species and become a new species). Meanwhile, if the environment on the mainland remains fairly stable, the population there can persist in stasis and retain its original form. Thus, you have the evolution of a new species, without the loss of the original.

On the left you have a creationist meme arguing that Pikaia gracilens is the same as a modern eel. In reality, it is very different from a modern eel, and as is depicted on the right, there were many other lineages that evolved into our modern animals. The image was made by Here’s The Evolution, a Facebook page devoted to refuting creationists’ nonsense memes.

You should, at this point, be able to think of lots of situations that would cause this, and the problems with creationists’ arguments should now be obvious. For example, in response to the question, “if we evolved from apes, why are there still apes,” there is no reason why one lineage couldn’t split off and evolve into us, while another lineage remained largely unchanged (also, we share a common ancestor with modern apes rather than being descended from them, so the premise of the question is also wrong). Indeed, the existence of multiple lineages like this is something that creationists often overlook. In other words, when they present an example of something that “hasn’t evolved” they often ignore the fact that there are usually lots of other branches of its family tree that underwent massive amounts of evolution, and the fact that one lineage remained in stasis is in no way a problem for the theory of evolution.

Conclusion

In short, there is absolutely nothing in the theory of evolution that requires all populations to constantly undergo large-scale evolutionary changes. Natural selection simply acts on the existing variation in a population, and it adapts populations to the current environment. Thus, in situations where the environment is stable, there is little variation, etc. populations may persist largely unchanged for long periods of time. This is not a problem for the theory of evolution. We understand the factors that cause evolution, and if those factors don’t occur, then of course evolution won’t either.

Similarly, there is no reason why all populations of a given species have to evolve simultaneously. If one population is in a stable environment while the other is in a changing environment, then the latter will evolve to adapt to the changing environment, while the former remains in stasis in the stable environment. Eventually, the adapting population will accumulate enough changes that they speciate (i.e., split into separate species). Thus, a new species will form, while the original was retained. Again, this is completely consistent with our understanding of evolution, and it is not at all a problem for the theory.

Finally, I want to conclude by pointing out one of the things that I find most frustrating about creationists. Namely, their intellectual dishonesty and complete lack of curiosity. The things that I have been describing in this post are basic, fundamental concepts about evolutionary theory that you would learn in an introductory course on evolution, yet creationists are willfully ignorant of them. Most creationists have no desire to learn what evolution actually says and would rather plow forward with their straw-man arguments. To be fair, there are some who eschew these arguments, but they appear to be the minority, and that is tremendously disappointing, because evolution is truly fascinating. Studying it is enthralling, but rather than bask in the glorious glow of enlightenment, creationists cling to their misconceptions and refuse to acknowledge that they have no clue what they are talking about.

Note: As always, I want to clarify that I am not making any religious arguments in this post. Evolution is a scientific fact, and I am simply explaining the evidence. There are Christians who both accept evolution as fact and believe in god.
