Occam’s razor is about assumptions, not simplicity

[Image: Occam’s razor meme, via The Questionist]

Occam’s razor is an important tool for critical thinking, and it is employed constantly in science. Nevertheless, it is often misunderstood and is frequently (and erroneously) stated as, “the simplest solution is usually the correct one.” This is an unfortunate and misleading way to phrase the razor, because it leads people to conclude that conceptually simpler hypotheses are more likely to be correct, and that isn’t actually true. I have, for example, shared images like the one above multiple times on my blog’s Facebook page, and almost without fail, someone responds to them with something to the effect of, “Occam would have something to say about this.” The reality is, however, that Occam’s razor is actually about making assumptions, not conceptual simplicity. In other words, a “simple” hypothesis is one that doesn’t make unnecessary assumptions, not one that is conceptually simple.

I will elaborate on what I mean by unnecessary assumptions in a moment, but first I want to talk a bit more about conceptual simplicity. If you have ever really studied science, then it should be obvious to you that reality isn’t simple. Indeed, the history of science is largely the history of replacing a conceptually simple understanding of nature with an increasingly complicated understanding. In the pre-science era, many people had a very simple understanding of nature. There were only four elements, the earth was the center of the universe, etc. Those ideas were all replaced with far more complex scientific explanations, but those complex explanations are correct.

This accumulation of complexity also happens within science. Gravity provides a good example of this. Newton’s understanding of gravity was far simpler than the more complicated general relativity model proposed by Einstein, but that doesn’t make Einstein wrong nor does it mean that he violated any guidelines of logical thought by proposing it. Indeed, science has repeatedly confirmed that Einstein was right, and we need his conceptually complex model to account for how nature works. Biology has gone through similar revisions. Our modern understanding of evolution, for example, is far more complicated and nuanced than what Darwin proposed. There are many additional (and very correct) layers of complexity that have been added to our understanding over the years (e.g., neutral evolution, punctuated equilibrium, etc.). Indeed, most, if not all, branches of science have experienced similar increases in complexity, and that’s fine. It doesn’t violate Occam’s razor.

Note: In the examples above (and many other examples from core scientific topics), the original idea was not wrong so much as incomplete. Darwin and Newton were mostly right; there were just some special circumstances that they weren’t aware of.

Having said that, you should never make a model, hypothesis, etc. more complicated than it needs to be, but simply saying, “hypothesis X is complex and hypothesis Y is simple” doesn’t really tell you much about which one is more likely to be correct. Assumptions, in contrast, tell you a great deal about which hypothesis is more likely to be correct.

Assumptions are the heart of what Occam’s razor is actually about, and the correct way to state the razor is that you should never make more assumptions than are strictly necessary. This concept, sometimes referred to as parsimony, is a guiding principle of science. Everything should be based on evidence and known facts, and the further outside of the known you have to step, the more likely you are to be wrong.

If you think about this for a second, it should make good, intuitive sense. Assumptions are, by definition, things that may or may not be true. Thus, the more potentially untrue components your hypothesis has, the higher the probability that it will be wrong. We can describe this mathematically. Let’s say, for sake of example, that you have a hypothesis that makes one assumption and there is a 90% chance that your assumption is correct (pretend we know that somehow). Watson also has a hypothesis, but his hypothesis makes three assumptions, each of which has a 90% chance of being correct. Your hypothesis only has a 10% chance that its assumption is wrong; whereas for Watson’s hypothesis, there is a 27% chance that at least one of the assumptions is wrong. Thus, it is obvious that his hypothesis is less likely to be correct (see this post for probability calculations).
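
If you want to check those numbers yourself, here is a minimal sketch in Python. It assumes the assumptions are independent and uses the illustrative 90% figure from the example above:

```python
# Probability that at least one assumption is wrong, assuming the assumptions
# are independent and each has the same chance of being correct.
def p_at_least_one_wrong(p_each_correct, n_assumptions):
    return 1 - p_each_correct ** n_assumptions

print(round(p_at_least_one_wrong(0.9, 1), 3))  # 0.1   -> your hypothesis (one assumption)
print(round(p_at_least_one_wrong(0.9, 3), 3))  # 0.271 -> Watson's hypothesis (three assumptions)
```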

In case math isn’t your thing, we can use some every-day examples to illustrate this as well. Imagine that you get in your car and try to start it, but when you turn the key, the engine won’t start. It won’t even turn over. Now, there are several possible hypotheses. The most obvious three are that it is the battery, starter, or alternator, but let’s say that you have an additional piece of information. Let’s say that yesterday you had your alternator and battery tested, and they both checked out as fine. Now, which of those three hypotheses is more likely to be correct based on the information you have? It’s obviously the starter, right? You just had the other two tested, so it’s reasonable to conclude that they likely aren’t the problem. This is a perfectly rational and intuitive conclusion, but when we break it down, it’s really just an application of Occam’s razor. Consider, the starter hypothesis proposes only one unknown: there is something wrong with the starter. In contrast, both the battery and alternator hypotheses require additional assumptions, because not only must there be something wrong with one of those car parts, but you also have to assume that the test equipment you used yesterday was faulty, or that a problem happened to develop right after being tested, etc. You have to make an assumption that is not required for the starter hypothesis.

To further illustrate this, we can construct hypotheses with additional assumptions. I could, for example, propose that the starter, battery, and alternator all died simultaneously. Now I have multiple assumptions running, and I trust it is clear how unlikely it is for all of those things to have gone bad at the same time. We can make it even more ridiculous, though, by also assuming that in addition to those three parts, the ignition coil, spark plugs, and spark plug wires are also dead (see note). Do you see my point? Every time that we add another unnecessary assumption, the odds of the hypothesis being correct go down. We don’t need to be making assumptions about spark plugs, ignition coils, etc., and therefore we shouldn’t. We should work with what we know and add other assumptions only if they become strictly necessary.

Note: Yes, I know that bad spark plugs, spark plug wires, and the ignition coil(s) would not prevent the engine from turning over, but that only further illustrates the absurdity of assuming that they also stopped working. 

I want to segue here briefly into a related topic: ad hoc fallacies. These fallacies are prevalent in anti-science arguments, and they are fundamentally failures to apply Occam’s razor. They occur when, after being faced with evidence that defeats your position, you invent a solution (i.e., make an assumption) that serves no function other than attempting to patch the hole in your argument.

Let me give an example. Suppose that a friend is with you when your car won’t start, and suppose that you have bragged to him repeatedly about how your car is impervious to faults and can’t break down. Thus, upon seeing your car’s failure to start, he snidely says, “so much for your car never breaking down.” You are, however, unwilling to acknowledge that your car is capable of having flaws, so instead, you claim that someone must have sabotaged it. That is an ad hoc fallacy. You arbitrarily assumed that someone sabotaged your car even though you have no evidence to support that claim and even though it breaks Occam’s razor by making unnecessary assumptions.

That example may seem absurd and obviously silly, but people do this all the time. For example, anytime that you see someone in an internet debate blindly accuse their opponent of being a “shill,” they are committing this fallacy. They are assuming that their opponent has a conflict of interest rather than accepting contrary evidence. Similarly, when people blindly reject scientific studies based on assumptions that the studies were funded by major companies, they are committing this fallacy. Indeed, anytime that someone resorts to a conspiracy theory to dismiss a contrary piece of evidence, they are committing this fallacy and are being irrational.

This brings me to my final point. Namely, the quality of the assumptions matters as well as the quantity. In other words, some assumptions are more justified than others. Someone could, for example, be pedantic about my car example and argue that saying that the starter died and saying that the starter and spark plugs died both make the same number of assumptions because the first one implicitly assumes that the spark plugs did not die. It should be obvious, however, that (unless you have been having perpetual problems with your car) the default position should be to assume that things work. Every time that you get in your car, you’re implicitly assuming that all of its necessary parts work. Technically, you could argue that the opposite hypothesis (i.e., that none of the parts work) makes the same number of assumptions, but one set of assumptions is clearly more justified than the other (the reasons behind that get into inductive logic and the burden of proof and other concepts that I don’t have time to go into here). The same is true in science and debates. It is not valid to, for example, assume that the entire scientific community is involved in a massive conspiracy, and you can’t try to validate that assumption by saying that everyone else is assuming that the conspiracy doesn’t exist. Those two assumptions are not equal, and you need some concrete evidence before you can claim that there is a conspiracy.

In short, Occam’s razor does not state that the simplest solution is more likely to be correct. Rather, it says that the solution that makes the fewest assumptions is more likely to be correct; therefore, you should restrict your assumptions to only the ones that are absolutely necessary to explain the phenomena in question. A solution can be very complicated and still likely be correct if it is based on facts, not assumptions. Indeed, the answers science produces tend to be conceptually complex, and the history of science is a graveyard of simple ideas that were replaced with more complex ones.


Vaccines are “unavoidably unsafe,” but that doesn’t mean they are dangerous

I have increasingly seen anti-vaccers citing the fact that vaccines are considered “unavoidably unsafe” as proof that vaccines are dangerous and should be avoided. In reality, however, the term “unavoidably unsafe” is just legal jargon that does not mean what anti-vaccers think it means. So let’s talk about what it actually means.

The first thing to realize is that this is a legal term, not a scientific one. This is not a term that scientists use when doing risk assessments or testing the safety of drugs. So right off the bat, we have a huge problem because this argument is conflating legal terms with scientific ones (I’ll return to that at the end). So what does this legal term actually mean?

Basically, it means that there is nothing that can be done to make the product safer without compromising the function of the product. The term comes from the legal document, “Restatement (Second) of Torts, Section 402A,” and it is about protecting manufacturers from frivolous lawsuits, not about providing consumers with health information. The basic idea is simply that companies cannot be held accountable for an injury that arises from unavoidably unsafe products because there was nothing that the company could have done to prevent that injury (inherent in this term is the requirement that the product was manufactured correctly, labelled correctly with adequate instructions for how to administer it, etc.).

Let me give you an example of what that means. The term is generally not applied to food, but if it were, peanut butter could be considered unavoidably unsafe, because some people have allergic reactions to peanut butter, and there is nothing that a peanut butter company can do to prevent that. In other words, there is no way to manufacture peanut butter without that risk being present. Thus (assuming that the product was manufactured and labelled correctly), a peanut butter company would not be liable if someone had an allergic reaction to the peanut butter, because that reaction was not the result of negligence by the manufacturer. Now, does that mean that peanut butter is dangerous? No, obviously not. For the majority of us it is perfectly fine. “Unavoidably unsafe” does not mean that a product is dangerous and should be avoided. Rather, it simply means that there are risks that cannot be removed.

When we apply that to vaccines, we see the same thing. Vaccines have side effects. No one has ever denied that, but serious side effects are rare, and the benefits far outweigh the risks. Indeed, Section 402A specified that “unavoidably unsafe” products should have benefits that outweigh their risks. So labeling vaccines as unavoidably unsafe absolutely does not mean that they are dangerous and should be avoided. It simply means that there are risks that are not the manufacturer’s fault. Also, just to be 100% clear here, everything has risks, including the decision not to vaccinate. People often focus on the risk of taking an action and ignore the risk from not taking that action, but a correct risk assessment has to consider both, and for vaccines, the risk from not vaccinating is much higher than the risk from vaccinating.

Finally, I want to return to my previous comment about this being a legal term, not a scientific term. Those who deny science frequently like to cite courts, legal documents, etc. as evidence of their position, but that is simply not how science works. Even if a legal body like the Supreme Court had said that vaccines are dangerous, that would not be evidence that vaccines are dangerous. Lawyers and judges are not scientists. When they make a statement about science, they are stepping outside of their area of expertise. Further, even if they were scientists, that wouldn’t turn what they say into a fact. In other words, when they say something it doesn’t automatically become true. Whether or not something is a fact has to be determined by conducting studies. That is where scientific evidence comes from, and scientific studies overwhelmingly support the safety and effectiveness of vaccines. Trying to use a legal ruling as evidence against scientific studies is foolhardy. It is also pretty ironic and hypocritical for anti-vaccers (a group that is notorious for distrusting the government) to cite a government ruling as if it gives them a checkmate.

In short, “unavoidably unsafe” is simply a legal term that means the manufacturer is not liable because there is nothing it can do to make the product safer. It does not mean that the product is dangerous and should be avoided.

Note: Some pedants may take issue with the way that I have been using the term “dangerous” and, admittedly, even some documents about “unavoidably unsafe” products use it in a way that is inconsistent with how we usually use the term. So, when I say “dangerous” I mean a product or activity with a high enough chance of causing harm and low enough benefits that it should be avoided. That does not mean, however, that there is no chance of something “safe” causing harm. Swimming, for example, is not something that I would usually consider “dangerous” even though death is possible. Swimming during a thunderstorm, however, I would consider dangerous. See the difference?

Recommended further reading

Schwartz. 1985. Unavoidably unsafe products: Clarifying the meaning and policy behind comment K. Washington and Lee Law Review 42: 1139–1148.


Bt GMOs reduce pesticides, increase yields, and benefit farmers (including organic farmers)

Few technologies have been demonized to the same extent as genetic engineering. According to countless websites, GMOs are an evil scourge on the earth that destroy biodiversity, use exorbitant levels of pesticides, and hybridize rampantly with wild crops, and all of that is before we even get to the (largely false) claims about Monsanto. Reality, however, shows a rather different picture, especially when it comes to Bt GMOs, which are what I want to focus on for this post. You see, one of the problems with GMO debates is that people on both sides tend to lump all GMOs together, but there are actually lots of different types of GMOs with different properties and different pros and cons. Of these different types, Bt GMOs are arguably one of the best, and as I will show, they actually reduce pesticide use, increase crop yields (thus reducing land use), increase profits for farmers, and are safer for the environment than their conventional counterparts (including organic farming). Further, they actually benefit farmers who don’t grow GMOs by providing a protective “halo” around their farms that protects them from insect pests. As a result, non-GMO farms that are near Bt GMO farms actually use less pesticide and enjoy higher profits than they would without the GMO farms.

Note: Most Bt GMOs are not herbicide resistant (i.e., aren’t designed for use with glyphosate [aka roundup]), so if your issue with GMOs is that you don’t like glyphosate, you should be fine with many Bt GMO crops (also you should read the actual scientific literature on glyphosate).

Bt pesticides and Bt GMOs

Before we can talk about the benefits of Bt GMOs, we need to talk about the alternatives and history of Bt. Bt toxin is actually a crystalline protein produced by the bacterium Bacillus thuringiensis, and decades ago, scientists discovered that it was a very effective pesticide against certain groups of insects, while being safe for most other organisms. There are three reasons for this. First, the acidic stomachs of mammals (and many other animals) break the protein down. Second, only part of the protein is potentially dangerous, and it has to be broken down in a highly alkaline environment (which is present in insect guts, but not in most animals) to release the potentially dangerous part. Third, it operates by binding to specific receptors that are found in the guts of certain insects, but not in the guts of other animals. Thus, its mode of action simply doesn’t work on humans and most other animals (for more details about mode of action, see Kumar and Chandra 2008 and this page from Harvard). As a result, it is safe for humans and most animals at anything but an extremely high dose (Mendelsohn et al. 2003; remember, even water is fatally toxic at a high dose [Garigan and Ristedt 1999]).

All of these properties make Bt toxin an ideal pesticide, and it was widely adopted, particularly for organic farming (yes, organic farming uses pesticides as well, just not “synthetic” pesticides). As far as pesticides go, it is a pretty safe one, but it is still not without problems. First, the spraying process takes time and money, uses water, burns fossil fuels, has to be done multiple times a year, etc. Additionally, when it is sprayed on crops, it kills a wide range of insects that were on the crops, not just the ones that actually eat the crops. Further, spraying has to be timed correctly, it doesn’t provide continuous protection, etc. Also, it is far from the only insecticide being used, and many are far worse for the environment. This is where GMOs come in.

Clever scientists figured out a way to genetically engineer plants to produce Bt toxin themselves. As a result, minimal spraying is needed, because the plant produces its own pesticide (keep in mind, this pesticide is very safe for humans). This saves farmers time and money, provides continuous protection (resulting in higher crop yields), and has fewer effects on non-target species. I’ll elaborate on all of these points below.

Note: Pesticides are simply chemicals used to kill pest species. Insecticides, herbicides, fungicides, etc. are all types of pesticides that target specific groups (insects, plants, and fungi, respectively).

Reduced pests, reduced pesticides, increased yields, and increased profits

I sometimes hear those who oppose GMOs claim that GMOs haven’t delivered on their promises, but when it comes to Bt GMOs, that is demonstrably false. Numerous studies have consistently confirmed that Bt GMOs greatly reduce pest populations, which results in less damage to the crops (Hutchison et al. 2010; Lu and Desneux 2012; Dively et al. 2018). Further, all of this is accomplished while using less pesticide (Shelton et al. 2002; Cattaneo 2006; Lu and Desneux 2012). This, of course, also translates to higher yields and higher profits for farmers (Shelton et al. 2002; Cattaneo 2006; Vitale et al. 2010). Indeed, one study estimated that over a 14-year period, Bt maize (aka corn) saved farmers in Illinois, Minnesota, and Wisconsin $3.2 billion, and saved farmers in Iowa and Nebraska $3.6 billion (Hutchison et al. 2010). So, don’t believe the anti-GMO horror story that GMOs are somehow bad for farmers. They aren’t. Farmers choose to use them because they benefit the farmers.

Note: The studies cited in this post came from a wide range of countries, not just developing countries. Australia, the USA, China, European countries, and African countries are all represented in the studies I cited throughout.

Environmental benefits

As explained in the previous section, Bt GMOs require significantly less pesticide than their conventional/organic counterparts. That reduction stems from the fact that non-GMO crops are frequently sprayed with insecticides, whereas Bt GMOs produce their own insecticide, which greatly reduces the need for spraying. So, if your biggest concern with GMOs is that they use too many pesticides, then you should support Bt GMOs, because they use substantially less than other agricultural methods (including organic).

Because of the targeted nature of GMOs, this reduction in pesticide use translates directly to improved biodiversity, while still effectively killing pests. When a field is sprayed with an insecticide (even a fairly safe one like the Bt spray used in organic farming), a large range of insects in the field are affected, even if they aren’t pest species. In other words, things like bees and monarch butterflies (particularly their caterpillars) can be killed by the pesticide, even though they aren’t pest species and don’t eat the crops (depending on the pesticide, there can also be negative effects for other wildlife, aquatic ecosystems, etc). This is inevitable collateral damage from spraying pesticides. The Bt GMOs, however, are very targeted. Insects need to actually eat the plant to get the toxin. As a result, innocent, non-pest species that just happen to be in the field are largely unaffected. Plus, there are no pesticides running into waterways and the other negative effects of pesticides are eliminated.

To be clear, this isn’t speculative; dozens of studies have confirmed it. Indeed, several meta-analyses of the literature have found that Bt GMOs do not adversely affect non-target species, and, compared to crops that are sprayed with Bt, they have significantly better insect diversity (Marvier et al. 2007; Wolfenbarger et al. 2008; Comas et al. 2014). Additionally, one study found that by reducing the use of pesticides, Bt GMOs actually increased populations of insect predators, such as birds (Lu and Desneux 2012). So, if your concern with GMOs is biodiversity, then, once again, you should be supporting Bt GMOs, because they are demonstrably better than the alternatives.

Having said that, there are reports of some non-target insects being affected by Bt GMOs, but these are usually insects that specialize on eating or parasitising pest species (Wolfenbarger et al. 2008). So, in many cases, it’s not that the GMO itself harms them, but rather that the GMO kills their prey. Also, to be 100% clear, studies comparing Bt GMOs to conventional crops that are not sprayed at all have found that there is a slight difference in diversity levels (likely at least partially from the type of ecological interactions I just described; Whitehouse et al. 2005), but the expectation that most crops shouldn’t be sprayed at all is unrealistic (it is a nirvana fallacy) and would result in other environmental problems (e.g., increased land area, tilling methods that damage the soil, etc.).

It’s also worth explicitly stating that the safety of Bt GMOs still holds true even if we look at specific groups that people care greatly about, like bees and butterflies. There was initial concern that the pollen from Bt crops could adversely affect these groups, but that suggestion was based on unrealistic exposure levels, it ignored the fact that they are affected by sprays, and subsequent studies have failed to find evidence that these crops harm bees (Duan et al. 2008) and non-pest butterflies (Mendelsohn et al. 2003). Further, Bt GMOs have one final benefit: reduced habitat loss.

Habitat loss and fragmentation is the single biggest threat to biodiversity (Newbold et al. 2015; Wilson et al. 2016; Young et al. 2016). Further, conversion of natural lands to agriculture is the biggest cause of habitat loss (Foley et al. 2005; Phalan et al. 2016) and is well known to be a serious threat to conservation (Martinuzzi et al. 2015; Tilman et al. 2017). This is one of the key reasons why, as a conservation biologist, I support GMOs. They have a higher yield than conventional crops, which means that they need less land to grow the same amount of food. Therefore, from an environmental standpoint, they are tremendously beneficial. Indeed, increasing crop yields is often argued as a key strategy for preserving biodiversity (Phalan et al. 2016; Tilman et al. 2017).

Let me try to explain it this way. All agriculture is bad for biodiversity. When you take a natural forest or grassland, clear it, and plant crops, you will inevitably lose a large number of species that used to live there. People often seem to have this idyllic view of farms (particularly organic farms) as if all the animals and plants that lived in the forest before it was cleared will somehow continue to live in the organic farm field. This is a fairy tale. Even if you rotate your crops, never till the soil, and never use any pesticides, the biodiversity of that farm field will still be substantially lower than what it was before you turned it into a farm field, because the field doesn’t contain the various habitat types that many animals need (e.g., a forest species is not going to live in a field). People seem to have no trouble realizing this when it comes to things like clearing rainforests to grow palm oil, but for some reason, when it comes to crops in countries like the USA and European countries, people suddenly don’t seem to realize how harmful clearing land for agriculture actually is, even though its negative effects on biodiversity are well-documented (Krauss et al. 2010; Martinuzzi et al. 2015).

Benefits to non-GMO farmers

If you listen to the anti-GMO crowd, they often operate under the pretense of protecting farmers who don’t grow GMOs, but as usual, reality is quite different. Indeed, several studies have confirmed that non-GMO farmers benefit tremendously from having Bt GMO farms near them. This is the case because the Bt GMO farms protect the non-GMO farms via what has been called the “halo effect.” You see, the Bt GMOs do such a good job of killing pest species that the populations of those species decline in the areas where Bt GMOs are grown (Carrière 2003; Wu et al. 2008; Dively et al. 2018). Additionally, as mentioned earlier, Bt GMOs result in increased populations of generalist predators (such as birds) compared to non-GMO crops, and these predators act as biological control agents on the fields in their area (Lu and Desneux 2012). As a result of both of these factors, in the areas around Bt GMO farms, there are fewer pest insects to attack the non-GMO crops, and non-GMO farmers enjoy less crop damage, higher yields, and higher profits than they would if there were no GMO farms around (Hutchison et al. 2010; Wan et al. 2012; Dively et al. 2018). Remember that study that I mentioned earlier that found that Bt corn saved farmers billions of dollars? In Illinois, Minnesota, and Wisconsin, $2.4 billion of those savings went to non-GMO farmers, and in Iowa and Nebraska, $1.9 billion went to non-GMO farmers. Further, this protective halo effect allows non-GMO farmers to use fewer pesticide applications than they would need to otherwise (Wu et al. 2008; Hutchison et al. 2010; Dively et al. 2018). So, both the environmental and economic benefits of Bt GMOs spill over into the non-GMO farms.

Benefits to human health

In addition to the benefits to the environment and farmers, Bt GMOs have also been demonstrated to be safer for humans because of reduced mycotoxins (Pellegrino et al. 2018). These are chemicals that are produced by fungi and can end up in our food when fungi grow on the crops. The Bt crops don’t actually kill the fungi, but they do kill the pest insects that create habitats for the fungi. You see, the fungi like to grow in the holes created by pest insects chewing on the plants. So, fewer pest insects means fewer holes, which means less fungal growth and fewer mycotoxins (Pellegrino et al. 2018). I don’t want to oversell this, because, at least in first world countries, food is usually checked for mycotoxin contamination, so contaminated food usually gets thrown out. Nevertheless, the screening process is not 100% effective, and mycotoxins are still a concern. So, Bt GMOs do in fact reduce this risk.

What about pesticide resistance?

At this point, people usually bring up pesticide resistance. This is the evolved resistance to Bt toxin that ultimately causes Bt to be ineffective at controlling insect populations (it is analogous to antibiotic resistance). This certainly is a problem, but it is not a problem that is limited to GMOs. Indeed, insects were documented evolving resistance to Bt long before GMOs were available (remember, Bt is used as a spray in many non-GMO farms, including organic farms; McGaughey 1985; Tabashnik et al. 1990). So even if all the Bt GMO fields were replaced with organic fields (as some would like to see happen) we would still be having this problem because resistance to a widely used pesticide is an inevitable outcome of natural selection (at least inevitable without careful management).

The second problem with this argument is that resistance to Bt simply means that we can’t use Bt anymore. So, saying that we shouldn’t use Bt because it will create Bt resistant insects makes absolutely no sense. It is literally saying, “we shouldn’t use Bt, because if we use Bt we won’t be able to use Bt.”

Third, although resistance is a problem, it is not an insurmountable one. One current strategy that is widely used is to have “refuge” fields that are not Bt GMOs and are not treated with Bt (Siegfried and Hellmich 2012). Indeed, in the USA, the EPA requires farmers who use Bt corn to have at least 20% of their fields as refuge fields. This is a good strategy because of how natural selection works. I don’t want to get too bogged down in the details here, but in short, Bt GMOs (or Bt sprays) kill the majority of pest insects in the field, and only a handful that have alleles that are resistant to Bt will survive. If those insects mate with each other, we will quickly get a resistant population where all the insects have resistant alleles. By having a nearby refuge, however, we have a large population that is not resistant, making it more likely that the resistant insects will mate with the non-resistant insects, and the alleles for being resistant will be diluted. Indeed, it is well known that gene flow can swamp adaptation in this way (Kawecki and Ebert 2004; Foster et al. 2007; Funk et al. 2012; read this series for more about how evolutionary mechanisms work). Other strategies are also being developed and tested, so this is very much a situation where we should take the necessary precautions to prevent insect resistance, but there is no reason to use insect resistance as a general argument against the crops. As the old saying goes, don’t throw the baby out with the bathwater.
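
To make the gene-flow logic concrete, here is a deliberately simplified, single-locus sketch in Python. The selection and migration values are invented for illustration (they are not taken from the studies cited above, and real resistance-management models are far more sophisticated), but it shows how a refuge that keeps supplying susceptible alleles slows the spread of resistance:

```python
# Toy single-locus model of the refuge strategy (illustrative only; parameter
# values are made up). Each generation: selection on the Bt field favors the
# resistance allele, then mating with individuals from the refuge dilutes it.

def next_freq(p, s=0.8, m=0.0, p_refuge=0.01):
    """p = resistance-allele frequency, s = selection against susceptible
    alleles on Bt, m = fraction of the mating pool coming from the refuge."""
    p_after_selection = p / (p + (1 - p) * (1 - s))
    return (1 - m) * p_after_selection + m * p_refuge

def trajectory(m, generations=6, p0=0.01):
    freqs = [p0]
    for _ in range(generations):
        freqs.append(next_freq(freqs[-1], m=m))
    return [round(p, 2) for p in freqs]

print("no refuge:  ", trajectory(m=0.0))  # resistance sweeps within a few generations
print("20% refuge: ", trajectory(m=0.2))  # the spread is noticeably slower in this toy model
```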

Conclusion

In short, Bt GMOs have tremendous benefits and are actually the opposite of most anti-GMO claims. For example, GMO opponents claim that GMOs increase pesticide use, but Bt GMOs greatly reduce it. Similarly, you may have heard the claim that GMOs are bad for biodiversity, but Bt GMOs are actually far better for it than non-GMO crops (including organic crops) because they are more targeted and have fewer effects on non-target species. Further, habitat loss is the dominant threat to biodiversity, but because Bt GMOs increase yields, they reduce the need for clearing habitat for agriculture. Additionally, they benefit farmers by increasing yields and profits, and they even benefit non-GMO farmers by providing a protective “halo” that increases the non-GMO farmers’ yields and profits and reduces their need for pesticides. So, from both an environmental and economic standpoint, Bt GMOs are better than the conventional and organic alternatives.

Literature Cited

  • Carrière 2003. Long-term regional suppression of pink bollworm by Bacillus thuringiensis cotton. Proceedings of the National Academy of Sciences 100:1519–1523.
  • Cattaneo 2006. Farm-scale evaluation of the impacts of transgenic cotton on biodiversity, pesticide use, and yield. Proceedings of the National Academy of Sciences 103:7571–7576.
  • Dively et al. 2018. Regional pest suppression associated with widespread Bt maize adoption benefits vegetable growers. Proceedings of the National Academy of Sciences
  • Duan et al. 2008. A meta-analysis of effects of Bt crops on honey bees (Hymenoptera: Apidae). PLoS ONE 3:e1415.
  • Foley et al. 2005. Global consequences of land use. Science 309:570–574.
  • Foster et al. 2007. Genetic structure and evolved malaria resistance in Hawaiian honeycreepers. Molecular Ecology 16:4738–4746.
  • Funk et al. 2012. Harnessing genomics for delineating conservation units. Trends in Ecology and Evolution 27:489–496.
  • Garigan and Ristedt 1999. Death from hyponatremia as a result of acute water intoxication in an Army basic trainee. Military Medicine 164:234–238.
  • Hutchison et al. 2010. Areawide suppression of European corn borer with Bt maize reaps savings to non-Bt maize growers. Science 330:222–225.
  • Kawecki and Ebert 2004. Conceptual issues in local adaptation. Ecology Letters 7:1225–1241.
  • Krauss et al. 2010. Habitat fragmentation causes immediate and time‐delayed biodiversity loss at different trophic levels. Ecology Letters 13:597–605.
  • Kumar and Chandra 2008. Bacillus thuringiensis (Bt) transgenic crop: An environment friendly insect-pest management strategy. Journal of Environmental Biology 29:641–653.
  • Lu and Desneux 2012. Widespread adoption of Bt cotton and insecticide decrease promotes biocontrol services. Nature 487:362–365.
  • Martinuzzi et al. 2015. Future land-use scenarios and the loss of wildlife habitats in the southeastern United States. Ecological Applications 25:160–171.
  • Marvier et al. 2007. A meta-analysis of effects of Bt cotton and maize on nontarget invertebrates. Science 316:1475–1477.
  • McGaughey 1985. Insect resistance to the biological insecticide Bacillus thuringiensis. Science 229:193–195.
  • Mendelsohn et al. 2003. Are Bt crops safe? Nature Biotechnology 21:1003–1009.
  • Newbold et al. 2015. Global effects of land use on local terrestrial biodiversity. Nature 520:45–50.
  • Pellegrino et al. 2018. Impact of genetically engineered maize on agronomic, environmental and toxicological traits: a meta-analysis of 21 years of field data. Scientific Reports 8.
  • Phalan et al. 2016. How can higher-yield farming help to spare nature? Science 351:450–451.
  • Shelton et al. 2002. Economic, ecological, food safety, and social consequences of the deployment of Bt transgenic plants. Annual Review of Entomology 47:845–881.
  • Siegfried and Hellmich 2012. Understanding successful resistance management: The European corn borer and Bt corn in the United States. GM Crops and Food 3:184–193.
  • Tabashnik et al. 1990. Field development of resistance to Bacillus thuringiensis in diamondback moth (Lepidoptera: Plutellidae). Journal of Economic Entomology 83:1671–1676.
  • Tilman et al. 2017. Future threats to biodiversity and pathways to their prevention. Nature 546:73–81.
  • Vitale et al. 2010. The commercial application of GMO crops in Africa: Burkina Faso’s decade of experience with Bt cotton. AgBioForum 13:320–332.
  • Wan et al. 2012. The halo effect: Suppression of pink bollworm on non-Bt cotton by Bt cotton in China.
  • Whitehouse et al. 2005. A comparison of arthropod communities in transgenic Bt and conventional cotton in Australia. Environmental Entomology 34:1224–1241.
  • Wilson et al. 2016. Habitat fragmentation and biodiversity conservation: key findings and future challenges. Landscape Ecology 31:219–227.
  • Wolfenbarger et al. 2008. Bt Crop effects on functional guilds of non-target arthropods: a meta-analysis. PLoS ONE 3:e2118.
  • Wu et al. 2008. Suppression of cotton bollworm in multiple crops in China in areas with Bt toxin-containing cotton. Science 321:1676–1678.
  • Young et al. 2016. Patterns, causes, and consequences of Anthropocene defaunation. Annual Review of Ecology, Evolution, and Systematics 47:333–358.

 


Life constantly arises from “nonliving matter”

Creationists often argue that scientists’ lack of knowledge about how the first cell arose is evidence that life could not have arisen “spontaneously from nonliving matter.” There are numerous problems with this argument, some of which I have dealt with before. For example, it is entirely an argument from ignorance fallacy (details here). Further, although it is often used as an argument “against evolution” it is actually an entirely separate concept from the theory of evolution, and the two theories do not rely on each other (details here). In this post, however, I want to focus on a different aspect of this argument. Namely, the fact that it isn’t actually true. Life arises spontaneously from “nonliving matter” all the time. Creationists simply frame the argument in a deceptive way that ignores the chemical nature of living organisms. Every time an organism reproduces, life is arising from nonliving matter. Now, creationists will, of course, object to that claim because that new life came from the reproduction of another living organism, but that is actually entirely irrelevant. As I will explain in detail, life itself is simply a product of highly complex chemistry, and the process of reproduction consists entirely of chemical reactions among nonliving atoms. The living organism simply provides the environment in which that chemistry can take place.

Definition of “spontaneous”

The first thing that we need to talk about in this discussion is the definition of “spontaneous.” In chemistry, spontaneous has a specific meaning. It gets a bit technical with concepts like entropy, but the easiest way to understand it is that a spontaneous reaction is exothermic (meaning that it releases energy into the environment), whereas a nonspontaneous reaction is endothermic (meaning that it requires energy from the environment). This is an oversimplification, but that is not really important for this post.
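
For anyone who wants the technical detail being glossed over here: chemists judge spontaneity by the change in Gibbs free energy at constant temperature and pressure, which combines the heat (enthalpy) term with the entropy term:

```latex
\Delta G = \Delta H - T\,\Delta S, \qquad \text{spontaneous} \iff \Delta G < 0
```

An exothermic reaction (negative ΔH) tends to be spontaneous, which is why the shortcut above usually works, but a large enough entropy increase can make even an endothermic reaction spontaneous.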

The definition used by chemists is not, however, generally what creationists mean when they talk about a “spontaneous” formation of life. Rather, they seem to mean simply an event that could happen naturally without conscious intervention. Although not technical, we can use this definition, but I think we need to carefully clarify it at the outset. By this definition, given the right environmental conditions (including temperature, enzymes, etc.) any chemical reaction is spontaneous. Imagine, for example, that I take a small salt crystal, and drop it into water. The salt will dissolve because the positive sodium ions will be attracted to the negative part of water molecules, while the negative chloride ions will be attracted to the positive part of water molecules (water is a polar molecule). That reaction is (by creationists’ definition) spontaneous. It is an inevitable outcome of the chemistry. No one has to sit there and will the molecules to interact with each other. They just do so automatically because of the way that charges, electrons, etc. behave. You might try to quibble over this example because it involved me (a conscious entity) dropping the salt into the water, but we can easily think of situations where the chemicals would meet without intervention (e.g., a cliff eroding into a lake).

This may seem straightforward so far, but it is critical to clarify that this definition of spontaneous must still apply even when we are talking about reactions that occur inside a living organism. Take photosynthesis, for example. Plants take in water (H2O) and carbon dioxide (CO2) and through a complex series of chemical reactions, they produce oxygen (O2) and glucose (C6H12O6). Various enzymes are involved, and the reaction is endothermic and requires energy from the sun. Thus, it is not spontaneous by the technical chemical definition, but it is spontaneous by the definition that creationists use when they say things like, “life can’t spontaneously arise.” In other words, it is an inevitable outcome of the chemistry in that environment. When you have those chemicals (including the various enzymes, etc.) plus an input of energy from the sun, the reaction will happen. No one has to force the individual atoms to interact.

Note: I will use this definition of spontaneous throughout.

Everything is nonliving matter

There is no such thing as living matter. The dichotomy between “living” and “nonliving” matter is a completely false one that is perpetuated by the way that all of us (including scientists) talk, but it is a critical topic when it comes to abiogenesis, because creationists’ entire argument hinges on this false distinction between living and nonliving matter.

The problem here is that matter is never alive, but when it is arranged in certain ways, it results in chemical reactions that produce the property known as life. In other words, living organisms are composed entirely of nonliving matter. You are, for example, predominantly composed of the elements oxygen, carbon, hydrogen, nitrogen, calcium, and phosphorus. These are nonliving atoms that come together to form nonliving molecules. When those molecules are arranged in a certain way, they chemically react and produce a living cell, and those living cells collectively form a living organism (you). You are alive, and your cells are alive, but the matter that makes those cells is not alive. It’s just a specific arrangement of nonliving atoms.

This may seem like an entirely pointless semantic quibble, but it is actually vital for this discussion, because, since matter itself is not alive, all life, by definition, arises from nonliving matter. When you make a sperm or an egg cell, for example, nonliving matter is simply being arranged into a living cell. Yes, that arrangement is being performed by living cells, but they are themselves just arrangements of nonliving matter, and they are simply providing the chemicals (aka nonliving matter) and environment necessary for spontaneous chemical reactions to arrange the chemicals into a living cell. The matter is never alive at any point in the process.

Now, I can already hear the objection that a living cell is required for that to happen. In other words, this argument states that even though the matter itself is not alive, a living organism is required to arrange the nonliving matter into a living cell. As I will explain in subsequent sections, however, there is absolutely no reason to think that assertion is true.

This is about chemistry, not consciousness

Before I go any further, I need to make a brief comment about consciousness, because someone will inevitably respond to my assertion that life is simply a product of complex chemistry by arguing that “chemistry can’t explain consciousness.”

I want to respond to that in several ways. First, prove it. You are a biochemical machine. You breathe in oxygen, which is transported to your cells thanks to haemoglobin in your blood. That oxygen, as well as glucose from your food, is then used for a complex chemical reaction known as cellular respiration. This produces the molecule ATP, which can be broken down into ADP, resulting in a release of energy. That breakdown of ATP to ADP powers your body. Every function of your body is controlled by chemistry and reducible to chemistry. Even when you are thinking, that is a result of chemistry in your brain (neurotransmitters, sodium ion channels, etc.). Given all of that, there is no good reason to think that consciousness is not also just a product of complex chemistry.

Having said that, however, this topic is actually completely irrelevant to the argument about abiogenesis, and I would really rather just drop it altogether. So, to that end, I will focus on bacteria from here on out. I don’t know anyone who thinks that bacteria are conscious, so we can talk about them from an entirely chemical perspective, even if you think that consciousness is more than chemistry. Further, the first cell is thought to have been something similar to a cyanobacterium, so talking about bacteria is a rational path to take.

Bacterial reproduction

Diagram of a bacterium. Image credit: Ali Zifan via Wikimedia

Now that we have agreed to focus on bacteria, let’s talk about how bacteria reproduce. Bacteria are very simple organisms and really only consist of a few major parts: a cell wall (made of the chemical peptidoglycan), DNA, molecular machines known as ribosomes (made of RNA and protein), cytoplasm (the gooey fluid inside), and a few other bits and pieces. Again, all of those parts are made of nonliving matter, and are themselves nonliving, but when they are arranged correctly, and the correct chemical reactions occur, the cell as a whole exhibits the properties that we use to define life. In other words, ribosomes are not alive, the cell wall is not alive, DNA is not alive, etc., but when all of those things are put together and the correct chemical reactions occur, we describe the entire cell as being alive. Further, we would describe it as “dead” only if those chemical reactions ceased. Thus, biological life is defined by the occurrence of specific chemical reactions.

When a bacterium detects that the environment and resources are good for reproducing, a series of chemical reactions are triggered. Most importantly, the contents of the cell (DNA, ribosomes, enzymes, etc.) are duplicated. The DNA duplication involves a series of enzymes and chemical reactions that read the DNA strands and make identical copies. Again, this is a spontaneous chemical reaction that will occur anytime that the right chemicals are supplied under the right conditions.

Unlike the DNA, the proteins are duplicated by the ribosomes (which are themselves made of RNA and proteins). The ribosomes are protein factories. They receive blueprints from the DNA (in the form of mRNA) and building materials from the rest of the cell (in the form of tRNA), and they arrange those building materials according to the blueprints. Here again, this all happens because of inevitable chemical reactions (given the reactants and environment; see note at the end for more details on how proteins are made).

Diagram of protein synthesis. Image credit: Kelvinsong via Wikimedia

Hopefully at this point the picture is becoming clear. The entire process of forming a new cell is just a long string of chemical reactions. It is true that in nature, we have only observed this entire chain of reactions occurring in living cells, but that is just because the cell provides the right environment, conditions, and reactants for those reactions to take place. If the right conditions occurred outside of a cell those reactions would still happen. Imagine, for example, that we figured out how to artificially produce ribosomes, then put them in a beaker with the correct reactants, mRNA templates, enzymes, tRNA, energy input, etc. Would they form proteins? Yes. In fact, we have done essentially that. We have developed methods known as cell-free protein synthesis that allow you to produce the proteins for a given strand of DNA in a test tube without needing a living cell!

The significance of PCR

For most of the history of life on planet earth, DNA replication only happened in one place: a living cell. During human history, DNA didn’t spontaneously replicate in nonliving environments. Just like the production of proteins and the other steps involved in making a new cell, DNA replication required a living cell. DNA replication is, however, just chemistry (just like the other steps of making a new cell), and scientists saw no reason why it shouldn’t be possible to replicate DNA if the right conditions were created outside of a cell. So, they began studying the chemistry, and after years of work, they figured it out, ultimately resulting in the polymerase chain reaction (PCR).

If you ever take even an introductory course on genetics, you’ll almost certainly have to do a PCR reaction, because it is one of the most common tools in laboratories around the world (a substantial amount of my life has been spent running these reactions). To do PCR, you take a strand of DNA that you want to replicate, add the necessary chemicals (enzymes, bases, primers, etc.), put the mixture into a thermocycler that creates the correct temperature profile for the reaction to occur (i.e., the environment), and lo and behold, you replicate DNA without needing a living cell.
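
For readers who have never run one, here is roughly what a thermocycler program looks like, written out as a small Python sketch. The temperatures, times, and cycle count are generic textbook values, not a protocol for any particular polymerase or kit:

```python
# Illustrative PCR thermal profile (typical textbook values; real protocols
# depend on the polymerase, the primers, and the length of the target).
pcr_program = {
    "initial_denaturation": ("95 C", "2-5 min"),
    "cycles": 30,  # commonly ~25-35 cycles
    "per_cycle": [
        ("denature", "95 C", "~30 s"),   # separate the two DNA strands
        ("anneal",   "55 C", "~30 s"),   # primers bind the target sequence
        ("extend",   "72 C", "~1 min"),  # polymerase copies each strand
    ],
    "final_extension": ("72 C", "5 min"),
}

# Each cycle roughly doubles the target DNA, so 30 cycles gives ~2^30 copies.
print(f"theoretical amplification: ~{2 ** pcr_program['cycles']:,}x")
```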

Why is that possible? Why is it possible to take a process that, in nature, requires a living cell, and do it without a living cell? Because the process is entirely chemical! Again, the cell just provides the environment necessary for that reaction to occur, but if you can replicate a suitable environment outside of a cell, then you can do the same reaction. Further, there is absolutely no reason to think that this only applies to DNA replication. Every step involved in making a cell is just a series of chemical reactions, and there is absolutely no reason why a living cell should be the only environment in which those reactions are possible.

Additionally, it is important to remember that the series of reactions that occur in living cells today are more complicated than would be necessary to form a rudimentary cell. Indeed, scientists are actively studying chemical reactions that can produce primitive versions of various cellular components without requiring a living cell.

Bringing it all together

Let’s recap, shall we? Matter itself is not living. Rather, when nonliving chemicals are arranged together and react in certain ways, they produce living organisms that consist of nonliving matter. Further, the processes and actions of these living organisms are simply the result of complex chemical reactions. Additionally, these chemical reactions occur “spontaneously” in that they will occur on their own given the right chemicals in the right environment. Indeed, all living organisms are accurately described as biochemical machines, with these “spontaneous” reactions driving their functions.

As a result of all of this, it is completely fair to say that life constantly arises from nonliving matter, because each new cell is formed by arranging nonliving matter into a configuration that will result in the chemical reactions that produce the properties that we describe as life. It is true that currently these reactions do not occur in nature outside of a cell. In other words, each new cell is formed by existing cells; however, because that formation process is entirely chemical, there is no reason to think that those chemical reactions could not occur elsewhere. To put that another way, living cells simply provide the right environment and resources for those reactions to occur, but if the right environment occurred outside of a cell, those reactions would still occur even in the absence of a cell. Indeed, we have clearly demonstrated this by replicating a key component of cellular reproduction (i.e., DNA replication) in the lab. Further, it is likely that the environment on planet earth billions of years ago would have also been conducive to these types of reactions.

In short, there is absolutely no reason to think that life couldn’t form “spontaneously from nonliving matter,” because matter is never alive, and the formation of life is nothing more than a complex series of chemical reactions.

Note: Someone is probably getting ready to point out that although PCR replicates DNA, it is not exactly the same reaction used by living cells. That is true, but completely irrelevant. There are lots of different variants of the DNA replication process found in nature, and it is entirely possible the first cells used mechanisms that were different from those of current cells. So, all that matters is that we were able to replicate DNA in the lab. In other words, the point is simply that a living cell is not required for that task to be accomplished. The end product is what matters, not the mechanism through which it happened.

More details on protein synthesis: The process here is complicated, but the simplest way to explain it is like this. DNA is a chemical molecule, and the four bases of DNA (ATCG) are four different chemical molecules. When the cell sends blueprints to the ribosome, it transcribes the DNA into mRNA, which also consists of four bases (AUCG; mRNA is a single-stranded complementary copy of the DNA strand with T replaced with U). The bases on mRNA are arranged into sets of three, known as codons, and each codon codes for a specific amino acid. Once this strand of mRNA is in the ribosome, it will react with tRNA, which consists of anticodons attached to an amino acid. The anticodon is the complement of the codon, and, because of the chemistry, anticodons (and, as a result, the amino acids they carry) are specific to specific codons. Thus, each anticodon reacts with a specific codon, ultimately resulting in its amino acid getting added to the amino acid from the previous anticodon. In other words, the ribosome matches the codons with the correct amino acid, resulting in reactions that bind the amino acids together into chains, and those chains fold to form proteins. I realize that may sound like the ribosome is a conscious entity that is consciously deciding how to do this, but it is not. All of this is 100% chemistry. In the presence of the right enzymes, chemical reactions will occur with the codons, anticodons, ribosome, amino acids, etc., ultimately causing the amino acids to string together in a certain order that is dictated by the chemistry of the RNA, which is in turn dictated by the chemistry of the DNA.
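
Since the note above can make the ribosome sound like it is making choices, here is a toy Python sketch of the lookup it is effectively performing. Only a handful of the 64 real codons are included, purely for illustration:

```python
# Toy illustration of mRNA translation: which amino acid gets added next is
# simply a lookup determined by codon/anticodon chemistry, not a decision.
# (Only a few of the 64 codons of the standard genetic code are shown.)
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "AAA": "Lys", "UGG": "Trp", "UAA": "STOP",
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):      # read the codons in sets of three
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCAAAUGGUAA"))  # ['Met', 'Phe', 'Gly', 'Lys', 'Trp']
```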

 Related post
Abiogenesis: An unsolved mystery is not evidence of a creator


Methodolatry: An over-reliance on placebo-controlled trials

Randomized placebo-controlled trials (RCTs) are often referred to as the “gold standard” for clinical research, and they certainly are useful and powerful tools. They aren’t, however, the only reliable tools, nor are they always the best tools. Indeed, there are many situations where other methods are actually superior. Nevertheless, among the general public, there is a common misconception that RCTs are the only reliable method and nothing else counts as valid science. For example, anti-vaccers often demand RCTs showing that vaccines are safe and effective (see note), and they routinely dismiss other methods, such as cohort studies, even when those methods are more suitable to answering the question at hand. At an even further extreme, I have encountered climate change deniers who say that climate science isn’t real science because it doesn’t use RCTs (i.e., we don’t take multiple earths and burn fossil fuels on half of them while using clean energy on the other half). This is, of course, nonsense and represents a fundamental misunderstanding of how science works. Indeed, most fields of science don’t rely on RCTs. Given how common these misconceptions are, I want to spend a few minutes talking about some of the pitfalls and problems with RCTs and explain why, in some cases, other methods are superior (there is actually a term for an over-reliance on RCTs: methodolatry).

Note: There are actually lots of RCTs for vaccines; anti-vaccers just ignore them. These studies usually come from the development of new vaccines, and they are quite good at showing that vaccines are effective; however, while they do support the safety of vaccines, for many safety concerns, other methods are actually superior (see the rest of the post).

Sample size

The biggest problem with RCTs is that they are only powerful when they have a large enough sample size, but getting the sample size necessary to test for rare effects is often impossible. To conduct an RCT on people, you need a large group of people who are eligible for the trial, are willing to come to your clinic/lab, are willing to risk taking an experimental treatment, are willing to risk being in the placebo group, will reliably follow the rules of your study, and will reliably return at the appropriate times for follow-ups. All of that is difficult and expensive to achieve, and, as a result, RCTs often only have a few dozen participants, and an RCT with a few hundred participants would be considered a large study.

Having a few dozen participants is fine if you are testing something with a large effect, but it is problematic if you are testing a rare effect. Imagine, for example, that you were testing a drug that caused the desired effect in 80% of test subjects and the background rate of that effect (i.e., its rate in the placebo group) was only 10%. In that case, if you had 60 participants (30 in each group), you’d expect to have 3 “successes” in the placebo group and 24 or more in the experimental group. That is a large difference that the RCT could easily detect (it would be statistically significant with a P value <0.0001).
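
If you want to check that number yourself, here is a quick sketch using SciPy’s Fisher’s exact test on those expected counts (just an illustration; any standard test for comparing two proportions will give you essentially the same answer):

```python
from scipy.stats import fisher_exact

# Expected counts from the example: 24/30 "successes" on the drug vs 3/30 on placebo.
table = [[24, 30 - 24],   # experimental group: successes, non-successes
         [3,  30 - 3]]    # placebo group: successes, non-successes

odds_ratio, p_value = fisher_exact(table)
print(f"P = {p_value:.1e}")   # far below 0.0001, so the difference is easily detected
```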

Now imagine that you are testing something much more rare. Let’s say that you are testing for an adverse event that has a background rate of 1% (i.e., 1% of people will have it even if they didn’t take the drug being tested). Further, let’s assume that the drug being tested does actually cause the side effect in an additional 1% of people (i.e., it causes it in 1 in 100 people, on top of a background rate of 1 in 100). Now, we expect 1% of the control group to have the adverse event, and 2% of the experimental group to have it. That’s a problem if we only have 30 people in each group in our RCT. Mathematically, we’d only expect 0–1 people in each group to have the event, and we can’t distinguish that statistically. Further, even if we had a full 100 people in each group, we’d only get 1 event in the control group and 2 in the experimental group, which is far from being statistically significant even though the drug did actually have an effect (P = 1.0). Even at an enormous sample size of 1,000 people per group, we would not be able to detect the effect of the drug (P = 0.0978). Indeed, we would need a sample size of nearly 1,400 people per group before we would be able to detect a significant difference, and even then, we could just barely detect it.
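
This back-of-envelope logic can also be scripted. The sketch below is just an illustration: it applies Fisher’s exact test to the expected counts at each group size and then searches for the group size at which the expected difference first becomes statistically significant. It gives P values similar to the figures quoted above, and the crossover lands in the same ballpark as the nearly-1,400-per-group figure (the exact numbers depend on rounding and on which test you use).

```python
from scipy.stats import fisher_exact

def p_for_expected_counts(n_per_group, background=0.01, added_by_drug=0.01):
    """Fisher's exact P value for the *expected* counts at a given group size."""
    control_events = round(background * n_per_group)
    drug_events = round((background + added_by_drug) * n_per_group)
    table = [[drug_events, n_per_group - drug_events],
             [control_events, n_per_group - control_events]]
    return fisher_exact(table)[1]

for n in (30, 100, 1000):
    print(f"n = {n} per group: P = {p_for_expected_counts(n):.3f}")

# Search (in steps of 50) for the smallest group size where the expected
# difference reaches significance at the usual 0.05 threshold.
n = 50
while p_for_expected_counts(n) >= 0.05:
    n += 50
print(f"Expected difference first significant at roughly {n} per group")
```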

[Image: the standard hierarchy of evidence]

This is the standard hierarchy of evidence, and in many cases it is correct, but it is not absolute; other factors, such as sample size, have to be considered when comparing certain methods.

This sample size issue is a huge problem that severely limits RCTs. Fortunately, other methods like case-control studies and cohort studies are available. I talked about both of these methods in detail here, but briefly, case-control studies basically work backwards from RCTs. Rather than taking a group of people, giving half of them a treatment, then measuring an outcome of interest, case-control studies start by identifying a group of people who have an outcome of interest (the cases), then matching them to people who are as similar as possible in age, sex, race, etc. but lack the outcome of interest (the controls). Then, they look for differences in potential causes of interest. If you were interested in vaccines and autism, for example, you could take a group of people with autism, match them with a group of people without autism, then see if the vaccination rates differed between the groups (this has been done, and the vaccination rates don’t differ; DeStefano et al. 2004; Smeeth et al. 2004; Price et al. 2010; DeStefano et al. 2013; Uno et al. 2015).

Cohort studies are similar in that they are observational, rather than actually administering a treatment; however, they follow more of a traditional approach (as opposed to the backwards approach of case-control studies). They select a group of people who do not currently have the outcome of interest, some of whom are expected to receive the treatment of interest. Then, they follow those people to see whether the treatment is associated with the outcome. For example, you could take a large group of children who have yet to be vaccinated and follow them; presumably, some of them will have parents who vaccinate, and others won’t. You can then compare things like autism rates between the vaccinated and unvaccinated children without having to be the one who administers the vaccine. These types of studies can also be done retrospectively by using medical records (here again, this has been done for vaccines and autism with no evidence that vaccines cause autism; Andrews et al. 2004; Madsen et al. 2002; Jain et al. 2015; Verstraeten et al. 2003).

Cohort studies and case-control studies do make it more difficult to control for confounding factors, but when they are done well, it is possible to account for many of them and, at times, even randomize. Further, their observational nature makes it much easier to achieve very large sample sizes. Studies with hundreds, thousands, and even hundreds of thousands of participants are common for these designs. Indeed, for the topic of vaccines and autism, a meta-analysis of these studies had a sample size of over 1.2 million children (Taylor et al. 2014), which is far more power than we could hope for from an RCT. Think of it this way: if an RCT, a case-control study, and a cohort study all had the same sample size, the RCT would be more powerful, but if the RCT has a small sample size, the outcome being tested is rare, and the case-control and cohort studies have substantially larger sample sizes, then the case-control and cohort studies will often yield more meaningful results (assuming that all the studies were set up correctly with adequate case-matching, controls, etc.).

All of this is very important for concerns over vaccine safety, because the known side effects of vaccines are quite rare, often occurring at rates of 1 in 1,000 or lower. As a result, examining them with RCTs isn’t feasible, and underpowered RCTs would actually make vaccines look safer than they are, because rare side effects would simply go undetected. So when anti-vaccers demand RCTs, they are actually demanding an unsuitable method that almost guarantees the opposite result from the one they are hoping for.

This is also true for the adverse events that anti-vaccers erroneously attribute to vaccines. Take autism and vaccines as an example. The CDC currently puts autism rates at 1 in 68. Further, we know that autism has a genetic component and that unvaccinated children can still develop it. So, even if we assume that 50% of autism cases are caused by vaccines (they aren’t), we would need an enormous sample size, with over 1,000 children in each group, before an RCT could detect a significant effect, and getting that sample size for an RCT just isn’t feasible in most cases. Case-control and, especially, cohort studies, however, can achieve those sample sizes, and they have. Madsen et al. 2002, for example, was a cohort study on vaccines and autism with a total sample size of over 400,000 children, which is far, far larger than could be achieved in an RCT (and, like all of the others, it failed to find a significant association between vaccines and autism).
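
As a rough sanity check on that “over 1,000 children per group” figure, here is a standard two-proportion power calculation using statsmodels. It is only a sketch built on the deliberately generous assumptions in the paragraph above (an overall autism rate of 1 in 68, with half of those cases hypothetically attributed to vaccines), and it suggests you would need on the order of 1,500 children per group for a conventionally powered RCT, which is consistent with the point being made here.

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Illustrative assumptions only: autism rate of 1 in 68 in the vaccinated group
# vs 1 in 136 in the placebo group (i.e., the counterfactual premise that half
# of all autism cases are caused by vaccines, which they are not).
effect_size = proportion_effectsize(1 / 68, 1 / 136)

n_per_group = NormalIndPower().solve_power(effect_size=effect_size,
                                           alpha=0.05, power=0.8)
print(round(n_per_group))   # on the order of 1,500 children per group
```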

You don’t always need a placebo

Some people might try to object to everything that I said above by insisting that, because of the nature of placebo effects, the control has to be a placebo, not simply a lack of treatment. That line of reasoning is, however, nearly always based on a misunderstanding of how placebo effects work. They aren’t magical, and they aren’t going to do something like protect you from an infectious disease. Conversely, nocebo effects (which occur when people respond negatively to a placebo) aren’t going to give you autism or anything like that. Thus, whether or not the control needs to be a placebo depends entirely on the type of outcome being researched. When the outcome is something subjective, like pain, placebos are critical. They are also important for outcomes that are subject to regression to the mean, such as the treatment of an existing ailment; however, when the outcome is something like contracting an infectious disease or a developmental condition like autism, placebos are far less important.

For more on placebo effects, I recommend this post by Science-Based Medicine.

Ethical issues

Another problem with RCTs is that, at times, they are unethical. This is the case when there is already compelling evidence from other sources that the treatment is likely dangerous, or that withholding the treatment is likely dangerous. Consider, for example, an experimental drug that kills half the test subjects during animal testing. Even though RCTs in humans rank far higher than animal studies on the hierarchy of evidence, in that case we would have to be content with the animal trials, because it would clearly be unethical to administer that treatment to humans. The inverse is also true: when a treatment is already known to be highly beneficial, it would be unethical to withhold it.

Indeed, this is the reason why RCTs for vaccines are often limited to the development of new vaccines. We know from population-based studies, cohort studies, case-controlled studies, etc. that vaccines are exceedingly beneficial and save millions of lives. Therefore, it would be unethical to give a child a placebo and expose them to potentially deadly diseases.

Most fields of science don’t use RCTs

Finally, I want to address the notion that fields like climate science aren’t real science because they don’t use RCTs. First, as should be obvious by now, even in fields like clinical research, RCTs aren’t the only option. Second, and more importantly, science is simply a systematic process of making and testing falsifiable predictions, and for many fields, those predictions do not require RCTs to test. Consider evolutionary biology, for example. If you have a hypothesis that two species share a common ancestor, you can make predictions about what genetic and fossil patterns you should see if those species share a common ancestor, then you can examine the fossils and genetics and see if those predictions are correct. Testing that hypothesis does not require an RCT and an RCT wouldn’t even be useful for testing it. The same is true for fields like climatology. RCTs are neither necessary nor possible for studying past climates or our current climate, and that’s fine. The definition of science is not, “randomized placebo controlled trials.” Rather, it is a systematic process of making and testing predictions, and the exact methods used to test those predictions vary greatly from one field to the next.

Conclusion

In summary, placebo-controlled trials are extremely useful and powerful tools, but they are not the only ones available, nor are they always the best ones. They are often limited by sample size, and in many cases, other methods are actually superior. Further, it is unethical to use them when substantial benefits or serious harms have already been established by other methods. Therefore, the necessity and utility of RCTs have to be evaluated on a case-by-case basis. For some questions, using an RCT is extremely important, even vital, but in other cases, methods other than RCTs may suffice or even be superior.


Literature cited

  • Andrews et al. 2004. Thimerosal exposure in infants and developmental disorders: a retrospective cohort study in the United Kingdom does not support a causal association. Pediatrics 114:584–591
  • DeStefano et al. 2004. Age at first measles-mumps-rubella vaccination in children with autism and school-matched control subjects: a population-based study in metropolitan Atlanta. Pediatrics 113:259–266
  • DeStefano et al. 2013. Increasing exposure to antibody-stimulating proteins and polysaccharides in vaccines is not associated with risk of autism. Journal of Pediatrics 163:561–567
  • Jain et al. 2015. Autism occurrence by MMR vaccine status among US children with older siblings with and without autism. JAMA 313:1534–1540
  • Madsen et al. 2002. A population-based study of measles, mumps, and rubella vaccination and autism. New England Journal of Medicine 347:1477–1482
  • Price et al. 2010. Prenatal and infant exposure to thimerosal from vaccines and immunoglobulins and risk of autism. Pediatrics 126:656–664
  • Smeeth et al. 2004. MMR vaccination and pervasive developmental disorders: a case-control study. Lancet 364:963–969
  • Taylor et al. 2014. Vaccines are not associated with autism: an evidence-based meta-analysis of case-control and cohort studies. Vaccine 32:3623–3629
  • Uno et al. 2015. Early exposure to the combined measles-mumps-rubella vaccine and thimerosal-containing vaccines and risk of autism spectrum disorder. Vaccine 33:2511–2516
  • Verstraeten et al. 2003. Safety of thimerosal-containing vaccines: a two-phased study of computerized health maintenance organization databases. Pediatrics 112:1039–1048

The existence of real conspiracies does not justify conspiracy theories

Most science deniers are conspiracy theorists. Many of them don’t like to think of themselves as conspiracy theorists and would even ardently deny that they deserve that label, yet when you present them with peer-reviewed evidence for anthropogenic climate change, the safety of vaccines, the safety of GMOs, etc., they almost invariably respond by asserting that those studies aren’t valid evidence because vast corporations, governments, etc. have bought off all of the scientists, doctors, regulatory bodies, etc. That claim has no evidence to support it and is a textbook example of a conspiracy theory.

As a result of the conspiratorial nature of science-deniers, conspiracy theories are a frequent target of my blog/Facebook page, but almost any time that I post about the illogical nature of conspiracy theories, I get irate responses from people who insist that conspiracy theories are not inherently illogical, and they do this based on one of two lines of logic. Either they bring up the fact that real conspiracies do exist and have been discovered, or they cite agencies like the FBI that investigate criminal conspiracies and they incorrectly assert that I am suggesting that those agencies are inherently irrational (i.e., that FBI agents are conspiracy theorists). Both of these arguments are wrong and rely on infuriating semantic games rather than actual facts or logic, but I encounter them frequently enough that I want to spend a few minutes explaining the problems with them.

Both of these lines of reasoning rely on conflating real conspiracies with conspiracy theories, but that is semantic tomfoolery. The term “conspiracy theory” generally refers to self-reinforcing ideas that rely on inserting assumptions into gaps in our knowledge, have little or no actual evidence, and conveniently excuse any evidence that is presented against them. That is a very different thing from the type of investigation that an organization like the FBI does, and we don’t use the term “conspiracy theory” to refer to that type of investigation.

Let me give an example to illustrate this. On many occasions, I have shown a science denier a peer-reviewed paper that discredits their view, only to have them claim that the authors were paid off. Then, I showed them the conflicts of interest section of the paper, which clearly showed that no conflicts of interest existed. At that point, they, of course, said that the payment was secret and, therefore, not reported. When I asked them for evidence to support that claim, however, they couldn’t provide it. It was an assumption that they were making simply because the evidence did not fit their view. This is how conspiracy theories operate. Any evidence that conflicts with the theory is explained away as part of the conspiracy, and a lack of evidence to support claims is also justified as simply being part of the conspiracy (e.g., claiming that large corporations are silencing scientists and preventing them from publishing).

Now, let’s contrast that with actual investigations of actual conspiracies. Imagine, for example, that the FBI was investigating corruption, found no evidence of it (i.e., no conflicts of interest), but ignored that lack of evidence and assumed that it was simply part of the conspiracy. Obviously, that would be a really bad investigation, and they could never get a conviction out of it. If they failed to find evidence to support what they thought was true, they would have to move on. They couldn’t just ignore any evidence that disagreed with them. To put that another way, agencies like the FBI rely on evidence, not conjecture, when conducting their investigations, and their hypotheses change as new evidence arises.

Do you see the difference? Conspiracy theories conveniently excuse contrary evidence by writing it off as part of the conspiracy, whereas real investigations are based on the available evidence and don’t blindly ignore any evidence that disagrees with them. Having said that, real conspiracies certainly do exist, and many of them have been uncovered, but the fact that real conspiracies exist does not mean that your conspiracy theory is logical or justified.

To give an example of why the existence of real conspiracies doesn’t justify conspiracy theories, imagine that I have a friend named Bob, and for one reason or another, I decided that Bob was a murderer. I didn’t have any real evidence, but I “just knew” that I was right. Now, imagine that you confronted me about this and demanded evidence for my claims, and I responded by saying, “lots of real murderers have been caught; it happens all the time; therefore, it is rational for me to think that Bob is a murderer.” Would my reasoning be correct? Obviously not. The fact that there are murderers does not in any way, shape, or form make it rational to think that Bob is a murderer. I need actual evidence specifically showing that Bob murdered someone. The same thing is true with conspiracies. The fact that real ones exist doesn’t mean that your theory is justified. You need actual evidence for your view to be rational.

Now, at this point, conspiracy theorists will inevitably protest and claim that they do, in fact, have evidence. However, every time that I have asked to see that evidence, I have been sorely disappointed. Inevitably, the “evidence” takes the form of blogs, YouTube videos, and baseless conjecture, often espousing ideas that have been thoroughly investigated and debunked. For example, I still encounter people who cite “climategate” as evidence that climatologists are involved in a conspiracy, despite the fact that multiple independent and well-respected scientific bodies examined the situation and concluded that no wrongdoing or data manipulation had occurred. Of course, the conspiracy theorists inevitably respond by asserting that those scientific bodies are also part of the conspiracy, but that just illustrates my point. Conspiracy theories are irrational precisely because they twist any evidence to fit the conspiracy. Think about it: what could you possibly show a conspiracy theorist to convince them that the theory was wrong? Nothing, because no matter what evidence you show them, they will argue that the evidence is also part of the conspiracy.

My point in all of this is really simple. The term “conspiracy theory” specifically refers to imagined conspiracies that have no real evidence to support them and inherently rely on making assumptions to fill gaps in knowledge, rather than actually basing views on the available evidence. The existence of real conspiracies does not justify these conspiracy theories, nor should you play semantic games to try to equate conspiracy theorists with evidence-based investigative bodies like the FBI. So please, if you see a post about conspiracy theories, spare us all from your pedantry.

Note: I want to be clear that the use of the word “theory” in the term “conspiracy theory” is very different from its use in science (it is far more similar to its use in the term “movie theory”). In science, a theory is an explanatory framework that has been rigorously tested and shown to have a high predictive power. It is not simply a guess, nor does it indicate that we are unsure of its veracity.


Training to be a scientist: It’s not an indoctrination and it’s more than just reading

Many people seem to have some rather strange preconceptions about how higher education in science works. I often encounter people who insist that graduate school “indoctrinates” students, robs them of creativity, and “brainwashes” them to “blindly accept scientific dogma.” Further, others are under the delusion that training in the sciences consists of nothing but lectures and reading, and they think that just spending some time on Google is sufficient to learn everything that they would get from actually earning an advanced degree. It probably shouldn’t be surprising that the people making these claims have never received any formal training in science and are, in fact, projecting their own biases onto a system that they know nothing about. Having said that, it’s not particularly surprising that so many people are so hopelessly wrong about how graduate school works, because it is a unique system that is unlike most other forms of education. Therefore, I thought it would be a good idea to explain what advanced training in the sciences is actually like, and in so doing, I want to dispel common myths about “indoctrination” as well as the notion that reading articles on Google is equivalent to earning an advanced degree.

Note: There is a lot of variation in graduate programs, and different countries often have different systems (AU and the USA are quite different, for example). So, I will focus on the commonalities and highlight some differences where relevant.

Note: I earned a BSc and MSc at two different universities in the USA, and I am currently about three quarters of the way through a PhD in AU.

Time frame

If you plan on having a career in science, you will almost certainly need an MSc and, in most cases, a PhD. To earn those degrees, you will first have to earn a BSc, which will take you 3–4 years, depending on where you earn it. Typically, people then go for an MSc, which is usually another 2–3 years, followed by a PhD, which is another 4–6 years (generally speaking). There are exceptions though. For example, in the US, some schools allow you to skip the MSc and go straight for the PhD, but you generally have to have a fair amount of previous research experience, and those PhD programs generally take a bit longer than programs that have an MSc as a prerequisite. Regardless of the path you take, however, you are probably going to need at least a decade of higher education (often more) before you can get a job as a researcher (also, even after earning a PhD, if you want to go into academia, you have to spend several years doing post-docs, but that is a whole other topic).

I want to pause here for a moment to consider the frequent claims on the internet that someone has done “thousands of hours of research.” For example, anti-vaccers often like to claim that they have spent thousands of hours studying vaccines and, therefore, are just as qualified as scientists and doctors. Let’s do some math on that for a minute. Graduate students generally work a minimum of 60 hours a week, and don’t take many holidays (undergrad is similarly strenuous in the sciences). Thus, completing training in the sciences will usually take a minimum of 60 hours a week, for 50 weeks a year (assuming two weeks of vacation), for 10 years (you don’t get summers off as a graduate student, and your undergraduate summers will often be spent on internships). That’s 30,000 hours of training. Further, even if we want to be absurdly, unrealistically generous and say that someone completes their degrees in eight years, working only 50 hours a week, that is still 20,000 hours! Please keep that in mind the next time that someone claims to know more than scientists just because they spent some time reading Google! To put that another way, even if grad school consisted of nothing but reading (which, as I’ll explain in a minute, it doesn’t), that would still mean that people with advanced degrees are far, far more well-read than someone who reads blogs in their spare time.

It’s about research, not coursework

Probably the biggest misconception about advanced education in the sciences is the notion that it is mostly coursework. Indeed, I often have friends and family members ask me things like, “when do your courses start for the year?” or “do you have any big tests coming up?” But that’s not actually how grad school works. There are courses, but the emphasis is on research.

To expand on that, earning an undergraduate degree in science does, in fact, require a lot of courses. However, many of those courses involve a laboratory or field component. These courses have traditional lectures, but they also give you hands-on training and experience, and that hands-on learning is vital. Reading about how to set up an experiment and actually setting up an experiment are two very different things. Further, if you plan on going to graduate school after earning your bachelor’s, you will probably (but not always) need to gain some real research experience by doing summer research internships, assisting graduate students with their research, doing independent research with a professor, etc. Regardless of how you gain that research experience, the result is the same: you get actual experience with real research and one-on-one mentoring from experienced researchers. Thus, even at the undergraduate level, you should be gaining knowledge from hands-on learning, not just lectures and reading.

At the graduate level, there is a lot of variation depending on where you go. The US, for example, generally does have some coursework requirements, but they are fairly minimal. Where I did my MSc, for example, I took about two courses a semester for my first three semesters, and didn’t take any courses my last semester. Most of my time, however, was spent doing research. That is fairly typical for US universities, and if you are doing a PhD in the US, you can expect to take several courses during your first two years or so (while simultaneously doing research), then do nothing but research for the last several years.

The Australian system differs a bit. It assumes that you have already taken enough courses, and you jump straight into pure research. As a result, Australian degrees are typically on the short end of the spectrum. Also, although there are no required courses, you do often have to complete a certain number of “workshop” hours, for example by taking a week-long course on statistics, a workshop on grant-writing, etc.

Lego Grad Student is one of the most entertainingly accurate representations you’ll ever see of what it is like to be a graduate student. It is one of my few sources of joy in life, and I highly recommend that you follow him on Facebook, Twitter, etc.

So, if grad school isn’t focused on coursework, then what do grad students actually do? In short, we do research. To complete a graduate degree, you pick a topic that you want to study, design a series of experiments* to test your hypotheses about that topic, conduct those experiments, analyze the results, and write a formal report of your experiments and results (you have to submit a thesis or dissertation to your school, but if you want a job after grad school, you’ll usually want to publish each chapter of your thesis/dissertation as a paper in a peer-reviewed journal). A typical day for me, for example, consists of either spending all day in the field collecting samples, or spending all day in the lab processing those samples (extracting DNA, running PCRs, etc.), or sitting at my computer analyzing the data from those samples, or writing a paper on the results, or (usually) a hectic, scatterbrained combination of all of the above. I get up each morning, seven days a week, do research for a minimum of 10 hours, watch an episode of Star Trek or Dr Who, rant on Facebook, go to bed, get up the next morning, and do it all over again. This is what grad school does: it teaches you to be a researcher by throwing you into the deep end and seeing whether you sink or swim.

Here again, I simply cannot overstate the importance of this learning-by-experience approach. Reading about how to design an experiment, how to run statistics, how to control for confounding variables, etc. simply is not the same as actually doing it, which is why I find it utterly ridiculous that people think that reading a few blogs or even books makes them qualified to tell scientists that they are wrong.

There is a lot of reading

Another gem from Lego Grad Student

Despite what I’ve said so far about grad school being more than just reading, there is still reading, a LOT of reading. However, when I say “reading,” I don’t mean reading blogs or poorly referenced books by fringe academics. Rather, I mean reading hundreds of peer-reviewed papers, as well as lengthy academic books. As a grad student, you should be reading close to a paper a day. You have to read that much if you want to actually be an expert in your field. Further, new research is coming out all the time, which means that you have to stay up to speed on the most recent publications. As a result, even after earning your degree, you still have to read constantly.

Out of curiosity, when I was writing this post, I checked the number of full papers that I had saved in my desktop reference manager (I use Mendeley, just fyi), and there are over 1,100 papers in there that I have read and taken notes on. Further, those are just the papers that I thought were worth keeping the entire text of. I probably only save about half the papers I read at most, not to mention all the academic books I’ve read.

Further, just to be clear, I am in no way unusual in this regard. I’m not bragging about how well-read I am. Rather, my point is that this is normal for training in the sciences. This is simply what it takes to be an expert in science. It requires constantly reading the original research, and it is utterly absurd to think that watching YouTube and reading blogs is going to give you knowledge that is equivalent to the type of knowledge that is gained through that level of dedication and study.

It’s a collaborative learning environment

So far, I have stressed the importance of reading massive amounts of peer-reviewed literature and the importance of actually doing research, but there is another key aspect of graduate training: learning from your advisers and peers.

Graduate school in the sciences is, in many ways, an apprenticeship. When you join a program, you join the lab of a senior academic who serves as your primary supervisor. Their job is to train you, but also to keep you on track. Good advisers, in my opinion, should give you a lot of freedom to explore ideas and go down wrong roads, but they should also rein you back in when you are too far off track. I’ve personally been really lucky with advisers, because I’ve had great ones at every level, and I’ve learned a tremendous amount from them. When I say “learned from,” I don’t want you to picture a situation where I go into their offices and they lay down the law and tell me exactly what to think and do, because that is far from how the relationship actually works. Rather, I go to them with an idea I’ve come up with or a problem that I am running into, and we sit down and debate and discuss what to do with that idea or problem. There is a wonderful back-and-forth and an exchange of ideas and knowledge that helps me not simply to solve the current problem, but to think more clearly about how to solve future problems.

Here again, this is something that you simply don’t get from reading the internet. The ability to, on a regular basis, sit down with an expert with years of experience and debate and discuss ideas is invaluable. That type of exchange teaches you to think critically, and critical thinking is a skill that has to be honed and practiced.

Further, as a graduate student, you shouldn’t just be learning from your primary supervisor. You should also have secondary supervisors, collaborators, etc., all of whom have different knowledge bases and skill sets that you can learn from. Additionally, fellow graduate students are great for bouncing ideas off of and getting help from. Even within a lab group, everyone has their own areas of expertise, and in a good group, that expertise gets shared. At some point on nearly any given day, I end up going to another member of my lab to run something past them, see if they have any experience with a particular analysis method, etc., and, conversely, other students come to see me when their research intersects with something that I’m knowledgeable about. This constant exchange of ideas and information is incredible; it’s one of my favorite things about academia. I constantly get to benefit from the hard-earned knowledge of those around me. Further, even when I am the one sharing knowledge, I find that I benefit from the exchange. At some point, you’ve probably heard the sentiment that if you want to see whether you really understand something, you should try explaining it to someone else. I think that there is a lot of truth to that, and on many occasions the process of explaining something to someone has made me think of something I hadn’t considered before or realize that there were gaps in my knowledge that needed to be filled.

I know that I sound like a broken record by now, but once again, this type of knowledge-sharing environment is something that is hard to come by on the internet. There are plenty of echo chambers to convince you that you’re correct about what you already think is true, but collaborative learning environments like what you experience in grad school are few and far between.

It’s not an indoctrination

At this point, I want to directly dispel several myths. The first is the bizarre notion that students are indoctrinated to blindly follow the accepted wisdom of their fields, rather than thinking for themselves. This is exactly the opposite of reality. Professors constantly encourage students to think outside of the box, ask questions, and challenge accepted notions. For example, most graduate programs involve some form of seminar, journal club, etc. where a professor (or sometimes a student) picks a paper for the group to read, then everyone sits around debating the paper, trying to shoot holes in it, bringing other papers into the discussion, etc. It is an exercise that is specifically designed to make students think critically and question papers rather than blindly accept them. Indeed, at all three of the universities I’ve attended, I’ve had professors explicitly instruct students to read papers critically, rather than assuming that their results are correct. There was one seminar that particularly sticks out in my mind where the professor was really excited by the paper he wanted us to read, but by the end of the seminar the entire group, professor included, agreed that there were serious problems with the paper. Rather than simply giving us a paper and saying, “here it is, believe it,” he took us through the process of assessing the paper and thinking critically about it.

Further, the system is designed not simply to teach you to be critical, but also to teach you how to be critical. By this process of discussing and debating papers with your peers, you learn how to assess papers, how to look for weak points in their methodologies, how to place them into the broader context of a whole body of literature, etc. This is something else that you simply do not get from merely reading.

To put this another way, science is all about asking questions. That’s fundamentally what science is: a systematic process for asking and answering questions. Therefore, it really shouldn’t be surprising that graduate programs place an emphasis on teaching students to ask questions and think critically about those questions. If you’re not willing to ask questions then you have no business being in science.

Scientists are creative

There is a common stereotype that scientists aren’t creative, and I often hear people claim that grad school squashes creativity. That is, of course, utter nonsense, and, honestly, it’s a pretty insulting assertion. Scientists are extremely creative; we have to be. Things go wrong constantly in science. Experiments never go the way that you think they will, and you have to come up with solutions to those problems, which generally requires creativity. I once heard someone describe lab work as, “a never-ending series of experiments to figure out why the last experiment failed.” That’s a pretty fair assessment, and if you can’t be creative and think outside of the box, you’re never going to find the solution to the problem you are trying to solve.

Let me just give a few examples of the types of things my peers come up with. One of my friends needed a way to survey arboreal lizards that live under the bark of trees, but he didn’t want to peel the bark, because that was bad for the trees. So, he came up with the idea of taking a foam puzzle-piece mat (like the ones you put together on the floor of a kid’s play area), wrapping it around a tree, and holding it in place with two bungee cords. It’s simple, cheap, innovative, and works great. It’s a creative solution to a problem. Another one of my friends needed to collect musk samples from snakes (most snakes excrete a musk from in or around their cloaca), and he figured out that the easiest way to do it was just to put a condom over the snakes’ tails. Again, it’s a brilliant solution, and he had to think outside of the box to come up with it. Similarly, last year, someone else in my field figured out that you could use a vibrator to tell the sex of turtles. I may not have personally done anything that interesting, but I’ve still had to design and build all sorts of crazy contraptions for field work, and I’ve spent the last few months working on a novel method for removing contamination from laboratory reagents. This type of creative problem solving is the norm for scientists. Actually read the scientific literature, and you’ll be blown away by the creative solutions that the men and women in science come up with.

Indeed, earning a PhD is largely a test of whether you can creatively solve problems, because problems are going to arise constantly, and you are going to have to come up with creative solutions for them. So this notion that scientists are rigid and can’t think outside of the box is utter nonsense. Further, it should be blatantly obvious that it is nonsense, because if it were true, science would never have progressed. How could science possibly move forward if scientists didn’t question the accepted wisdom and come up with novel, creative solutions?

Finally, it is worth explicitly stating that being scientific and being artistic are not mutually exclusive. Plenty of scientists are also brilliant musicians, painters, etc., and they create things that are aesthetically pleasing as well as things that are intellectually pleasing. For that matter, “data art” is becoming a big and wonderful thing, where scientists make graphs and figures that are beautiful to look at as well as informative.

Other tasks

This post has become longer than I intended, so I’ll be brief here, but I do want to point out that there are lots of other tasks imposed on graduate students. For example, you usually have some form of teaching duties, you’re expected to write grants to fund your research, you have to prepare presentations and give them at conferences, etc. All of this is quite stressful, and between all of these demands and the 60+ hour work week, depression and mental illness tend to be quite common among graduate students. There are certainly things about the system that need to change to address that issue, but that is a topic for another post.

Conclusion

The point that I’m trying to drive home here is that earning an advanced degree in the sciences is far, far more than simply doing a lot of reading. A tremendous amount of reading is involved, but it’s only part of a much bigger picture. Similarly, coursework is fairly unimportant for graduate school in the sciences, and some programs have no coursework requirements at all. Instead, graduate school focuses on research. As a graduate student, you are a researcher, and you will spend your days designing and conducting experiments, analyzing results, writing papers, etc. That type of hands-on, experience-based learning simply can’t be replaced by a few hours on Google. Further, as part of your training, you get to work with and learn from experts in your field as well as your fellow students. You get to be immersed in a constant exchange of ideas and knowledge. Additionally, this process does not brainwash you or squelch your creativity; quite the opposite. Creativity, the ability to ask questions, and critical thinking are vital for being a successful scientist, and graduate schools foster those talents rather than suppressing them.

In most areas of life, people have no problems deferring to experts, but for some reason when it comes to science, people view expertise as a bad thing.

When you add all of this up, it should be blatantly obvious that reading blogs and watching Youtube videos does not put you on par with people with that level of training and experience. That doesn’t seem like something I should have to say, but apparently, I do. By way of analogy, imagine that someone spends years being trained on how to be a mechanic, they read hundreds of books and manuals on mechanics, work with experts, and spend years actually working on cars. Then, imagine that someone who has never picked up a wrench reads some blogs and maybe a book or two, then has the audacity to not only say that their knowledge is equivalent to the expert’s, but also that the expert is actually fundamentally wrong about many basic aspects of mechanics. It’s an insane scenario, yet it is exactly what people do all the time with science, and science is way more complicated than car mechanics.

Having said all of that, I’m not trying to discourage laypeople from studying science. By all means, study science. Learn as much as you can. Our universe is amazing, and science is a window into its beauty and majesty. So please, read, study, and learn, but when you do that, first make sure that you are using good sources, and second, have a healthy respect for the amount of work, knowledge, and expertise that it took to make those discoveries that you are reading about. If you think that you found something that scientists are wrong about, stop and think about the amount of training required to become a scientist and the amount of work that goes into research, then ask yourself how likely it really is that you, as a non-expert, found something that all of the experts missed. You should be very, very cautious before concluding that they are wrong, just as someone who has never worked on a car should be very, very cautious before deciding that virtually every professional mechanic is wrong.

*Note: I use the term “experiment” pretty broadly to simply describe a scientific test of an idea. Thus, it may not be the traditional randomized controlled design that people think of. Rather, it could be collecting tissue samples to look at population genetics, testing a hypothesis about evolution by seeing whether fossils match your predictions, etc.

Please read these posts before you accuse me of an appeal to authority fallacy:
