Basic Statistics Part 6: Confounding Factors and Experimental Design

The topic of confounding factors is extremely important for understanding experimental design and evaluating published papers. Nevertheless, confounding factors are poorly understood among the general public, and even professional scientists often fail to appropriately account for them, which results in junk science. Therefore, I want to briefly explain what they are, and how to deal with them. You should then apply this whenever you are reading a scientific study or proposing that a particular study should be done.

Before I can explain confounding factors, I need to explain some fundamentals of experimental design. As a general rule, when you are designing an experiment, you want to have an experimental group and a control group. These two groups should be identical except that one group (the experimental group) should receive some form of treatment (what we call the experimental factor or experimental variable) while the other should either receive nothing or receive a placebo (depending on the exact type of study being conducted). In other words, the two groups should be totally identical except for the experimental variable. When that condition is met, you can then infer that differences between the two groups are being caused by the experimental variable. In other words, if the two groups are completely and totally identical in every way except for the experimental variable, then any differences in your response variable (i.e., the thing that you are measuring) must be being caused by the experimental variable, because it is the only thing that differs between the two groups (see Note 1; also note that you can also have multiple experimental groups testing different things simultaneously, but I will stick with two for now for the sake of simplicity).

This sounds all well and good, but the problem is that in reality, having two identical groups is almost never possible. This is where the topic of confounding factors comes in. If you Google a definition of “confounding factors,” you’ll basically find two lines of thought. One is that a confounding factor is a third variable that actually has a causal effect. The second is that a confounding factor is simply a third variable that might have a causal effect, thus preventing you from being able to assign causation. I detest semantic debates, so I won’t waste time quibbling over which definition is better, but for the sake of this post, I am going to use the second definition, both because that is how I hear people use it more frequently in actual discussions among scientists, and because that is the concept that I am trying to convey even if you want to use a different term (i.e., I don’t care if you remember the term “confounding factor” but I do care if you remember the concept of a confounded experiment).

This is an illustration of a fully confounded experiment. Because different brands were used in each group, there is a third variable that confounds the experiment and makes it impossible to assign causation.

Let me give a simple example of how this might play out. Imagine that you have designed a product that is intended to prolong the life of a car. Thus, you get two groups of new cars, and pour the product (the experimental variable) into the engine of one group (the experimental group), while giving the other group an equivalent volume of regular engine oil (the control group). Then, you hire people to drive them around a racetrack at high speeds for weeks until they eventually die (or perhaps put them on a treadmill of sorts, I don’t pretend to know how car testing works). Then, you record how long the cars last (your response variable). Now, suppose that at the end of the experiment, you find that, on average, the experimental group lasted significantly longer than the control group. Thus, you conclude that the product works. There is one problem, however. All of the experimental cars were Toyotas, and all of the control cars were Nissans. This is what we would call a fully confounded experiment, because there is a third variable (car brand) that is also completely different between your groups. Thus, it would be impossible to use this experiment to say that the product works, because you have no way of knowing if the product actually worked or if Toyotas simply last longer than Nissans. Do you see how that works? Because of that third variable, there is no way to confidently assign causation.
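To make the problem concrete, here is a minimal sketch in Python of that fully confounded design, with entirely made-up numbers: the product has zero real effect, yet a naive comparison between the groups still comes out “significant” because brand and treatment are perfectly entangled.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n = 30

# Experimental group: all Toyotas (which, in this made-up scenario, simply last longer).
# The product itself does nothing here.
experimental = rng.normal(loc=200_000, scale=15_000, size=n)

# Control group: all Nissans, intrinsically shorter-lived in this simulation.
control = rng.normal(loc=185_000, scale=15_000, size=n)

t, p = ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.4f}")

# A small p-value only says the groups differ; it cannot say whether the cause is
# the product or the brand, because the two are perfectly confounded.
```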

That is obviously an extreme example, but this type of thing happens all the time in real experiments, and it can occur in very subtle ways. For example, let’s say that you are testing a drug on rats, and you have your rack of cages with control rats on one side of the room and your rack of cages with the experimental rats on the other side. That may not sound like a problem, but it can be. Imagine, for example, that there is a draft in your lab, and as a result, one rack experiences a different temperature than the other. Temperature can affect metabolism and a host of other biological processes, so that would confound your experiment. Similarly, perhaps people walk through one half of your lab more than the other. That could stress the rats, and stress also affects many biological processes. There could also be a slight difference in the lighting, or any of a host of other factors. All of that may seem minor, but it really can make big differences in your results, and when you have a fully confounded experiment like that, you simply can’t assign causation.

My rat example is also a bit extreme, because it is, once again, a fully confounded design, meaning that the confounding factor (cage position) is totally different between the two groups; however, there are also many cases where experiments are partially confounded, and they can be just as problematic. Let’s say, for example, that you are testing a drug, and your control group is made up of 90% men and 10% women, whereas your experimental group is made up of 20% men and 80% women. That is actually a problem, because women and men have biochemical differences and they often respond differently. As a result, any differences that you see could be being driven by male/female differences rather than control/treatment differences. Thus, sex is a confounding factor and makes it very difficult to assign causation (though in this case perhaps not impossible given a large enough sample size).

So how do scientists actually deal with this? As I said early on, confounding factors are everywhere, and having two totally identical groups is virtually impossible. Fortunately, there are several important strategies for dealing with them: eliminating, randomizing, blocking, and measuring.

I’m going to try to stick with my car example as I explain how these work, because I think it is useful to use a hypothetical topic that is totally separate from any real or contentious issue. The experimental design that I mentioned originally where the experimental group was entirely Toyotas and the control group was entirely Nissans is a terrible one. If you actually did that experiment, you would be screwed, because there is no statistical test on the planet that could tease out the effects of Toyotas vs the effects of the treatment in a fully confounded experiment like that (see Note 2 at the end). Fortunately, most scientists aren’t that brainless. We spend a great deal of time thinking about confounding factors before doing our experiments, and we try as hard as possible to control them. So in this case, we could control this experiment by simply eliminating many of the confounding factors. For example, we could use only one brand and one model of car for both groups (thus eliminating brand as a factor). Further, we would want to control the year of the car, the factory that produced it, etc. Basically, the idea is that if you can remove a variable from your experiment, then you should do so (unless those variables are interesting to you, more on that later). Remember, ideally you want your groups to be totally, 100% identical. So to do that, you need to eliminate as many confounding factors as you possibly can.

Once you’ve done that, and you’ve gotten your two groups to be as similar as possible, you may think that you are good to go, but you actually aren’t. Once again, having two totally equal groups is virtually impossible. So even after you have controlled every confounding factor that you can think of, there will almost certainly still be some slight variation that you aren’t aware of. For example, there may be slight inconsistencies in the manufacturing process, the steel that was used, etc. Because you don’t know what those differences are, you can’t eliminate them, but you can compensate for them by randomizing. In other words, you should take your entire pool of cars (which should already be as similar as possible) and randomly select which ones go into each group. Thus, any variation gets randomly dispersed into your two groups, rather than falling disproportionately into one group. This is an extremely powerful and important tool, and in any experiment, you should randomize as thoroughly as possible (in this case we would also randomize our drivers).
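As a rough illustration (the car IDs below are just placeholders), randomizing the assignment can be as simple as shuffling the pool and splitting it:

```python
import random

random.seed(1)
cars = [f"car_{i:02d}" for i in range(60)]   # one pool of cars, already as similar as possible

random.shuffle(cars)                          # randomize the order
experimental_group = cars[:30]                # first half receives the product
control_group = cars[30:]                     # second half receives plain oil

print(experimental_group[:5])
print(control_group[:5])

# Any leftover, unknown variation (steel batches, assembly quirks, etc.) now gets
# scattered across both groups instead of piling up in one of them.
```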

So far so good, but what happens when you have a confounding variable that you know about, but can’t get rid of? In many cases there are practical reasons why it is impossible to get rid of confounding factors, and in other cases there are scientific reasons. For example, maybe you want to know if there is an interaction between the product and car brand (i.e., does it work better on Toyotas than Nissans?), or you may simply want your results to be as broadly applicable as possible. After all, if you only test it on Toyotas, then all that you have actually shown is that it works on Toyotas (assuming it works at all), and you are making an assumption when you apply that result to other car brands. That assumption is probably reasonable, but it would be better to actually test it.

This is a blocked experimental design. Because each car brand occurs in both the experimental group and the control group, car brand can be included as a factor in the analysis, and causation can be assigned.

This is where blocking and measuring your confounding factors come in. Let’s say that you want to test this product on Toyotas, Nissans, and Fords. So, you select one model and year of each brand and control for confounding factors within car brand as much as possible, just like before. Having done that, one option would be to simply pool all three car brands and randomly select your experimental cars and control cars from that. There is nothing technically wrong with that (assuming you still include car brand as a factor in your analyses, see Note 3), but it’s not the most powerful design available to you. A much better design would be to block or group your experiment. You could, for example, have 30 of each brand in each group, in which case brand would be a blocking variable. To be clear, you still must randomize, but the randomization would take place within blocks. In other words, you would take your 60 Toyotas, and randomly assign half to each group, then you would take your Nissans and randomly assign half to each group, etc. Then, when you do your statistics at the end of the test, you would include brand as a variable in your statistical analyses, and this would be a very robust design (I won’t go into the details of why this design is so powerful here, but if you want to learn more, looking into two-factor ANOVAs is a good place to start).
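Here is a rough sketch of what that blocked design and its analysis might look like, using simulated, hypothetical numbers (the brand effects and the 8,000-mile product effect are made up): cars are randomized within each brand, and brand is then included alongside treatment in a two-factor ANOVA.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
# Hypothetical brand effects and a hypothetical true product effect of 8,000 miles.
for brand, brand_effect in [("Toyota", 10_000), ("Nissan", 0), ("Ford", 5_000)]:
    treatments = ["product"] * 30 + ["control"] * 30
    rng.shuffle(treatments)                    # randomize WITHIN the block
    for treatment in treatments:
        product_effect = 8_000 if treatment == "product" else 0
        lifespan = 180_000 + brand_effect + product_effect + rng.normal(0, 12_000)
        rows.append({"brand": brand, "treatment": treatment, "lifespan": lifespan})

df = pd.DataFrame(rows)
model = smf.ols("lifespan ~ C(treatment) * C(brand)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# The C(treatment) row tests the product after accounting for brand, and the
# interaction row asks whether the product works differently in different brands.
```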

This is a more complex blocked experiment. It also includes “nesting” meaning that you have car models “nested” within car brands.

You can also build on this design by including additional blocks. For example, you could have several models of car within each brand (this would then introduce yet another concept known as nesting, which I won’t go into but you can read about here). Alternatively, perhaps you are interested in how the product works in heavy duty vehicles like SUVs vs standard cars. In that case, you could have one car model and one SUV model from each car brand and include vehicle class (SUV vs car) as an additional blocking variable (again you would want to randomize within each block). You could even go one step further and have several car models and several SUV models within each brand, at which point you would have three blocks (brand, class, and model) as well as nesting. As you can see, this all becomes very complicated very quickly, and I don’t expect you to be thinking at the three-block stage right now, but I want you to be aware that blocking is a very powerful tool that lets you make sense of complex experimental designs that may, at first, appear to have serious issues with confounding factors.

Finally, let’s imagine that for some reason you can’t block your experiment. In other words, there is some confounding factor that you know about, but for one reason or another you can’t block against it. For example, perhaps all of your cars are used, rather than brand new. That would obviously greatly increase the variation in your data and would likely force you to greatly increase your sample size, but even with a larger sample size and randomization, you would still need some mechanism for dealing with the fact that some cars had been driven more than others prior to your experiment. The solution is actually quite simple: you record the pre-existing mileage on each car and include those data as a factor known as a covariate in your analyses. The idea is basically that covariates explain some of the variation in your data, so by including them in the model, you get that explanation and can compensate for the variation caused by the confounding factor. As a general rule, you should do this anytime that you have some measurable variation that can’t be eliminated or blocked (see Note 4). You should measure it, then include those measurements in the analysis.
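A minimal sketch of including a measured covariate (again with simulated, hypothetical numbers): pre-existing mileage is recorded for each car and added to the model, an ANCOVA-style analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 120
treatment = np.repeat(["control", "product"], n // 2)
prior_miles = rng.uniform(10_000, 120_000, size=n)            # used cars with varying wear
product_effect = np.where(treatment == "product", 8_000, 0)   # hypothetical true effect

# Lifespan depends on prior wear plus the (hypothetical) product effect plus noise.
lifespan = 220_000 - 0.5 * prior_miles + product_effect + rng.normal(0, 10_000, size=n)

df = pd.DataFrame({"treatment": treatment, "prior_miles": prior_miles, "lifespan": lifespan})
model = smf.ols("lifespan ~ C(treatment) + prior_miles", data=df).fit()
print(model.summary().tables[1])

# prior_miles soaks up the variation caused by past wear, which sharpens the estimate
# of the treatment effect instead of leaving that variation as unexplained noise.
```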

At this point, you may be wondering what on earth the point of all of this is. After all, most of my readers aren’t scientists who are going to be designing experiments, and if you are, you should be consulting a statistician or good stats book, not reading my blog. Nevertheless, if you have read this far, then I am going to assume that you are interested in science and understanding how the world works, and I’m going to assume that this interest will invariably lead you to read some scientific literature. That is where this comes in, because not all scientists know what they are doing, and we are all prone to mistakes, so when you read a scientific paper, you should look for things like this. See if they eliminated as many confounding factors as possible, see if they blocked the experiment and included those blocks in the analyses, see if they randomized correctly, and make sure that they included measurable variation in the analyses. If they didn’t do these things, then you should be dubious of their results. If you see confounding factors that they didn’t account for, or they didn’t randomize, etc., you should think twice before accepting their conclusions (for example, see my analysis of a rat/Roundup study that was not done correctly). I should also clarify here that although I have been talking specifically about randomized controlled studies in this post, what I have said applies to other designs such as cohort studies as well. The techniques that I have laid out are extremely important for dealing with confounding factors, and you should make sure that they are being used correctly when you read a study.

Notes: As you can probably tell, this topic can be quite complex, and I have had to make some simplifications and generalizations to try to get the key points across in a brief blog post. Nevertheless, there are a few things that I would like to provide some additional information on.

Note 1: When I said, “if the two groups are completely and totally identical in every way except for the experimental variable, then any differences in your response variable (i.e., the thing that you are measuring) must be being caused by the experimental variable,” I want to be completely clear that I was talking about two 100% identical groups (i.e. no variation between the groups whatsoever). Only then can you be totally certain that your relationships are causal, but as I explained in the post, there is always some variation. Therefore, in actual experiments, you can’t assign causation with 100% certainty. This is one of the many reasons that we use statistics to tell us the probability that the result that we are observing could have arisen just by chance (more details on how that works here).
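As a tiny illustration of that idea (the numbers are made up), a permutation test asks how often a difference at least as large as the observed one would appear if the group labels meant nothing:

```python
import numpy as np

rng = np.random.default_rng(11)
experimental = rng.normal(205_000, 15_000, 30)
control = rng.normal(195_000, 15_000, 30)
observed = experimental.mean() - control.mean()

pooled = np.concatenate([experimental, control])
n_perm = 10_000
hits = 0
for _ in range(n_perm):
    rng.shuffle(pooled)                        # pretend the group labels are meaningless
    diff = pooled[:30].mean() - pooled[30:].mean()
    if abs(diff) >= abs(observed):
        hits += 1

print(f"approximate p-value: {hits / n_perm:.4f}")
# A small value means a difference this large would rarely arise just by chance.
```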

Note 2: When I said that no statistical tests could determine causation with a fully confounded experiment, I want to be clear that I am talking simply about taking the measurement variable (car lifespan) and doing a statistical test comparing car lifespan between your experimental group and control group. I am not talking about investigations into the mechanisms through which the product worked. In other words, if you had a hypothesis about the mechanism that the product should use, then perhaps you could examine the cars in the two groups to try to actually look at their mechanical properties, but that is a very different thing from the type of statistics that I have been talking about here.

Note 3: I have been talking about blocks as groups that you deliberately design, but I should point out that for many statistical tests, you can include blocking factors even if you didn’t set up the type of nice even design that I have laid out here. For example, if you took a pool of Toyotas, Nissans, and Fords and randomly assigned them into groups instead of treating them as blocks with randomization within blocks, you could (and indeed should) still include car brand as a factor in your analysis. Your groups will probably just be a bit lopsided (i.e. out of 60 cars, 40 might get randomly selected to go into the control group and only 20 might end up in the experimental group). Deliberately blocking is just generally a bit better because you can control the number of samples in each block.

Note 4: I said that you should always measure sources of variation that can’t be eliminated or blocked and include them in your experiment, but there are exceptions. Sometimes, the variation may not have anything whatsoever to do with the causation, and “over-fitting” statistical models (i.e., including unnecessary variables) is actually a problem. So, if you are in a situation where there is no reasonable mechanism through which a measurable source of variation could influence the results, don’t include it. For example, including the paint color of the cars as a factor would almost certainly be meaningless, so don’t include it. Additionally, depending on the type of statistical test that you are using, you may be able to construct several statistical models using different combinations of variables, then test which one explains the data better using things like AICs and BICs. To be clear, you aren’t running a bunch of models to see which one gives you the results that you like, rather you are selecting the model that does the best job of explaining the data (this is a complex topic for another time).
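For what it is worth, here is a rough sketch of that kind of model comparison, using simulated data and with paint color included purely as a deliberately irrelevant variable:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 120
df = pd.DataFrame({
    "treatment": np.repeat(["control", "product"], n // 2),
    "prior_miles": rng.uniform(10_000, 120_000, size=n),
    "paint": rng.choice(["red", "blue", "white"], size=n),   # no plausible mechanism
})
df["lifespan"] = (220_000 - 0.5 * df["prior_miles"]
                  + np.where(df["treatment"] == "product", 8_000, 0)
                  + rng.normal(0, 10_000, size=n))

candidates = {
    "treatment only": "lifespan ~ C(treatment)",
    "treatment + mileage": "lifespan ~ C(treatment) + prior_miles",
    "treatment + mileage + paint": "lifespan ~ C(treatment) + prior_miles + C(paint)",
}
for name, formula in candidates.items():
    fit = smf.ols(formula, data=df).fit()
    print(f"{name:28s}  AIC = {fit.aic:10.1f}  BIC = {fit.bic:10.1f}")

# You keep whichever model best explains the data (lowest AIC/BIC); a variable with
# no plausible mechanism, like paint color, generally just worsens those scores.
```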


I used to be a science denier: My journey from pseudoscience to skepticism

In this post, I decided to do something a bit different and write about my personal background rather than debunking woo, explaining a logical/scientific concept, or any of the other things that I usually do on this blog. The embarrassing reality is that I wasn’t always a skeptic, far from it, in fact, and I think that my journey towards skepticism (a Fallacy Man origin story, if you will) may be helpful and instructive both for people who are currently on a similar quest for truth, as well as my fellow skeptics who are endeavoring to teach science and critical thinking. Because of my background, I have an atypical perspective on things that I hope will be both enlightening and encouraging to others. Therefore, I will explain my background, then direct several comments specifically to those who reject “mainstream” science (as I once did), as well as some comments to my fellow skeptics.

Note: although this is specifically about science, it all applies to political topics as well, and people on both sides of the political spectrum should consider it.

 

My Background

I was raised in a very conservative Christian home, where I was taught to believe in a literal interpretation of the Bible, including young earth creationism (i.e., evolution is false, the earth is 6,000 years old, Noah’s flood was a real historical event, etc.). Nevertheless, I’ve been interested in science for as long as I can remember (particularly paleontology and zoology), and if you had asked the 8-year-old me what I wanted to be when I grew up, I would have passionately told you that I wanted to be a scientist. As a result, my parents furnished me with a multitude of books, but they were always given to me with a caveat. Namely, I was told to ignore all of the sections on evolution, and in addition to the actual science books, I was supplied with numerous books from Answers in Genesis and other creationist organizations so that I would learn the “truth” about the history of life and wouldn’t be led astray by the devil’s lies (that is literally how evolution is viewed among my family members).

I want to pause for a moment to make two quick clarifications. First, I am not attacking religion. As I have repeatedly stated, this blog is about science, and my posts only deal with religion as it relates to science. So, if you want to both believe in God and accept evolution, the big bang, etc., then I don’t really feel the need to debate you here (that isn’t to say that discussions about the existence of a God aren’t worth having, but simply that they aren’t within the scope of this blog). Second, I want to be clear that I am not attacking my family. Although I now disagree with them on a great many things, I still love and respect them. I appreciate the home that I was raised in, and I appreciate the fact that my parents really did foster my love of science by buying me tons of books, chemistry sets, microscopes, etc., putting up with me converting the back yard into a “dig site,” letting me bring home all manner of animals as pets/study subjects, tolerating me dragging home dead animals to dissect/reassemble their skeletons, etc. (I was a strange child, if you haven’t figured that out).

Getting back to my origin story, by the time that I graduated high school, I was thoroughly vested in young earth creationism. I was also extremely argumentative (a trait that I’ve never outgrown), and I was quick to pick fights over evolution. Indeed, I was one of the people who would get on the comments sections of blogs like this one and viciously defend creationism. I even wrote my high school senior term paper on why evolution wasn’t true, and in my college freshman speech class, I gave a speech about why abiogenesis wasn’t true. I got A’s on both assignments, and at the time I thought that my arguments were rock solid. Looking back, however, I can see the numerous scientific inaccuracies and logical fallacies that I was committing.

Although creationism was probably my biggest scientific blunder, it was far from the only one. I also denied anthropogenic climate change (although less adamantly), and was seriously mistaken about many other more minor aspects of science. Vaccines and GMOs were never hot topics in my household, so I didn’t know much about them at the time, but given my gullibility and inability to think critically, I am quite certain that I could easily have fallen for anti-GMO and anti-vaccine propaganda as well (indeed, I have a relative who is still a young earth creationist and still denies climate change, but he is now also opposed to vaccines/GMOs and believes in all sorts of woo).

The point of all of this is simply that going into college, I was as far from a skeptic as I possibly could have been, even though I was very interested in science. Now, you may think that I’m about to tell you that during college I took a bunch of science courses and they opened my eyes…but you’d be wrong. During my freshman year, I was quite impervious to facts. I was a biology major, so my freshman schedule included a zoology course, during which evolution was a frequent topic, but rather than seeing the errors of my ways, I clung to my misconceptions. Indeed, I was the type of student that you see on Youtube videos debating professors when the topic of evolution comes up, and I was totally deaf to their facts and well-reasoned arguments.

So, if science courses didn’t make any difference, then what did? The answer may surprise many of my fellow skeptics/scientists. It was a philosophy course. During the first semester of my sophomore year, I took an introductory philosophy course, and that class literally changed my life, because it did something invaluable: it taught me to think critically. It taught me the rules of logic and how to tell the difference between a good argument and a bad argument. It exposed my biases and made me realize just how blind and arrogant I was being. That course provided a revolutionary light bulb moment that has literally shaped my entire life since. It gave me my raison d’être. It was my radioactive spider bite, my exposure to gamma radiation, etc. I distinctly remember sitting in my dorm room feeling simultaneously empowered by the logical tools that I was being given and horrified by how gullible I had been.

Following that course, there was no going back. I wanted more. I wanted to get better at critical thinking, and most importantly, I wanted to actually understand the way that the universe worked. I was finally willing to challenge my views, and that is exactly what I did. I took more philosophy courses to hone my logical skills, I started actually listening to professors and considering their arguments, and I started questioning everything. I took each of my core views and for the first time in my life, I critically examined them. I actually tried to find problems with them, rather than blindly defaulting to the counterarguments I had been raised with, and if I’m honest, it was not easy or fun. I suffered many sleepless nights and spent months reading, studying, debating, and talking to people on both sides of each topic, but unlike every previous debate I had been in, I was trying to learn rather than trying to win. It was a slow and painful process that cost me friends and strained my relationship with my family, but it was worth it, because one by one nearly all of my core views crumbled. I suddenly could see the problems with the arguments that I had always used, and once I saw them, I couldn’t ignore them. As a result, I made a 180 degree turn on almost everything. My views on science, politics, ethics, religion, etc. all changed, and I emerged as a skeptic.

After that transformation, I continued to try to learn, both professionally and personally. Professionally, I went on to earn a Master’s degree in biology, and I am currently approaching the halfway mark for my PhD. Personally, I continued to read and study as much as I could about skepticism, logic, and critical thinking. This ultimately led me to join the skeptic movement and start the blog that you are currently reading. I often have people accuse me of being a paid shill (I make no money off of this, btw) or some other such nonsense, but hopefully you can now see that I am actually doing this because I care about this. I am deeply indebted to the philosophy professor who opened my eyes, as well as to the skeptic blogs, websites, books, etc. that helped me when I began wrestling with my views and beliefs, so the least that I can do to repay that debt is to join in the fight.

With that in mind, I want to do several things in the remainder of this post. First, I want to speak directly to the creationists, anti-vaccers, climate change deniers, naturopaths, etc. who read my blog. Second, I want to provide some advice for my fellow skeptics, and finally, I want to offer a brief word of encouragement to the other people who devote their time to defending science and teaching critical thinking.

 

To the creationists, climate change deniers, anti-vaccers, anti-GMO activists, homeopaths, naturopaths, conspiracy theorists, etc.,

This quote comes from Tim Minchin’s excellent UWA 2013 graduation speech. It is worth watching in its entirety.

I want to encourage you to embark on a journey of discovery like I did. I understand how you think and why you hold the views that you do, because I used to be one of you. I realize that you are not consciously dismissing facts and that you truly believe that the evidence supports your position, because that is exactly how I felt. I really thought that I was right, and it seemed totally reasonable to me that the entire scientific community was involved in some sort of massive cover-up to avoid admitting that they were wrong. It seemed totally rational to me that everyone who disagreed with me was ignorant, biased, or corrupt, and I never once thought that I was denying science. Rather, I thought that the facts were on my side and I was simply one of the few people who was enlightened enough to see the truth. Further, I thought that I truly was fact checking. I had convinced myself that I had actually looked at both sides and was rationally assessing the facts, and that deluded self-conviction led me to 100% surety that I was right, when in reality, I was totally wrong. All the belief in the world doesn’t make something true, and no amount of personal conviction can change facts. So I encourage you not to fall for the same cognitive biases and pitfalls that once ensnared me.

In hindsight, my errors are obvious. I was engaging in what is known as motivated reasoning. Creationism and many other views were core convictions that I held. They were fundamental parts of who I was. As a result, any time that I was faced with evidence against them, I would subconsciously issue a red-alert and raise my mental shields. I would find some way to convince myself that my opponent’s arguments were false. For example, when I encountered evidence for evolution, I would immediately go to sources like Answers in Genesis, find an article or argument that agreed with me, then cling to that as evidence, even if it was total nonsense. I was so biased and so committed to creationism that I could not see my errors, and totally nonsensical creationist claims seemed to make perfect sense to me. I can distinctly remember particular debates where I used one creationist trope after another, my opponent soundly defeated every last one of them, yet I walked away feeling victorious. That’s what motivated reasoning does, and it is extremely dangerous. It prevents you from ever realizing that you are wrong, no matter how clearly wrong you actually are.

My point is simply this: I understand you. I know that you aren’t stupid, crazy, uneducated, etc. I know that you really think that you are correct, but you aren’t, and you will never be able to see that until you are willing to be wrong. Now, you might respond to that by insisting that you have actually carefully considered both sides of the situation (I would have said exactly the same thing), but have you really? I hadn’t. Actually think about this for a minute. Have you actually tried as hard as you can to disprove your own views, or have you simply looked for evidence to support them? As a useful exercise, I want you to consider what it would take to convince you that you were wrong. In other words, what piece of evidence would be sufficient to discredit your view? If you can’t think of any reasonable evidence that would make you change your mind, then you have a very serious problem, and you are almost certainly engaging in the same type of motivated reasoning that misguided me for years.

 

To my fellow skeptics,

First, I would like to offer some constructive criticism. It is very, very easy to fall into the trap of thinking that skeptics are somehow special. It is easy to write off everyone else as crazy idiots, but that type of thinking is both wrong and unhelpful. If you honestly think that you are somehow vastly superior to everyone else, then you need to get over yourself. We all have flaws and cognitive biases, and we can all be led astray by misinformation. For me personally, my past is a constant and painful reminder of just how susceptible I am to cognitive biases and motivated reasoning. I am admittedly speculating here, but I really don’t think that I am an exception, and I think that under the right set of circumstances, anyone could be duped by pseudoscience.

On that note, I want to state explicitly that skepticism vs denialism isn’t about being smart. My IQ did not magically increase when I rejected creationism, nor was my brain’s computing power boosted. I was the same flawed idiot that I was before. The only differences were that I was willing to be wrong and I had been taught the necessary logical tools to tell whether or not I actually was wrong. In other words, it is easy to think that the people who disagree with you are crazy or stupid, but most of them aren’t. Most of them are perfectly sane people who are capable of being very rational in most aspects of their life. So intelligence is not the problem. Rather, the problems are biases and an unwillingness to be wrong. Those are difficult things to correct, and denigrating the people that we would like to convert (so to speak) isn’t helpful. To be fair, I have been guilty of this myself on multiple occasions, because it is a very easy trap to fall into, but I think that it is important that we acknowledge the flaws in this approach and try to keep our arrogance in check.

Having said all of that, I want to conclude with a few brief words of thanks and encouragement. Being a public skeptic is time consuming, frequently unpleasant, and often seems pointless, but it really does make a difference (at least it did for me). I cannot begin to express my gratitude for the women and men who wrote the skeptic books, blogs, websites, etc. that helped me to shake off my biases. Similarly, I am indebted to the people in forums, chat rooms, comment threads, etc. who took the time to debate me and point out my errors. You all have made a tremendous difference in my life, and it is a debt that I can never repay. Also, here again, I don’t think that I am an exception. There certainly are some people who will never change their views, but there are also people who are willing to challenge their preconceptions and embrace new ideas, and public skeptics do an incredible public service by taking the time to reach and help those people. So thank you all, and keep up the good work. This is a battle that is worth fighting.


The image that took me two months to obtain: A window into what it is like to do research

Most people don’t really understand what scientists actually do or what is required to do scientific research. As a result, they don’t appreciate the amount of training and work that goes into being a scientist and conducting research. I personally think that is a problem, because it leads to all sorts of confusion and misconceptions. So I want to try to clear some of that up by telling you all a bit about what I have been working on for the last few months. I hope that this example will help you to understand and appreciate just how much goes into scientific research, as well as clearing up a lot of misunderstandings. Many people seem to think that scientists are lazy and constantly take short cuts, falsify data, etc., but I hope that this will help you to see why that is rarely the case. I should clarify, however, that because I am going to talk about some of my own work here, there may be a few places where it sounds like I am boasting, but I assure you that that isn’t the case. Quite the opposite, in fact. What I want to impress upon you is that experiences like mine are the norm in science, not the exception. I’m a 27-year-old who still carries a valid student ID. I am by no means a prodigy. At best, I’m average.

Two months to get one image

Before I can explain why it took me two months to get one image, I need to give you a bit of background about my current project. I won’t go into the details, but briefly, I am doing a microbiome project looking at how microbiomes (i.e., the bacteria, fungi, etc. that live on an organism) affect disease ecology. So I collected a bunch of samples (swabs in this case), and now I need to extract the microbial DNA from them. Once I have that DNA, I can sequence it and use those sequences to identify the species of micro-organisms that were living on my study subjects as well as looking at relative abundances.

That DNA extraction step has, however, proved quite challenging. You see, I am trying to get DNA from a range of organisms (gram negative bacteria, gram positive bacteria, and fungi), which is challenging because of differences between the groups. Gram positive bacteria, for example, have thick peptidoglycan cell walls that often require a mixture of chemicals to break them apart. In contrast, fungi have chitin cell walls that often require mechanical methods for breaking them apart (e.g., you add some tiny beads to the sample and use a “bead beater” to rapidly bash them against the sample, thus breaking apart the cells). However, not all of these methods work well together, and some methods that work really well for one group actually reduce the DNA yields from other groups. Further, to make things even worse for me, my swabs are very low yield samples, which means that there isn’t much DNA there to begin with.

This is a gel from an extraction method that completely failed. The two bright bands are control samples to ensure that the PCR and gel worked correctly. If any of the other samples had worked, they would have shown up as similar bright bands. Thus, since my controls worked, but my samples did not, I concluded that this method does not work well for my samples.

The result of all of this is simply that I have spent two months working in the lab trying to get an extraction method that will work reliably and consistently for all of the groups that I am interested in, even when they are present in very low abundance. Let me explain a bit of what that has looked like. First, I had to make a whole bunch of mock samples with known quantities of the organisms that I was interested in, that way I could test my extraction techniques on them to see if they worked. Then, I picked the method that I thought was the most promising, and I tried it on a bunch of test samples. That took a full day because there were several long steps where I had to let the samples incubate, precipitate, etc. The next day, I used a PCR (polymerase chain reaction) to try to amplify the DNA from those samples. This basically just replicates the regions of the DNA that I am interested in, that way there are enough copies for me to work with. Finally, I took those PCR products and ran them on a check gel. I won’t go into the details of what that means, but it is just a simple method for seeing whether or not you got DNA amplification, how much you have, and how long the DNA fragments are. So on a gel, if I was successful and my methods worked, I should get nice bright bands of DNA. However, I usually got gels that look like the one to the right. The only bands there are the positive controls and the ladder (you always run positives and negatives in a PCR so that you know whether it is the PCR or the extraction that failed, and the ladder is a standard that lets you tell how large the DNA fragments are).

Following that first failure, I did a bunch more reading on the method I was using, talked to several colleagues, then made some modifications to the protocol and tried again, but once again, I was met with failure. By this point, I was already four days in, but things were just getting started, because this cycle continued to repeat itself over and over again. I would try a method, it would fail, I would modify the method by changing the chemicals, temperatures, times, etc., that would fail, I would modify it again (or try an entirely different method), that would fail, so on and so forth.

This is a gel from a method that actually worked. This time, you will notice that nearly every well has a band, indicating that the extraction worked (the cells without bands were negative controls). You will also notice that some of these are smeared. That is because I was testing several methods, and I will not be using the method that produced the smears.

This lasted for two months, during which I worked in the lab at least six days a week, often working over ten hours a day trying to find a reliable extraction technique, and I want to stress that word “reliable” because it’s important. There were multiple times when I found a method that “worked” in that I could get DNA from many samples, but those methods did not give me good, consistent coverage over all of the groups of organisms that I was interested in, especially when low quantities were present. In other words, I could have done my study using one of those methods, and I would have gotten what appeared to be good results that I could easily have published, but the results would have been misleading because I didn’t use a reliable method that could consistently detect all of the groups I was interested in. Therefore, I kept trying over and over again, until finally, last week, I got the image that you see on your left. This time, you will notice that there are DNA bands, and although I have not labelled the image, I got bands even from samples that only had a handful of cells. I have also replicated these results, and they are consistent across my organisms of interest. So I finally have a reliable method, but it took me two months to get here.

Note: there are actually some issues with that gel such as smears and double bands, but that is just because I was running several methods at once. So the lanes that look messed up are from the method that I will not be using, and the lanes that look good are from the method that I will be using. There are also still a few tweaks (particularly to the PCR procedure) that I will make to clean things up a bit more before extracting the DNA from my actual samples.

The big picture: what it takes to do research

Now that you understand my suffering for the past two months, I want to talk a bit more generally about what has been required for this project. First, at the very outset I had to read a tremendous amount of scientific literature. I needed to understand the topic that I was going to study, I needed to know what other people had already found, and I needed to identify the current gaps in our knowledge. This type of background work is absolutely essential in science, and it can be quite time consuming. In many cases, this requires reading hundreds of papers.

Following that, I had to actually design a study, which means that I needed to know enough about experimental design and statistics to design a project that would let me reliably answer the questions that I was interested in. So once again, a lot of background knowledge and prior training was necessary.

Simply designing a project clearly isn’t enough, however. I actually needed to do it, and for that I needed money. So, I spent several weeks writing grants to lots of different funding agencies to try to get money for this project. Several of them rejected me, but several others accepted my proposal so I got the funding. During that same time period, I also applied for research permits and filled out ethics applications so that I could actually do the project. Fortunately, all of that came together, so once I had money, permits, and ethical clearance, I immediately moved on to the next phase and actually went out into the field and collected my samples (that was also quite a challenge because it required hiking up ridiculous rainforest streams, climbing cliffs and waterfalls, etc.).

After getting the samples, I started on the lab work that I have been talking about thus far, and now that I have a working protocol, I can move on to the next stage: actually getting DNA from my real samples. Once I have the DNA from those samples, I will amplify and sequence them, and that will finally give me the actual data which I will then do a series of statistical analyses on. Following those analyses, I will write a paper on the results and submit to a journal (it will probably be another 4–6 months before I have something submitted, maybe longer).

If we put all of that together, you should start to be able to see just how much effort and how many skills are required for a project like this. I needed to have an enormous amount of background knowledge on the topic and on experimental design just to plan the study, I needed writing skills to get the grants/permits/ethics clearance, I needed field skills to get the samples, I needed lab skills and a knowledge of chemistry to get the extractions working, once I finish the lab work I will need statistical skills to analyze the data, and I will need writing skills again to actually craft a paper. Perhaps most importantly though, I really need to understand the ecology of my species in order to make sense of the data that I am collecting. In other words, it’s not just enough to know a bit about the background and get the raw data, I need to actually understand things well enough that I can make sense of my results and see how they fit into the big picture. So a tremendous amount of work goes into a project like this, and a lot of knowledge and skill is required.

At this point, I again want to clarify that I’m not trying to convince you that I’m a brilliant, naturally gifted researcher, because I’m not. What I am, however, is highly trained. This brings me to perhaps my most important point in this post: science is hard, and it takes lots of training and work. I couldn’t have done a project like this when I started my scientific training almost a decade ago, and, honestly, I couldn’t even have done it a few years ago. It takes years of extremely hard work to become an academic because there is so much that you need to know. There is an overwhelming amount of knowledge that needs to be acquired and skills that need to be mastered in order to do research like this, and it takes a long time to slowly build up that knowledge and those skills. As a result, I get very upset when people who have never taken an advanced science course, never set foot in a lab, never learned statistics, etc. pretend to know more than scientists and claim that scientists are wrong on vaccines, climate change, GMOs, evolution, etc. To be clear, I’m not being elitist here. I’m not suggesting that scientists are better or smarter than everyone else, but they are highly trained and experienced, and that matters. The idea that you will understand a topic better than scientists simply by reading some websites and books is insane and extremely arrogant. Scientists are carefully trained on how to design experiments and analyze data, they spend an inordinate amount of time reading every available paper on the topics that they study, and they spend years actually doing research. That type of training, knowledge, and experience isn’t going to be toppled by a few Youtube videos.

Note: I want to be clear that I am not trying to be dismissive of citizen science projects, because I actually think that those are fantastic, largely because they help to train the public to think like scientists. Similarly, I am not trying to disparage anyone who is trying to learn about science. As I said, being a scientist is as much about training and diligence as anything else, and anyone can learn how to be a scientist. My point is simply that you shouldn’t place too much weight on your degree from Google University.

It’s a group effort

I’ve been writing this post in the first person, but I want to be totally clear that I have not been doing this alone. I have had a tremendous amount of help and support at every step of this. I have gotten help and advice from my fellow graduate students who are in the lab working on their own projects, I have gotten advice from post-doctoral researchers who recently finished their PhDs, and I have gotten advice and help from my advisers, each of whom is an experienced researcher who spends much of their time passing on their knowledge to students. Further, this is the norm in science and has been the norm for me at every level of my training. I have had great mentors from day one and have learned a tremendous amount from working with them. Once again, that’s a type of practical learning that just can’t be matched by surfing the web. Also, you should realize that science is extremely collaborative, and most projects get input and advice from a whole host of people who don’t make it onto the actual author list. This is, in my opinion, one of the greatest strengths of science, because everyone has a different set of skills and a different knowledge base, so when those skills and knowledge bases are combined, you end up with an extraordinary and diverse amount of knowledge and ability being funneled into one project.

Most scientists aren’t incompetent or unethical

The final thing that I want to deal with in this post is the notion that scientists are either hopelessly incompetent or horribly unethical. I hear these claims all of the time, and they are ludicrous. For example, when it comes to climate change, I frequently hear people say things like, “the answer is obvious, it’s the sun, but scientists just plow forward with their beliefs instead of looking at the obvious answer” or “but the climate has changed repeatedly in the past, so it’s silly to think that the current change isn’t natural.” Claims like that are absurd. Scientists aren’t incompetent, and they have, in fact, carefully tested the natural drivers of climate change (including the sun) and consistently found that natural causes of climate change cannot explain the current warming, but our greenhouse gases can (more details and sources here). Scientists spend a great deal of time testing every reasonable possibility that they can think of, as well as systematically refining their methods and analyses to ensure that they get accurate results. That is one of the things that I hope you will take away from my two months of trying to get a reliable extraction method working. There were lots of points at which I could have stopped and used a method that I knew wasn’t accurate or reliable, but I didn’t do that because I wanted to make sure that my final results will be correct.

Once again, this type of situation is the norm for scientists. Talk to any researcher, and they will tell you very similar stories of weeks, months, or even years spent trying to get a method to work reliably, trying to refine an analysis to make it more accurate, etc. We all spend an inordinate amount of time trying to make sure that we are doing things correctly and are accounting for all reasonable possibilities. To be clear, scientists aren’t perfect, and mistakes certainly get made, but most of us care tremendously about producing high quality research because we are curious people who want answers. That’s why we went into science (it certainly wasn’t for the money, because there’s not much of that). I chose and designed my current project because it is a topic that I care about. I want my results to be correct and accurate because I want to know what the answer is. That type of curiosity is a huge driving force in science, and it forces us to be diligent.

This brings me, finally, to the topic of widespread corruption. Many people are under the delusion that scientists are just in it for money, and we go around falsifying data whenever it disagrees with us. That idea is, however, total nonsense. If I didn’t actually care about the results, if I was willing to falsify data, then why on earth would I spend two months working 60+ hours a week trying to get a reliable extraction method working? If I was just going to make up the results, then I could have just gone ahead with one of the crappy, unreliable methods. For that matter, I could have just skipped the lab work altogether and sat down at my computer and generated fake results. Similarly, the idea that I would get to the end of this project only to have someone pay me to keep the results quiet or to change the results is nuts. Look, by the time that it is all said and done, I will have invested close to a year of my life into this project. I have poured everything into it, and there is no way on earth that I am going to get to the end and either not publish or change my results just because someone offers me a few thousand dollars. I am not so easily bought, and neither are the majority of scientists. Again, we don’t do this for the money. If money was what we cared about, we would have chosen a different field. We do this because we want to know how things work. We do this because we care about expanding mankind’s knowledge and making the world a better place, and the vast majority of us would not be willing to bury research for a simple bribe. To be clear, there are bad apples in every group, but again, what I am describing here is not out of the ordinary. Go talk to any scientist anywhere in the world and they will have similar stories, because this is the norm for science.


How long is long enough? Do we need more climate data?

2016 was the hottest year on record, making it the third year in a row to set a new temperature record. Indeed, the past few decades have been so warm that anyone who is 31 years old or younger has never experienced a single month when the earth’s temperature was below average (that’s using the 1951–1980 average that is generally used for benchmarking global temperatures; NASA data). That’s pretty striking evidence that the planet is warming. Nevertheless, there are many who object to this and claim that we simply don’t have enough data to know what is really going on. They point to the fact that our global records only go back to 1880 and insist that 130+ years is nothing when you consider the entire history of planet earth. However, this argument ignores several key points that I will elaborate on. First, we actually do know a lot about the climate prior to 1880, and we have used those data to understand how the climate works. Second, by using our understanding of past climate, we have been able to test the current drivers of climate change and determine with an extremely high degree of confidence that the current warming is not natural.

How long is long enough?

Before I deal with the actual evidence itself, I want to point out a fundamental problem with this argument. If you think that 130+ years isn’t enough data, then how much will be enough? How long does the trend need to continue to convince you? Think about it, if the argument is that 130+ years is nothing compared to the billions of years that earth has been around for, then there is probably no realistic amount of data that would convince you. For example, would 200 years convince you? I doubt it, because in the grand scheme of things, those extra 70 years are also minuscule. To put this another way, people have been saying “there isn’t enough data” for decades now, even as more data has continued to accumulate. Back in 2000, for example, tons of people said that we don’t have enough data and the trend probably won’t continue, but 16 additional years of unusually warm (and often record-setting) data have been collected since then, and many of them still aren’t convinced. So how long does the trend have to last before you’ll accept it?

My point is that this argument sets up a non-falsifiable situation that allows people to appear rational by demanding more evidence, when the reality is that no amount of evidence will convince them. To put that another way, people who use this argument are demanding an arbitrary amount of data, rather than actually looking at the quality and content of the data that we already have. This argument is really a cop-out. It is an excuse for blind ignorance, rather than a logical argument. So if you are prone to using this argument, I hope that you will set it aside for a minute and actually look at the data that we have gathered.

Past climate data

When we talk about global temperature records, we are generally referring to the period starting in 1880, because that is when we had sufficient, real-time measurements from around the world to confidently state the global temperature. However, we do in fact have climate data going back much, much further thanks to things like ice cores, the ratios of certain chemicals, etc. So although we may not be able to confidently state a precise global mean temperature before 1880 that is directly comparable to our modern measurements, we still have a really good understanding of what the climate was like. For example, Martin et al. (2005) used the Mg/Ca ratios of benthic foraminifera (a marine protist) to trace ocean temperatures back 90 thousand years (which is substantially longer than 130+ years).
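
To give a feel for how a chemical ratio becomes a temperature estimate, here is a rough sketch of the kind of exponential calibration that is commonly used for foraminiferal Mg/Ca. The constants below are illustrative placeholders, not the species-specific values from Martin et al. (2005), so treat this as a toy version of the approach rather than the actual published calibration.

```python
import math

# Illustrative calibration of the common form Mg/Ca = B * exp(A * T),
# where T is temperature in degrees C. A and B are placeholder values;
# real calibrations are fitted per species and per study.
A = 0.10   # sensitivity (per degree C) -- assumed for illustration
B = 1.0    # pre-exponential constant (mmol/mol) -- assumed for illustration

def temperature_from_mg_ca(mg_ca_ratio: float) -> float:
    """Invert the calibration to estimate temperature from a measured Mg/Ca ratio."""
    return math.log(mg_ca_ratio / B) / A

# Under these placeholder constants, a shell with Mg/Ca = 1.3 mmol/mol
# implies a temperature of roughly 2.6 degrees C.
print(temperature_from_mg_ca(1.3))
```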

This is really important, because we can use those past climate data to understand things like how solar cycles and CO2 affect the climate. Scientists have, of course, done this, and here is what they found in a nutshell. The earth goes through prolonged cycles known as Milankovitch cycles that affect the earth’s eccentricity, precession, and obliquity (i.e., the shape of its orbit, the wobble of its axis, and its axial tilt), and this affects the amount of energy entering the earth or the location/season at which it enters. This (as well as other factors like solar activity) often causes a small amount of warming (often regional), and that warming causes the oceans to release the CO2 that is stored in them (Martin et al. 2005; Toggweiler et al. 2006; Schmittner and Galbraith 2008; Skinner et al. 2010). That CO2 then drives the bulk of the warming (this is why there is a brief “lag” in the climate data; Lorius et al. 1990; Shakun et al. 2012). So in short, past events of global warming occurred whenever something caused enough small-scale warming (often regional) for the oceans to release CO2, and that spike in CO2 caused most of the actual warming (more details here).

Our current warming

Now that we have established the drivers of past climate change, we can look at our current situation to see whether or not natural factors are causing the current warming (spoiler alert, they aren’t). You see, the thing that the “only 130+ years of data” argument ignores is that we already understand causal relationships when it comes to the drivers of climate change. In other words, scientists aren’t just sitting back and saying “well it’s been warming for 130+ years, therefore it must be us and it will keep on warming until we change.” Rather, they have carefully examined the cause of the warming, and it is the understanding of that cause that allows us to be so confident.

So what is the cause? You probably guessed it; it’s CO2. The image to your right shows the earth’s atmospheric CO2 concentration over the past 800,000 years. You’ll notice that it is substantially higher now than it has been at any point in that time period (surely 800,000 years is long-term enough). Also, remember that we know from our studies of past climate change that CO2 is a major factor in driving planetary warming. At this point, the situation is really quite simple: we know that CO2 traps heat and is important for regulating our climate, we know that CO2 has been a major driver of past climate change, and we know that we have greatly increased the CO2 in our atmosphere; therefore, the only logical conclusion is that we are causing the climate to change. Please note that this conclusion is not based on 130+ years of warming. Rather, it is based on understanding physics and the factors that drive climate change. Indeed, scientists predicted that burning fossil fuels could change the climate way back in 1896. So the recent warming is simply the confirmation of what scientists predicted over 100 years ago.

To put this another way, we can demonstrate that we are the cause using a simple syllogism.

  1. In the past, large increases in CO2 have caused the planet to warm
  2. We have caused a large increase in CO2
  3. Therefore, we are causing the climate to warm

This figure from Hansen et al. 2005 shows the effect of both the natural and anthropogenic drivers of climate change. Notice how only anthropogenic sources show a large warming trend. Also, see figure 2 of Meehl et al. 2004.

Nevertheless, you may protest and say, “what about volcanoes, the sun, etc.?” Well, first off, volcanoes actually only emit a very tiny amount of CO2 (less than 1% of what we produce; Gerlach 2011). Second, scientists have carefully looked at volcanoes, the sun, and other natural drivers of climate change, and they simply cannot explain the current warming without including our greenhouse gas emissions (Stott et al. 2001; Meehl et al. 2004; Hansen et al. 2005; Allen et al. 2006; Lean and Rind 2008; Imbers et al. 2014). In other words, we know that the current warming isn’t natural, because we have tested the natural causes of climate change, and they all come up short (even when combined together; more details here). Output from the sun, for example, simply doesn’t correlate with our current warming, therefore we know that it isn’t the sun that is driving it (note: the actual tests are far more complicated than simple correlations, because they both control for confounding variables and combine the effects of variables, but it is often convenient to think about it as simple correlation to start with, then build complexity on top of that). To be clear, these tests have also shown that the current warming is not 100% human induced. There is some natural background fluctuation, but the bulk of it is man-made, and it will continue if we don’t reduce our fossil fuel use.
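
To make the “far more complicated than simple correlations” point a bit more concrete, here is a minimal sketch of the kind of multiple-regression attribution that the studies cited above use. Everything in it is a synthetic placeholder (the forcing series and the “observed” temperature are made up), so it only illustrates how several drivers can be weighed simultaneously rather than one at a time; it is not an analysis of real data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_years = 130

# Synthetic stand-ins for forcing time series (NOT real data):
solar = 0.1 * np.sin(np.linspace(0, 24, n_years))      # quasi-cyclic solar variability
volcanic = -np.abs(rng.normal(0, 0.05, n_years))        # occasional cooling spikes
anthropogenic = np.linspace(0, 1.0, n_years)             # steadily rising greenhouse forcing

# Synthetic "observed" temperature, built mostly from the anthropogenic term plus noise.
temperature = 0.1 * solar + 0.3 * volcanic + 0.9 * anthropogenic + rng.normal(0, 0.05, n_years)

# Regress temperature on all drivers at once (plus an intercept), so each coefficient
# reflects that driver's contribution with the other drivers held fixed.
X = np.column_stack([solar, volcanic, anthropogenic, np.ones(n_years)])
coeffs, *_ = np.linalg.lstsq(X, temperature, rcond=None)
print(dict(zip(["solar", "volcanic", "anthropogenic", "intercept"], coeffs.round(2))))
# With these synthetic inputs, the anthropogenic coefficient dominates the fit,
# mirroring the structure of the attribution result described above.
```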

It is probably also worth mentioning that we are certain that the increase in CO2 is from us because of isotope ratios. I explained this in more detail here, but in short, carbon has two stable isotopes (C13 and C12), and the ratio of those two isotopes in fossil fuels is different from the ratio that is in the atmosphere. Therefore, if the CO2 is from us burning fossil fuels, we would expect the atmospheric ratio to shift toward that of our fossil fuels, which is exactly what has happened (Bohm et al. 2002; Ghosh and Brand 2003; Wei et al. 2009). These ratios are like fingerprints that demonstrate beyond any reasonable doubt that the CO2 is from us.
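
As a rough illustration of why those ratios behave like a fingerprint, here is a simple two-source mass-balance mixing calculation. The δ13C values are typical textbook figures and the carbon masses are placeholders (and the sketch ignores exchange with the ocean and biosphere), so the numbers are illustrative rather than the published estimates; the point is simply that adding isotopically light fossil carbon has to pull the atmospheric ratio down.

```python
# Simple two-source mixing of carbon isotope signatures (delta-13C, in per mil).
# Fossil-fuel carbon is strongly depleted in C13 relative to the atmosphere,
# so adding it shifts the atmospheric ratio downward.
delta_atm_initial = -6.5    # roughly pre-industrial atmospheric delta-13C (per mil)
delta_fossil = -28.0        # typical fossil-fuel delta-13C (per mil)

mass_atm = 600.0            # placeholder carbon mass already in the atmosphere (GtC)
mass_added = 150.0          # placeholder fossil carbon added (GtC)

# Mass balance: the new ratio is the mass-weighted average of the two pools
# (ignoring ocean/biosphere uptake, which buffers the real-world shift).
delta_mixed = (mass_atm * delta_atm_initial + mass_added * delta_fossil) / (mass_atm + mass_added)
print(f"delta-13C shifts from {delta_atm_initial} to {delta_mixed:.2f} per mil")
```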

Confirmation via models

At this point, you may be tempted to think that all of this is well and good, but we can’t really know what is happening without more data. I disagree, but there are still additional pieces of evidence that we can bring to bear. For example, we can be highly confident that our current understanding is correct thanks to models (as well as other data that I’ll talk about later). Scientists are a pretty clever bunch, and what they realized is that we can construct models based on our current understanding of how the climate works, and use those models to make predictions about the future. If our understanding is correct, then the models should make accurate estimates, and if our understanding is wrong, the models should make incorrect estimates (science thrives on testable predictions like this). Granted, there is a certain amount of uncertainty around this, because the models are reliant on the skills of their programmers, as well as natural factors that are out of our control (e.g., if the output of the sun changes and is different from the model input, that will cause the model to be wrong, even if our basic understanding of the climate is correct). Nevertheless, these models have actually done a remarkably good job of predicting the current warming (Hansen et al. 2006; Frame and Stone 2012; Rahmstorf et al. 2012; Cowtan et al. 2015; Marotzke and Forster 2015). Now, yes, I know, you have read on the internet that the models have all been wrong. Well, I hate to break it to you, but not everything on the internet is true. The claims that the models have been wrong are based on cherry-picking and misrepresenting how the models work. The actual scientific analyses have found that they have done a good job of predicting the current warming (more details here and here). Also, it is worth mentioning that the models have been refined over time because our understanding of the climate has improved over the years. However, those refinements simply improve the accuracy of the models; they don’t change the overarching trend or predictions.
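
For readers wondering what a climate model even is at its core, here is a toy zero-dimensional energy-balance calculation. It is nothing like the coupled models in the papers cited above; it is just a sketch of the “energy in = energy out” bookkeeping they are built on, using standard textbook values (with an assumed effective emissivity standing in for the greenhouse effect).

```python
# Toy zero-dimensional energy-balance model: the planet warms until the energy
# it radiates away balances the solar energy it absorbs.
SIGMA = 5.67e-8     # Stefan-Boltzmann constant (W m^-2 K^-4)
SOLAR = 1361.0      # solar constant (W m^-2)
ALBEDO = 0.3        # fraction of sunlight reflected back to space
EMISSIVITY = 0.61   # effective emissivity; lower values = stronger greenhouse effect (assumed)

def equilibrium_temperature(emissivity: float) -> float:
    absorbed = SOLAR * (1 - ALBEDO) / 4.0             # incoming energy averaged over the sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25   # solve emissivity * sigma * T^4 = absorbed

print(equilibrium_temperature(1.0))         # ~255 K: no greenhouse effect, well below freezing
print(equilibrium_temperature(EMISSIVITY))  # ~288 K: roughly Earth's actual mean temperature
```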

Satellite measurements

Finally, I want to draw your attention to one final piece of evidence which, in my opinion, is the single most conclusive demonstration that we are causing the planet to warm. When energy from the sun hits the earth’s surface, much of it is radiated back out as long-wavelength, low-energy infrared radiation (IR, aka heat energy), and CO2 in our atmosphere is capable of absorbing those long wavelengths. Thus, it traps some of that heat energy before it leaves the earth. This is a scientific fact that no one disputes, and it is actually a good thing, because the earth would be inhospitably cold without CO2 and the other greenhouse gases. Nevertheless, it can be a problem, because if that CO2 increases, then more heat is trapped and the earth warms. Again, we know that this is largely what drove past climate changes.

Now that we have those facts in place, we can make another testable prediction. If our CO2 is causing the planet to warm, then the amount of energy entering the earth should be roughly constant, but the amount of energy that is leaving the earth should decrease significantly because some of it is being trapped by our CO2. Scientists have actually measured this using satellites, and just as expected, the energy from the sun is basically unchanged, but less energy is leaving at the specific wavelengths that CO2 traps (Harries et al. 2001; Griggs and Harries 2007; Huang et al. 2010). This is extremely clear evidence that we are causing the planet to warm, and if you want to say that the warming isn’t from us, then I want to know where that IR is going. We know that roughly the same amount is coming in, but less is going out, so if it’s not our CO2 that is trapping it, what is it? (Note: recently, the sun’s output has decreased slightly, but it is not enough to explain the decrease in outgoing IR, and the pattern hasn’t been consistent; i.e., the sun’s output does not correlate well with the decrease in IR, but our CO2 does.)
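
As a back-of-the-envelope companion to those satellite measurements, here is the widely used simplified expression for how much extra energy a given CO2 increase traps. The logarithmic form and the 5.35 coefficient are the standard published approximation (not something derived here), and the concentrations are round numbers chosen for illustration.

```python
import math

def co2_forcing(c_new_ppm: float, c_ref_ppm: float) -> float:
    """Approximate extra trapped energy (W m^-2) from raising CO2 from c_ref to c_new.

    Uses the common simplified logarithmic fit; the 5.35 coefficient is the
    standard published approximation, not a value derived in this post.
    """
    return 5.35 * math.log(c_new_ppm / c_ref_ppm)

# Pre-industrial ~280 ppm vs. a round modern-era figure of ~400 ppm:
print(co2_forcing(400, 280))   # ~1.9 W m^-2 of extra heat retained per square metre
```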

In conclusion, let’s put all of the pieces together. We know that CO2 traps heat and warms the planet, we know that increases in CO2 caused past climate change, we know that we have greatly increased the CO2 in the atmosphere, and we know that CO2 is currently causing the planet to trap excess heat. The conclusion from that is inescapable: we are causing the planet to warm. We do not need more data to be confident in that conclusion.

Note: To be clear, I’m not suggesting that we should stop studying the climate (we certainly should, and there is still much to learn), but the evidence that we have so far is sufficient to be highly confident that we are causing the climate to change. To use an analogy, there is still a lot to learn about the human body and we should keep studying it, but we have plenty of data for me to confidently state that your liver isn’t the organ responsible for pumping blood.

Literature Cited

  • Allen et al. 2006. Quantifying anthropogenic influence on recent near-surface temperature change. Surveys in Geophysics 27:491–544.
  • Bohm et al. 2002. Evidence for preindustrial variations in the marine surface water carbonate system from coralline sponges. Geochemistry, Geophysics, Geosystems 3:1–13.
  • Cowtan et al. 2015. Robust comparison of climate models with observations using blended land air and ocean sea surface temperatures. Geophysical Research Letters 42:6526–6534.
  • Frame and Stone 2012. Assessment of the first consensus prediction on climate change. Nature Climate Change 3:357–359.
  • Gerlach 2011. Volcanic versus anthropogenic carbon dioxide. EOS 92:201–202.
  • Ghosh and Brand. 2003. Stable isotope ratio mass spectrometry in global climate change research. International Journal of Mass Spectrometry 228:1–33.
  • Griggs and Harries. 2007. Comparison of spectrally resolved outgoing longwave radiation over the tropical Pacific between 1970 and 2003 using IRIS, IMG, and AIRS. Journal of Climate 20:3982–4001.
  • Hansen et al. 2005. Earth’s energy imbalance: confirmation and implications. Science 308:1431–1435.
  • Hansen et al. 2006. Global temperature change. Proceedings of the National Academy of Sciences 103:14288–14293.
  • Huang et al. 2010. Separation of longwave climate feedbacks from spectral observations. Journal of Geophysical Research 115:D07104
  • Imbers et al. 2014. Sensitivity of climate change detection and attribution to the characterization of internal climate variability. Journal of Climate 27:3477–3491.
  • Lean and Rind. 2008. How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006. Geophysical Research Letters 35:L18701.
  • Lorius et al. 1990. The ice-core record: climate sensitivity and future greenhouse warming. Nature 347:139–145.
  • Marotzke and Forster 2015. Forcing, feedback and internal variability in global temperature trends. Nature 517:565–570.
  • Martin et al. 2005. Role of deep sea temperature in the carbon cycle during the last glacial. Paleoceanography 20:PA2015.
  • Meehl, et al. 2004. Combinations of natural and anthropogenic forcings in the twentieth-century climate. Journal of Climate 17:3721–3727.
  • Rahmstorf et al. 2012. Comparing climate projections to observations up to 2011. Environmental Research Letters 7:044035.
  • Schmittner and Galbraith 2008. Glacial greenhouse-gas fluctuations controlled by ocean circulation changes. Nature 456:373–376.
  • Shakun et al. 2012. Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation. Nature 484:49–54.
  • Skinner et al. 2010. Ventilation of the deep Southern Ocean and deglacial CO2 rise. Science 328:1147–1151.
  • Stott et al. 2001. Attribution of twentieth century temperature change to natural and anthropogenic causes. Climate Dynamics 17:1–21.
  • Toggweiler et al. 2006. Mid-latitude westerlies, atmospheric CO2, and climate change during the ice ages. Paleoceanography 21:PA2005.
  • Wei et al. 2009. Evidence for ocean acidification in the Great Barrier Reef of Australia. Geochimica et Cosmochimica Acta 73:2332–2346.

You’re probably wrong

I want to begin this post with a simple thought experiment. It is not original with me, and unfortunately, I don’t recall where I first heard it, but I think that it is a useful exercise. I want you to think about all of the thousands of different views, beliefs, ideas, etc. that you hold. Think about all of the things that you think are true. Now, estimate the percentage of them that are actually true. In other words, if you are honest with yourself, you’ll have to admit that you are almost certainly wrong some of the time. It’s extremely unlikely that you are right 100% of the time. So how often do you think that you are right? 80% of the time? 90% of the time? 95%? Are you presumptuous enough to go to 99%? Regardless of what number you settled on, it was surely something less than 100% (unless you are hopelessly arrogant). You’re not correct 100% of the time, and neither am I, and neither is Stephen Hawking, and neither was Einstein, etc.

Now, having established that you are wrong about at least some of your views, I want you to divide your views into things you’re right about and things you’re wrong about. Go ahead and try it. You can’t do it, and neither can I. You inherently don’t know which of your views are wrong, because if you knew that a view was false, then you wouldn’t hold it as a true belief (at least I hope that is the case). In other words, we all hold the views that we hold because we think that they are true. So we inherently think that all of our views are right even though, if we are honest, we have to admit that at least some of them are almost certainly false.

At this point, we have established two fundamental and critically important points that should act as guiding concepts for you.

  1. Some of your views are wrong
  2. You don’t know which ones are wrong

The implication of those two facts is obvious and inescapable: you must always be willing to consider the possibility that you are wrong about any given topic. This is one of the guiding principles of skepticism, and it is one that everyone should take to heart. It does not matter who you are, it doesn’t matter how smart you are, and it doesn’t matter how well educated you are; you could still be wrong about any of your views. Therefore, you must always consider contrary evidence when you are presented with it (see note at the bottom).

In concept, this sounds simple enough, but actually practicing it is admittedly difficult. We like to think that we are right. We take comfort in thinking that we understand the way that the world works, and being faced with evidence that disagrees with our views is unpleasant. We don’t like being wrong. As a result, we tend to cling to any source that agrees with us, and blindly reject any source that disagrees with us. More often than not, when we are presented with a source or argument that disagrees with one of our views, we simply write it off as “fake news,” we claim that the author had an agenda, we invent elaborate conspiracy theories, we cherry-pick evidence to hurl back at it, etc. We are willing to do anything that it takes to avoid facing the harsh truth that we might be wrong. This is called motivated reasoning, and it is a very serious problem because it prevents you from actually considering evidence and making rational decisions. Fortunately, it is a problem that you can overcome.

You can train yourself to be skeptical. The first step in that is simply to admit that you might be wrong, and, following that, force yourself to actually consider contrary evidence. When you read an article that you like, don’t instantly assume that it is true and hit the “share” button. Rather, stop and think carefully about it. Look at the arguments that it is making, look at the sources that it is citing, fact check the article, read the opposing articles, talk to people who hold the opposing view, etc. Similarly, when you see an article that you disagree with, don’t automatically assume that it is false. Don’t assume that it is “propaganda” or “fake news.” Again, fact check it. There are lots of great fact checking websites out there, and you should use them. Try as hard as you can to find information from good sources that supports the opposing view. In other words, you should try to disprove your own position. What most people do is simply look for information that supports their view, rather than looking for information that opposes it, and that’s a serious problem. Indeed, it leads to what is known as a confirmation bias. Most people “fact check” by looking for other articles that agree with them. They, of course, easily find them; then they instantly assume that those articles are true, and the existence of those articles further bolsters their view. Do you see why that is a problem? It is a feedback loop that will always leave you with the same view that you started with. There will always be other people who think you’re right, so if you only look at what they have to say, you will only ever see arguments that agree with you, and you will never have to actually consider the possibility that you are wrong. That is a monumental problem.

On that note, try to avoid echo chambers. Don’t unfriend everyone who disagrees with you, and don’t block sources that say you’re wrong. Rather, you should engage with those people and sources. See what they have to say, listen to their arguments, and always be willing to be wrong. I don’t want to get political here, but politics presents a very good example. In recent months, I have seen a large number of people reach the conclusion that only one major news company is trustworthy, and all of the others are agenda-driven liars. Thus, they now get their news from only one source. That would be problematic regardless of which source they chose, but it is especially problematic since their chosen source (which won’t be named) is notoriously biased towards one extreme of the political spectrum. Thus, these people have created an echo chamber for themselves, where all of their “news” comes from people who agree with them, and any source that disagrees with them is automatically considered to be false. That is a tremendous problem regardless of which end of the political spectrum it falls along, and you should strive to avoid it. Get your information from multiple sources, fact check, and actually read/watch and consider opposing arguments. Also, make sure that the sources that you are using are actually legitimate sources. Although you shouldn’t automatically assume that a source is fake news just because it disagrees with you, there are, nevertheless, a lot of fake news sources out there, and you should be careful to avoid them. Similarly, be leery of sources that are known to have an agenda, because they tend to be biased.

This applies to everyone, regardless of what views you hold and what degrees and qualifications you possess.

In short, you should always consider the possibility that you are wrong, carefully fact check everything (especially if it agrees with you), listen to opposing arguments, avoid echo chambers, get information from multiple sources, and even try to discredit your own views rather than trying to support them. I realize that this isn’t easy to do. It is against our nature to be that willing to dismiss our views, but in our modern age of misinformation, this type of skepticism is invaluable and you should strive for it in all areas of life (science, politics, religion, ethics, etc.). I doubt that anyone has ever achieved perfect skepticism (I certainly haven’t), and we are all prone to confirmation biases (myself included), but this is something that you can get better at. You can train yourself to be a skeptic, and as with most things, the more that you do it, the easier that it will become. Make fact checking a habit, rather than something that you only do when you want to disprove someone, and above all else, be willing to be wrong.

Note: Some people may challenge my thought experiment by pointing out that not being able to know which of your views is false is not the same thing as not being able to know that some of your views are true. In other words, you could argue that it is possible to divide your views into things that you are 100% certain are true and things that you are less than 100% certain about. I agree that you can be more certain about some things than others, but I disagree that 100% certainty is ever possible. There are lots of philosophical reasons for this, some of which I discussed here (for example, look up Descartes, the brain in a vat argument, etc.). More practically, however, I think that claiming 100% certainty is the epitome of arrogance. By claiming that you are 100% certain about something, you are saying that you are so knowledgeable, intelligent, and infallible that it is impossible for you to be wrong on a given topic, and that is a rather extreme claim to make. I don’t care who you are, you aren’t perfect enough to achieve 100% certainty. It is always possible that some piece of evidence has eluded you or some cognitive bias has crept in. Thus, you should always be willing to be wrong. To be clear, I’m not suggesting that you can’t be highly certain about some things. You can reach strong conclusions based on the available evidence, but you should always be willing to challenge those conclusions when presented with new evidence (to be clear, I mean actual, proper scientific evidence, not anecdotes, assumptions, conjecture, conspiracy theories, etc.).

Note 2: As several people have pointed out in comments, mathematics are a legitimate exception to my statements regarding absolute certainty (as are the rules of logic) because they are abstract and do not rely on observations of the physical universe. In other words, 2+2 must always equal 4 even if we are living in the matrix, being deceived by an evil genius, etc. Nevertheless, that is clearly quite different from the types of views and beliefs that I was addressing in the post.
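
For what it’s worth, mathematics is also the one domain where you can get machine-checked certainty: in a proof assistant such as Lean, the claim above is verified by pure computation.

```lean
-- 2 + 2 = 4 holds by definitional computation on the natural numbers,
-- so the proof is simply "reflexivity."
example : 2 + 2 = 4 := rfl
```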


Debunking “The doctors case for homeopathy” by WDDTY: A case study in critically evaluating internet articles

I am very pleased to publish my blog’s first guest post. A reader contacted me a little while ago to see if I would take a look at an essay he had written debunking some of the nonsense that What Doctors Don’t Tell You had written about homeopathy, and although I don’t generally do guest posts, I thought that his piece would be useful to others as an exercise in spotting false information, a rebuke of WDDTY, and a reference for the flaws in many arguments in favor of homeopathy. Therefore, I offered to post it here. Enjoy.

I want to explain the problems with an article found in What Doctors Don’t Tell You (WDDTY) titled “The doctors’ case for homeopathy.” This piece is full of misinformation, and it serves as an ideal case study of what I’ve observed to be WDDTY’s typical behavior. Here’s the full list of problems I found with the article, which I will elaborate on below:

  • Cherry picking throughout
  • Making arguments that WDDTY itself admits are invalid
  • Huge over-reliance on anecdotes
  • Outright lies
  • Referencing bad data/bad science
  • Straw man fallacies
  • Deceitful rhetoric, ad hominems, and hyperbole
  • Missing references (that is, unsubstantiated claims)
  • Non sequiturs coupled with hypocrisy
  • Massive internal contradictions
  • Misrepresenting information and quotes from trusted sources

…but all written in a manner that at first appears to be balanced, transparent, and thorough. It is none of these things.

What follows should serve as:

  1. A clear demonstration that the evidence WDDTY provides in favor of homeopathy is unreliable or wrong
  2. A thorough indictment of WDDTY
  3. A cursory lesson in recognising dishonest or fallacious arguments
  4. A cursory lesson in good scientific practice, and good scientific writing

Let’s take a look at each of the problems:

 

Making arguments that WDDTY itself admits are invalid

The broad pattern of the article is:

  1. Make a lengthy argument
  2. Swiftly concede that the argument made is invalid
  3. Move on to another (or, even, repeat the same) bad lengthy argument
  4. Back to step 2.

Most of the article’s content is rendered useless by these two admissions alone:

  • “Such stories can be dismissed as examples of a particularly dramatic placebo effect or even coincidence”
  • “of course, the fact that pharmaceutical medicine has a poor evidence base doesn’t necessarily make a stronger case for homeopathy.”

With these two remarks the writers have dismissed most of their own arguments. But they know full well that readers will ingest the broad points they’ve made, and gloss over the admissions of their worthlessness. In fact, the subtle admission only makes them (falsely) appear so much more reasonable and balanced. Clever.

This article also claims that scientific studies are thoroughly unreliable, before trying to make a case for homeopathy, using… scientific studies! It’s a neat example of the kind of shameless cherry picking that WDDTY repeatedly uses.

 

Overreliance on anecdotes

“GPs who prescribe homeopathy acknowledge that their medicines’ mode of action is difficult to explain scientifically, but they cannot deny the often startling evidence of their own eyes.”

Yes, we can deny the evidence of our own eyes. We have to question such evidence, because we know how susceptible it is to (conscious or unconscious) bias, error and misinterpretation. Anyone who’s read Daniel Kahneman’s “Thinking, Fast and Slow” will know that our own interpretations of our own limited experiences are unreliable. This is why we perform blinded, randomized, controlled trials. The “evidence of our own eyes” regarding homeopathy is perfectly well explained by proven phenomena: the placebo effect, external confounders, and regression to the mean (a phenomenon that is conspicuously never mentioned by name in a lengthy article pretending to be balanced about homeopathy). WDDTY’s assertion here shows just how happy the magazine is to accept completely unreliable information — anecdotes — so long as it supports WDDTY’s position.
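
Regression to the mean is easy to demonstrate for yourself. Here is a small simulation, using purely made-up numbers, in which nobody receives any treatment at all, yet the people who felt worst on day one look “improved” on day two, simply because part of their bad day was noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symptom scores: each person's score = stable baseline + day-to-day noise.
baseline = rng.normal(50, 5, size=10_000)
day1 = baseline + rng.normal(0, 10, size=10_000)
day2 = baseline + rng.normal(0, 10, size=10_000)  # no treatment given to anyone

# People tend to reach for a remedy on the day they feel worst (highest scores on day 1).
feeling_worst = day1 > 65
print(day1[feeling_worst].mean())  # roughly 70: very high on the day they sought a remedy
print(day2[feeling_worst].mean())  # noticeably lower, purely from regression to the mean
```

Hand everyone in that worst-off group a sugar pill on day one and, to them, the pill will appear to have “worked.”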

Remember, the anecdotes that make up most of this article “can be dismissed as examples of a particularly dramatic placebo effect or even coincidence”. So including them was just dishonest.

“…but they’re not uncommon.”

It’s true that anecdotes are not uncommon. They’re everywhere, and in support of anything you can name. Commonality of a belief doesn’t make it any more true.

 

Outright lies

Sadly, this is where WDDTY comes into its own.

“from the very beginning, there’s been a turf war between two opposing medical systems: the pharmaceutical doctors and the homeopathic doctors.”

There are no such things as “pharmaceutical doctors.” GPs use an extensive array of different treatments (including no treatment); some of those treatments are drugs. (Those same doctors would embrace homeopathic treatments if only the evidence supported them.) The obsessively black-and-white approach that WDDTY takes is insidious and only makes its advocates look simple.

“[In Europe] studies have shown that primary-care practices offering homeopathy have better patient outcomes than pharma-only practices, and often at lower cost.”

To repeat: there are no “pharma-only practices”.

“the monoculture of pharmaceutical drugs”

To repeat: there is no “monoculture of pharmaceutical drugs”.

“In 2010, [the British] Science and Technology Committee report recommended … banning any further homeopathic research”

Here’s what the report actually said:

“We would challenge Professor Harper’s comment that research funding should be directed towards exploring theories that are not scientifically plausible. Research funding is limited and highly competitive. The Government should continue its policy of funding the highest quality applications for important scientific research determined on the basis of peer review.” Then: “We recommend that the Government Chief Scientific Adviser and Professor Harper, Chief Scientist at the DH, get together to see if they can reach an agreed position on the question of whether there is any merit in research funding being directed towards the claimed modes of action of homeopathy.” Also: “There has been enough testing of homeopathy and plenty of evidence showing that it is not efficacious. Competition for research funding is fierce and we cannot see how further research on the efficacy of homeopathy is justified in the face of competing priorities. It is also unethical to enter patients into trials to answer questions that have been settled already.”

In the entire 275-page report, there is absolutely no mention of a research “ban”. WDDTY has taken the report’s clear, balanced and rational recommendation — to direct what limited funding is available towards new research that has at least a chance of yielding positive results — and turned it into “banning any further homeopathic research”: not just spin, but a complete lie.

(By the way, the report is a very good read for anyone sincerely interested in a clear, fair and logical analysis of homeopathy in the UK)

“It is depressing to see the interests of patients being threatened by a small posse of poorly informed and discourteous critics, who mix a little science with denigration and abuse”

The interests of patients are benefited, not threatened, by people highlighting that a popular treatment is ineffective. It’s no “small posse” of people who recognize the failure of homeopathy, and they’re not “poorly informed” – it’s practically the entire community of scientific and medical experts. And they don’t use “a little science” – they refer to the overwhelming consensus of the best available evidence.

WDDTY then claims that studies on animals and even plants have shown homeopathy treatments working better than placebos.

“All of this non-human evidence inherently rules out the dismissive ‘placebo-effect’ explanation.” / “homeopathic remedies have repeatedly been shown to have measurable effects on non-human biological systems, which inherently rules out the placebo effect”

No, it doesn’t. There are plenty of reasons to believe that placebo effects are in evidence in animals; not least that the effects of treatments are recorded by placebo-susceptible humans.

As for plants, this is arguably the low point of the article: WDDTY’s single stated reason to believe homeopathy benefits plants is a study in which 26 out of 30 homeopathic products failed; then when the test was repeated, three of the four remaining homeopathic products failed as well (in one of the preparations, the disease even got worse). And the single successful treatment (with “a small to medium effect size”)? A product for which the study’s authors state, “the exact composition … is not known” – in fact, it appears not to actually be homeopathic at all (a homeopathic version of that same product was one of the 29 that failed). WDDTY fails to mention any of these truths (as well as failing to mention that the journal the study is published in is notoriously shoddy). Citing this as favourable evidence is called p-hacking, and it’s notoriously deceitful (un)scientific behaviour.

To be as clear as possible: the data from this study suggest that homeopathy doesn’t work on plants, and WDDTY has used it (and as its sole reference) to claim that homeopathy does work on plants. It’s clear that WDDTY’s researchers are either completely incompetent or thoroughly dishonest, both to reference this journal and to misconstrue the information within it. Is this demonstrably dreadful magazine really a source that anyone would want to refer to or rely on?

“The war is not about evidence”.

The “war” – such as it is – is all about evidence. The evidence overwhelmingly shows that homeopathic remedies are nothing but placebos – if you’re prepared to look at all of the evidence, and separate the reliable evidence from the unreliable, rather than just referencing the (bad) studies that (do or can be spun to) support homeopathy.

“Because homeopathic medicines often contain not a single molecule of an active ingredient, opponents mock them as an affront to rationality—and indeed, a threat to the whole of science.”

Homeopathy isn’t “a threat to the whole of science” – the whole of science is a threat to homeopathy. If the evidence supported homeopathy, science would evolve to accommodate it. The evidence does not support homeopathy.(Here, for curiosity, is a non-exhaustive list of occasions when the scientific consensus moved on because of new evidence)

“Opponents make much of the ‘consistent failure to demonstrate effect beyond placebo’ when trials of homeopathy are studied; this is untrue.”

It only appears untrue if you accept poor evidence from bad studies, ignore good studies, and pretend anomalies are less statistically likely in good studies than they are. Which is to say, the claim that it’s untrue is itself fundamentally dishonest.

“However, after 70 years of being part of the NHS, homeopathy now faces extinction.”

Come on, no it doesn’t. Homeopathy is a booming business. (I admit this is speculation – I’ve found it extremely hard to come by reliable data on revenue or use of homeopathy; but I see no evidence that homeopathy is facing “extinction” or anything close to it. Boiron alone made a profit of €73.9 million in 2015)

“Clearly, both homeopathy and conventional medicine are on a par with each other when it comes to evidence-based medicine”

See the “bad data” section below for detail on why this is a lie.

“the NHS needs more homeopathy, say Britain’s 400 GPs who currently prescribe it.”

In this article, four – not 400 – GPs are quoted appearing to support homeopathy. As you’ll see in the “unsubstantiated claims” section below, GP referrals for homeopathy are “almost always at the patient’s request rather than as the result of a clinical decision”. It’s yet another outright lie to claim that “Britain’s 400 GPs who currently prescribe it [say] the NHS needs more homeopathy.”

This is WDDTY’s USP: helping people who believe in things disproven by science to convince themselves that they’re actually in good company, and that experts, evidence and science support their beliefs. In reality it’s lie after lie after lie.

“such stories [are] not uncommon”

It seems they are uncommon, since in a 3,700-word piece, WDDTY has only managed to find four GPs apparently prepared to tell such stories. (We know now that that’s because most of the 400 GPs WDDTY mentions probably don’t have such stories at all – they are, in fact, uncommon.) On top of this, I’m giving WDDTY the benefit of the doubt and assuming that the quotes from GPs are real; as you’ll see in the footnote at the end of this document, WDDTY is perfectly happy to make up quotes to support the most despicable lies.

“homeopathy—with its zero side-effects and arguably adequate reliability”

A serious examination shows that, despite trying really hard to do so, this article completely fails to demonstrate that homeopathic remedies have “arguably adequate reliability.” Indeed, the fact that the writers have to resort to the lies, misinterpretations, ad hominems and fallacies that they do suggests (correctly, as it happens) that there isn’t any good evidence that homeopathy has even arguably adequate reliability.

 

Referencing bad data/bad science

“An analysis of the responses to the homeopathic medicines revealed…”

The audit cited here (Tom Robinson in 2006) bears little resemblance to a scientific study. There’s no control, no placebo, no blinding. It’s not a study at all, just another series of anecdotes.

“Only 3 per cent of his [homeopathy] patients found their condition worsened following treatment”

Pointing out that ineffective remedies don’t usually make things worse is redundant. It’s not a point in favor of homeopathy.

“[Critics’] attacks echo those of the medieval Vatican against Galileo: it cannot be true, so it’s not.”

“It cannot be true, so it’s not” is a perfectly sound argument. It was wrong in the case of Galileo (if anyone even said it to him) because “it cannot be true” was a false claim. In the case of homeopathy: it cannot be true. So it’s not. (Also, and far more significantly: when it’s tested, it’s shown not to be true.)

By the way, the “Galileo Gambit” is a famous (and famously bad) debating technique.

“[Homeopathy’s] ultra-low doses are prepared by ‘serial dilution’ in water—adding a drop of the original medicine to 100 drops of water … homeopaths not only say such extreme dilutions work, but actually work better than the original medicine.”

Even WDDTY’s own writers appear not to have understood the basic theory of homeopathy. There is no “original medicine” to be diluted; only a substance that causes similar symptoms to those the patient is suffering.

“By the end of 2014, homeopathy had been tested in 104 RCTs for 61 different medical conditions: 41 per cent were positive; 5 per cent were negative; and 54 per cent were inconclusive. This track record is strikingly similar to conventional medicine’s. A 2007 analysis of a “large random sample” of RCTs of conventional treatments revealed that 44 per cent were ‘likely to be beneficial’, 7 per cent were ‘likely to be harmful’ and 49 per cent were inconclusive”

This statement about homeopathy studies is not, however, from a proper scientific review. It’s a claim from the Faculty of Homeopathy. WDDTY is comparing a peer-reviewed paper in the prestigious Journal of Evaluation in Clinical Practice to a dubious claim from the Faculty of Homeopathy. Putting the two in the same space and trying to draw a comparison is amateurish. Luckily the Lancet has performed a fair, direct comparison. Homeopathy came out worse.

Indeed, when serious meta-analyses are performed, it’s shown that “there are no health conditions for which there is reliable evidence that homeopathy is effective.”
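
Since meta-analyses keep coming up, here is a minimal sketch of the inverse-variance pooling at their core, using made-up effect sizes: one small, noisy “positive” trial barely moves the pooled estimate once the larger, more precise trials are weighted in.

```python
import numpy as np

# Hypothetical effect sizes (e.g., standardized mean differences) and standard errors
# from five made-up trials -- purely illustrative numbers, not real homeopathy studies.
effects = np.array([0.40, 0.05, -0.10, 0.02, 0.08])   # the first is a small "positive" trial
se = np.array([0.30, 0.10, 0.12, 0.08, 0.09])          # the first is also the least precise

# Fixed-effect (inverse-variance) pooling: more precise studies get more weight.
weights = 1.0 / se**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.2f} +/- {1.96 * pooled_se:.2f}")  # about 0.03 +/- 0.09
```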

Most of the studies WDDTY references appear to suffer from at least one of the following problems:

  1. not scientific studies at all, just more anecdotes
  2. lacking controls, lacking blinding, lacking placebos
  3. performed on worthlessly small sample sizes
  4. published in poor quality journals – or even not published in a journal at all
  5. demonstrating different (sometimes opposite!) results from those WDDTY claims.

As ever, they’re all cherry picked to support WDDTY’s position, ignoring the bigger picture: that the results of hundreds of studies – the vast majority of good studies – overwhelmingly show homeopathy doesn’t work.

And a word about animal studies: animal studies, like case reports and case series (and like anecdotes), are first steps. They’re dipping your toes in the water to see if it’s warm. You use their results to determine whether further, human studies are worth running (and safe to do so). To reference an animal study with positive results when human studies have already shown negative results is thoroughly dishonest (just as referencing individual studies when there are meta-analyses demonstrating the opposite results is dishonest). It’s cherry picking, again. WDDTY’s writers are either incompetent not to know this, or dishonest to ignore it.

More on the hierarchy of scientific evidence can be found here.

 

Straw man fallacies

“Only a very few critics confine themselves to what they regard as scientific principles—people who believe that science knows everything about everything, and nothing remains to be explained—scientific ‘fundamentalists’, perhaps.”

There is literally nobody in existence “who believes that science knows everything about everything, and nothing remains to be explained”. The threat of “scientific fundamentalism” continues to be massively overstated. Pretending that the people highlighting homeopathy’s flaws are simple-minded “fundamentalists” is a cheap straw man. See the rhetoric section below for more on WDDTY’s attempt to paint a childish caricature of its opponents.

 

Deceitful rhetoric, ad hominems, and hyperbole

WDDTY is pretty shameless in its use of absurdly melodramatic language (some of these are quotes that WDDTY simply features, but fails to challenge):

  • “propaganda campaign”
  • “The Good Thinking Society (a name eerily close to Orwell’s dystopian “Ministry of Truth”)”
  • “attacks echo those of the medieval Vatican”
  • “tide of hostility”
  • “vitriolic opponents”
  • “the supposedly respectable British Medical Journal”
  • “that arch-critic of homeopathy”
  • “a small posse of poorly informed and discourteous critics”
  • “scientific ‘fundamentalists’”

This shouldn’t be necessary in a serious scientific publication that can rely on the strength of its arguments. But when you strip out the misinformation, this childish rhetoric is all WDDTY has left. Of course, painting a paranoid picture of people who don’t share your beliefs as one-dimensional, conspiratorial villains makes it far easier to dismiss those people and the valid points they make.

 

Missing references (that is, unsubstantiated claims)

This is just bad practice: another reason to steer clear of the magazine (which on its Facebook page calls itself a journal; another lie):

  • “Homeopathic medicines appear particularly effective at the extremes of age…”
  • “Many GPs prescribe antibiotics for such conditions”
  • “[In Europe] studies have shown that primary-care practices offering homeopathy have better patient outcomes than pharma-only practices”
  • “One area where homeopathy has scientifically proved more effective than conventional drugs is with upper respiratory tract infections like flu, coughs, colds and sore throats.”
  • “an appropriate homeopathic remedy, now shown to be effective according to ‘outcome studies’”
  • “Britain’s 400 GPs who currently prescribe [homeopathy]”

I sought out the source of that final statistic, that 400 GPs (only about 0.7% of the UK’s GPs, by the way) currently prescribe homeopathy. The stat turned out to be yet another unsubstantiated claim from the Faculty of Homeopathy. Along the way, I discovered that “a review carried out by West Kent NHS Primary Care Trust in 2007 found that [a referral for homeopathy] was almost always at the patient’s request rather than as the result of a clinical decision” (quote taken from Wikipedia; not the original source, as the link was broken at the time of writing). Even WDDTY’s attempts at appeal-to-authority fallacies are failures.

 

Non sequiturs coupled with hypocrisy and massive internal contradictions

“But over the past decade, conventional medicine itself has come under fire for having an equally poor evidence base…”

Conventional medicine allegedly having a poor evidence base is not an argument in favour of homeopathy (as the article itself admits moments later, thus demonstrating that this section was a waste of everyone’s time – except, again, for those people who will soak up the points presented and ignore the admission that they’re irrelevant). Ironically, WDDTY uses this section to suggest that scientific studies are practically worthless, then goes on to reference a load of scientific studies in homeopathy’s favor (two typical behaviors of proponents of flawed arguments: claiming evidence is unreliable except when it appears to be favorable; and trying to deflect attention to the flaws of others in order to avoid scrutiny of itself).

“Why should homeopathy be singled out?” asks Dr Kaplan. “To use evidence-based medicine to attack homeopathy exclusively and call for legislation against it, while huge swathes of conventional medicine lack evidence, is rather strange behaviour,” he says. “Double standards? That would be putting it euphemistically.””

Sure, if other treatments lack evidence, we should be acting against them too (and in many cases, we do, in contradiction of Kaplan’s lie that homeopathy is “exclusively” “attacked”). That’s not a reason to give homeopathy a free pass. Homeopathy gets so much attention focused on it because it’s so prominent. It gets attention focused on it because of all these anecdotes!

 

Misrepresenting information and quotes

“As a result, says Horton, “much of the scientific literature, perhaps half, may simply be untrue”.”

Logic of Science has addressed misuse of this quote specifically. Ironically the kind of papers that Horton suggests are “simply untrue” are the very ones that WDDTY repeatedly references. The unreliability of scientific papers is the very reason we strip out bad data using meta-analyses and systematic reviews – which are the very evidence that WDDTY is careful to ignore.

“That view is shared by the late Professor David Sackett, widely regarded as one of the fathers of clinical epidemiology: “Evidence-based medicine is not restricted to RCTs and meta-analyses,” he wrote 20 years ago. “It involves tracking down the best external evidence with which to answer our clinical questions.””

The best evidence. Not poor evidence cherry picked because it appears to support a position. And when the best evidence is RCTs and meta-analyses, we pay attention to them, rather than ignoring them in favor of poor quality studies and anecdotes. See my earlier link on hierarchy of evidence.

“Who said this? Ironically, no less a figure than that arch-critic of homeopathy, Edzard Ernst.”

(Interestingly in a related piece Ernst suggests that for some problems Acupuncture, Aromatherapy and Hypnosis can do more good than harm. Homeopathy isn’t on the list.)

These quotes are from people whom WDDTY vilifies when they disagree; but the magazine is happy to quote them when their words do (or can be spun to) support their position. Cherry picking. It’s cheap and dishonest and a mark of the gutter press.

 

The truth at last

“Opponents of homeopathy claim [outcome] studies are worthless as they cannot exclude either the placebo effect or the fact that many conditions are ‘self-limiting’—in time, patients get better anyway. The only way to establish whether homeopathy really works, they say, is through randomized controlled trials (RCTs) testing a homeopathic pill against an identical-looking placebo. According to British pressure group Sense About Science, “over 150 clinical trials have failed to show that homeopathy works. Some small-scale studies have yielded positive results, but this is due to poor methodologies or random effects.””

At last, WDDTY speaks the truth (broadly, with some exaggerated turns of phrase thrown in). These 90 words are just about the only content of any real value in the whole article.

 

Conclusion

In this article, WDDTY admits (correctly) that anecdotes don’t constitute good evidence, but spends most of its word count using anecdotes to back up its claims. It claims (falsely) that scientific studies are unreliable, then tries to back up its claims using scientific studies. The studies it does reference are awful. Sometimes those studies even draw the very opposite conclusions to that which WDDTY claims. WDDTY highlights an (alleged) poor evidence base for modern medicine in defense of homeopathy, then admits that an (alleged) poor evidence base for modern medicine does not constitute a defence of homeopathy. It entirely ignores the best available evidence, because that evidence doesn’t suit WDDTY’s argument — indeed, the best available evidence shows WDDTY’s position to be wrong. Finally, WDDTY makes a series of claims that are outright lies, a further series of claims that are unsubstantiated, and a series of suggestions that are childish, hyperbolic and absurd.

WDDTY doesn’t exist to tell the truth. It exists to tell its readers what they want to hear, and pretends to have evidence to back it up. With the deepest irony, I paraphrase their own accusation: “it’s not about evidence, but ideology.”

WDDTY’s staff are very good writers, who know how to cherry pick any apparent evidence that suits them (and omit that which doesn’t), dress it up to appear more reliable and balanced than it is – or even to make claims that it doesn’t really suggest at all – and throw in some incendiary hyperbole to excite its readers. I wish I could say the writers at WDDTY are just incompetent – but the high quality of their writing, and the fact that they admit their arguments are invalid and publish them anyway, suggests to me that the lies that fill their magazine are quite intentional. It doesn’t really matter — either way, it’s a thoroughly unreliable source.

There’s no global media conspiracy: if something in the media is important and true, you should almost always be able to find it cited by a credible source. If your best or only source is What Doctors Don’t Tell You (or Joseph Mercola, or Green Med Info, or Andrew Wakefield, or Natural News, all of whom use similar tactics), it’s not because those people and publications are beacons of truth trailblazing in a world of media corruption and deceit; it’s almost certainly because they’re wrong. And time after time, when their claims are scrutinized, they’re shown to be exactly that: wrong.

For a demonstrably rubbish article like this, WDDTY deserves to be avoided. When they publish vile, dangerous lies about the safety and effectiveness of lifesaving interventions like vaccines and cancer treatments (which they do, using the same cheap tricks that I’ve demonstrated above)*, anyone should be embarrassed and ashamed to have anything to do with them.

 

*Here’s one example. WDDTY, Jan 2013: “Chemotherapy isn’t only useless against cancer–it even encourages the tumour to grow, researchers have discovered… They say that chemotherapy is ‘completely worthless’ and that cancer sufferers would do better by avoiding the drugs altogether.”

The “completely worthless” quote is a fiction; the paper doesn’t recommend avoiding chemotherapy drugs; and when a skeptical reader subsequently contacted the paper’s authors for comment, they said: “The paper says nothing of the sort… Our study has been misquoted and misinterpreted – I believe on purpose…”


Climate change and a flat earth (I can’t believe we’re still dealing with this argument)

I'm not sure where this meme originated. If anyone knows, please tell me so that I can give credit.

Image via Terry Pratchett’s “Discworld.”

When I started this blog, I wanted it to be entirely about science and the fundamental concepts of logic. As such, I decided that topics like religion and politics were off limits. However, anytime that a politician makes a statement about science, they enter my territory and I feel obliged to deal with their comments (by the same token, I only talk about religion when people use it to make claims about science). One of the most common and irritating ways that this occurs is when politicians try to compare modern science (usually climate research) to the idea that the earth is flat and/or we are the center of the universe. This argument is used ad nauseam and I have dealt with it before. Nevertheless, it is once again featuring prominently in our political dialogue, so I feel the need to explain why it is utter nonsense yet again.

I am, of course, referring to the recent comments by Trump advisor Anthony Scaramucci on CNN. When asked about the scientific consensus on climate change, he made several hopelessly ignorant comments like, “There was an overwhelming science that the Earth was flat, and there was an overwhelming science that we were the center of the world.” Statements like that are, however, absurd for numerous reasons.

First, and most obviously, they are blatantly false. Science is an extremely careful, empirical, and systematic process of making observations, forming hypotheses, and empirically testing those hypotheses by using repeatable, controlled* experiments, and that system is an extremely recent one. It has only existed in its modern form for the last few centuries. In contrast, we have known that the earth was round since the ancient Greeks! So there can’t possibly have been a scientific consensus that the earth was flat because science as we know it did not even exist then! Similarly, the idea that we are the center of the universe was discredited long before modern science, and the idea was largely based on religion. Indeed, although Galileo slightly predates modern science, when he demonstrated that the earth was not the center of the universe, he did so by applying the same basic type of systematic observations and reasoning that scientists use today. Thus, although modern science never said that the earth was the center of the universe, scientific reasoning was the thing that discredited that idea.

I want to elaborate on that last point for a second, because I want to be clear that I am not suggesting that there were no scientists before the modern era. There certainly were, but science was not the formalized discipline that it is now. In other words, although there were people who were making observations, testing hypotheses, etc., there was not a standard for the type of reasoning that qualified as science, nor was there the vast body of philosophical thought about the nature of science that we have today. My point is that the “science” that said that the earth is flat and the earth is the center of the universe is not actually the same thing that we know as science today, thus the comparison is invalid (i.e., it is an equivocation fallacy because it uses the word “science” to mean two different things).

Further, so far I have only been talking about the general concept of what science is, but we also have to take into account the tools that are available to scientists. For example, the statistics and mathematical formulas that we use today have only been around for about 100 years (sometimes much less than that). Those mathematical tools are extremely important because they are what allow us to rigorously examine our data and assign confidence values to our results. Similarly, today we can use computers to analyze millions of data points, run massive simulations, and do analyses that were unimaginable to scientists just a few decades ago. Further, our ability to accurately make the measurements that we use in those analyses has also increased astronomically. For example, we now have satellites that can continuously collect very precise data from all over the world. Tools like that give us incredible power for understanding our planet and the universe in general, and they give modern science a major advantage over past scientific endeavors.
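To make that point concrete, here is a minimal sketch (my own illustration, not something from this post or from any of the papers cited below) of the kind of calculation that modern statistical tools make routine: fitting a trend to a series of measurements and attaching a confidence value to it. The temperature anomalies are fabricated numbers used purely for illustration.

```python
# Toy trend analysis: the anomaly values below are made up for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
years = np.arange(1990, 2020)
# Hypothetical annual temperature anomalies (deg C): a 0.02 C/yr trend plus noise.
anomalies = 0.02 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)

fit = stats.linregress(years, anomalies)
ci95 = 1.96 * fit.stderr  # approximate 95% confidence interval on the slope
print(f"Estimated trend: {fit.slope:.3f} C/yr (+/- {ci95:.3f}), p = {fit.pvalue:.2g}")
```

The point is simply that a statement like “the warming trend is X degrees per decade, with this much uncertainty” is the product of formal mathematics, not someone’s impression of the data.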

There is also another problem here, and it is a more fundamental one. If this argument actually worked, then you could use it any time that you wanted, rather than simply for climate change. In other words, if the fact that people used to think that the earth was flat actually gave you carte blanche to reject science, then you would never have to accept any scientific result that you didn’t like. For example, I could say, “scientists say that washing your hands helps prevent the spread of disease, but they were wrong about the earth being flat, so I don’t have to listen to them when they say that we should wash our hands.” That is obviously absurd, but that is exactly the same thing that Scaramucci did. His argument was logically identical, which means that if either of those arguments worked, then both should work, and the fact that the hand washing example is clearly absurd means that the climate change argument must also be absurd. That’s how logic works.

Nevertheless, you might be tempted to think, “but scientists aren’t completely, 100% sure that we are causing climate change, so it is always possible that they are wrong.” Technically, that is true, but here is the important qualifier: science never gives us 100% certainty. It is inherently a skeptical process, and it only tells us what is most likely true given the current evidence, not what is absolutely true (details here). So sure, it is true that I can’t be 100% sure that we are causing climate change, but I also can’t be 100% sure that the earth is round. Indeed, I can’t be 100% sure that you even exist, nor can I be 100% sure that Donald Trump isn’t an alien squid from another universe. However, the fact that I can’t be 100% sure about those things does not mean that I get to have any form of practical doubt about them. It would clearly be ridiculous for me to actually think that Trump is an alien squid, and politicians should not make laws based on that notion. Similarly, the fact that we can’t be 100% sure about climate change does not mean that you get to have any practical doubt about it, nor should we be basing laws on that doubt.

In technical terms, what I have been describing is known as an argument from ignorance fallacy. It can occur in many ways, but one of its forms happens when you say, “we can’t be sure that X is true, therefore X is false” or “we can’t be sure that X is true, therefore it is reasonable to believe that it isn’t true.” This is not logically valid. The fact that we can’t be completely certain of a conclusion does not mean that you can assume that all of the evidence supporting that conclusion is false. This comes back to one of my favorite topics: the burden of proof. The person making a claim is always responsible for providing the evidence to support that claim. In other words, if you want to claim that we aren’t causing climate change, then you must provide actual evidence that we aren’t. You can’t simply assert that we can’t be 100% certain, the models might be wrong, etc. If you want to claim that all of the scientific evidence is wrong, then you have to provide evidence to support that claim, not speculations. Further, although it is true that science never proves anything with 100% certainty, there are some topics that are so well supported by so many lines of evidence that it is almost unthinkable that they could actually be totally wrong, and anthropogenic climate change is one of those topics (others include atomic theory, the germ theory of diseases, etc.).

This brings me to the final topic that I want to discuss here: the actual evidence for climate change. You see, so far I have been talking in pretty general terms about science, but Scaramucci’s statements are particularly absurd and misguided when we focus in on climate change, because there is so much evidence supporting it. As I have explained in more detail elsewhere, we have tested the sun, volcanoes, and other natural drivers of climate change, and they can’t explain the current warming by themselves (Meehl et al. 2004; Hansen et al. 2005; Wild et al. 2007; Lockwood and Frohlich 2007, 2008; Lean and Rind 2008; Foster and Rahmstorf 2011; Imbers et al. 2014). However, including our greenhouse gases in the calculations does explain the warming trend (Stott et al. 2001; Meehl et al. 2004; Allen et al. 2006; Lean and Rind 2008; Imbers et al. 2014). In other words, we know that our emissions are driving the warming, because we have tested the natural drivers of climate change, and none of them (even when combined) can account for the current increase in temperatures. Further, we know that CO2 traps heat and is largely responsible for both our climate and past climate changes (Lorius et al. 1990; Tripati et al. 2009; Shakun et al. 2012), we know that we have greatly increased the CO2 in the atmosphere (Bohm et al. 2002; Ghosh and Brand 2003; Wei et al. 2009), and we have used satellites to directly measure the amount of heat energy that CO2 in our atmosphere is trapping, and (just as expected) it is trapping an increasing amount of heat (Harries et al. 2001; Griggs and Harries 2007). In other words, we have direct evidence that our CO2 is causing the earth to trap more heat (i.e., warm). The evidence is overwhelming, and when you look at the scientific literature, you will find thousands of papers saying that we are causing climate change, and only a tiny handful that disagree. The scientific consensus on this topic is incredible (details here), and the consensus among scientists exists because of the evidence. Scientists do not “believe” in climate change (as Scaramucci erroneously asserts), rather we accept it as fact because of the overwhelming evidence. There is simply no serious debate on this topic among actual scientists.

To illustrate this a different way, a few weeks ago, I attended the annual meeting of the Ecological Society of Australia, and there were lots of talks about the effects that climate change is having and will have on plants and animals, but not one of those talks hedged its comments with statements like, “if climate change is real.” Rather, all of them treated anthropogenic climate change as a scientific fact because that is what it is, and it is absolutely crazy to suggest that we should act as if it isn’t settled just because some non-scientists over 2,000 years ago thought that the earth was flat! Further, the fact that a handful of scientists today disagree doesn’t help that argument either, because there will always be a few who disagree on any topic (this is an appeal to authority fallacy). Having an advanced degree does not guarantee that you know what you are talking about, and you can sadly find a few scientists who will agree with you no matter what crackpot notion you believe (note: before you accuse me of committing an appeal to authority fallacy by citing the consensus, please read this post on the difference between appealing to authority and deferring to experts).

To be clear, there is disagreement about the extent of climate change that we are going to cause. There are lots of variables that have to be considered, and there are admittedly still topics that we are actively studying in order to refine our models. However, although the models may be imprecise about the extent of the warming, it is almost unthinkable that we are wrong about the warming itself. Indeed, the idea that we are causing the climate to change is not based on the predictive models; rather, it is based on research like the papers I presented earlier. Further, our models have actually done a pretty good job of predicting the current warming, so although they aren’t perfect, they probably aren’t that far off (Hansen et al. 2006; Frame and Stone 2012; Rahmstorf et al. 2012; Cowtan et al. 2015; Marotzke and Forster 2015).

In short, the science that said that the earth is flat or that we are the center of the universe was not the same thing as what we know as science today. Further, the past few decades have furnished us with a fantastic array of tools for collecting and analyzing massive amounts of data, and when we apply those tools to the topic of climate change, we get an extremely consistent result: it is real and we are the main cause of it. Because of that evidence, there is no serious debate about this among scientists. Yes, there are a few dissenting voices, but those will always be there. Further, yes, it is always technically possible that we are wrong, but it is also technically possible that Donald Trump is a space squid, and the fact that something is technically possible does not mean that it is rational to think that it is actually true.

*Note: When I said that science uses controlled experiments, I was not referring specifically to randomized controlled trials (RCTs). Although they are a particularly powerful design, they are far from the only one, and they often aren’t applicable in fields outside of medicine. In climate research, for example, we generally can’t do RCTs, but we can still control our experiments and analyses by measuring all of the confounding factors and including those factors in our analyses (a rough sketch of what that looks like is below).
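As a rough illustration of what “including those factors in our analyses” can look like, here is a minimal sketch (my own toy example with entirely fabricated numbers, not a real climate attribution analysis): a multiple regression in which the measured confounders are included as predictors, so the coefficient on the variable of interest is estimated with the confounders statistically held constant.

```python
# Toy example of statistically controlling for measured confounders.
# All values are fabricated; this is not an actual attribution analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
co2 = np.linspace(340, 410, n) + rng.normal(0, 2, n)   # variable of interest
solar = rng.normal(1361, 0.5, n)                        # measured confounder
volcanic = rng.exponential(0.05, n)                     # measured confounder
temp = (0.01 * co2 + 0.05 * (solar - 1361)
        - 0.5 * volcanic + rng.normal(0, 0.1, n))       # simulated response

# Regress temperature on CO2 *and* the confounders, rather than on CO2 alone.
X = sm.add_constant(np.column_stack([co2, solar, volcanic]))
fit = sm.OLS(temp, X).fit()
print(fit.params)    # the CO2 coefficient is estimated with solar and
print(fit.pvalues)   # volcanic activity statistically held constant
```

The design choice here is the same one described in the note: when you can’t randomize, you measure the confounders and build them into the analysis instead of pretending they don’t exist.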


Literature cited

  • Allen et al. 2006. Quantifying anthropogenic influence on recent near-surface temperature change. Surveys in Geophysics 27:491–544.
  • Bohm et al. 2002. Evidence for preindustrial variations in the marine surface water carbonate system from coralline sponges. Geochemistry, Geophysics, Geosystems 3:1–13.
  • Cowtan et al. 2015. Robust comparison of climate models with observations using blended land air and ocean sea surface temperatures. Geophysical Research Letters 42:6526–6534.
  • Foster and Rahmstorf 2011. Global temperature evolution 1979–2010. Environmental Research Letters 7:011002.
  • Frame and Stone 2012. Assessment of the first consensus prediction on climate change. Nature Climate Change 3:357–359.
  • Ghosh and Brand. 2003. Stable isotope ratio mass spectrometry in global climate change research. International Journal of Mass Spectrometry 228:1–33.
  • Griggs and Harries. 2007. Comparison of spectrally resolved outgoing longwave radiation over the tropical Pacific between 1970 and 2003 using IRIS, IMG, and AIRS. Journal of Climate 20:3982–4001.
  • Hansen et al. 2005. Earth’s energy imbalance: confirmation and implications. Science 308:1431.
  • Hansen et al. 2006. Global temperature change. Proceedings of the National Academy of Sciences 103:14288–14293.
  • Harries et al. 2001. Increases in greenhouse forcing inferred from the outgoing longwave radiation spectra of the Earth in 1970 and 1997. Nature 410:355–357.
  • Imbers et al. 2014. Sensitivity of climate change detection and attribution to the characterization of internal climate variability. Journal of Climate 27:3477–3491.
  • Lean and Rind. 2008. How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006. Geophysical Research Letters 35:L18701.
  • Lockwood and Frohlich. 2007. Recent oppositely directed trends in solar climate forcings and the global mean surface air temperature. Proceedings of the Royal Society A 463:2447–2460.
  • Lockwood and Frohlich. 2008. Recent oppositely directed trends in solar climate forcings and the global mean surface air temperature. II. Different reconstructions of the total solar irradiance variation and dependence on response time scale. Proceedings of the Royal Society A 464:1367–1385.
  • Lorius et al. 1990. The ice-core record: climate sensitivity and future greenhouse warming. Nature 347:139–145.
  • Marotzke and Forster 2015. Forcing, feedback and internal variability in global temperature trends. Nature 517:565–570.
  • Meehl et al. 2004. Combinations of natural and anthropogenic forcings in the twentieth-century climate. Journal of Climate 17:3721–3727.
  • Rahmstorf et al. 2012. Comparing climate projections to observations up to 2011. Environmental Research Letters 7:044035.
  • Shakun et al. 2012. Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation. Nature 484:49–54.
  • Stott et al. 2001. Attribution of twentieth century temperature change to natural and anthropogenic causes. Climate Dynamics 17:1–21.
  • Tripati et al. 2009. Coupling CO2 and ice sheet stability over major climate transitions of the last 20 million years. Science 326:1394–1397.
  • Wei et al. 2009. Evidence for ocean acidification in the Great Barrier Reef of Australia. Geochimica et Cosmochimica Acta 73:2332–2346.
  • Wild et al. 2007. Impact of global dimming and brightening on global warming. Geophysical Research Letters

