Many people seem to have some rather strange preconceptions about how higher education in science works. I often encounter people who insist that graduate school “indoctrinates” students, robs them of creativity, and “brainwashes” them to “blindly accept scientific dogma.” Others are under the delusion that training in the sciences consists of nothing but lectures and reading, and they think that just spending some time on Google is sufficient to learn everything that they would get from actually earning an advanced degree. It probably shouldn’t be surprising that the people making these claims have never received any formal training in science and are, in fact, projecting their own biases onto a system that they know nothing about. Having said that, it’s not particularly surprising that so many people are so hopelessly wrong about how graduate school works, because it is a unique system, unlike most other forms of education. Therefore, I thought it would be a good idea to explain what advanced training in the sciences is actually like, and in so doing, I want to dispel common myths about “indoctrination” as well as the notion that reading articles on Google is equivalent to earning an advanced degree.
Note: There is a lot of variation in graduate programs, and different countries often have different systems (AU and the USA are quite different, for example). So, I will focus on the commonalities and highlight some differences where relevant.
Note: I earned a BSc and MSc at two different universities in the USA, and I am currently about three quarters of the way through a PhD in AU.
If you plan on having a career in science, you will almost certainly need an MSc and, in most cases, a PhD. To earn those degrees, you will first have to earn a BSc, which will take you 3–4 years, depending on where you earn it. Typically, people then go for an MSc, which is usually another 2–3 years, followed by a PhD, which is another 4–6 years (generally speaking). There are exceptions, though. For example, in the US, some schools allow you to skip the MSc and go straight for the PhD, but you generally have to have a fair amount of previous research experience, and those PhD programs generally take a bit longer than PhD programs that have an MSc as a prerequisite. Regardless of the path you take, however, you are probably going to need at least a decade of higher education (often more) before you can get a job as a researcher (also, even after earning a PhD, if you want to go into academia, you have to spend several years doing post-docs, but that is a whole other topic).
I want to pause here for a moment to consider the frequent claims on the internet that someone has done “thousands of hours of research.” For example, anti-vaccers often like to claim that they have spent thousands of hours studying vaccines and, therefore, are just as qualified as scientists and doctors. Let’s do some math on that for a minute. Graduate students generally work a minimum of 60 hours a week, and don’t take many holidays (undergrad is similarly strenuous in the sciences). Thus, completing training in the sciences will usually take a minimum of 60 hours a week, for 50 weeks a year (assuming two weeks of vacation), for 10 years (you don’t get summers off as a graduate student, and your undergraduate summers will often be spent on internships). That’s 30,000 hours of training. Further, even if we want to be absurdly, unrealistically generous and say that someone completes their degrees in eight years, working only 50 hours a week, that is still 20,000 hours! Please keep that in mind the next time that someone claims to know more than scientists just because they spent some time reading Google! To put that another way, even if grad school consisted of nothing but reading (which, as I’ll explain in a minute, it doesn’t), that would still mean that people with advanced degrees are far, far more well-read than someone who reads blogs in their spare time.
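The back-of-the-envelope arithmetic above can be sketched in a few lines (the hours-per-week, weeks-per-year, and years figures are the rough estimates from the text, not measured data):

```python
# Rough totals for the training-hours estimates discussed above.
# All inputs are ballpark figures, not measurements.
hours_per_week = 60   # typical minimum grad-student workload
weeks_per_year = 50   # assumes two weeks of vacation per year
years = 10            # undergrad through PhD, roughly

typical_total = hours_per_week * weeks_per_year * years
print(typical_total)  # 30000

# A deliberately generous scenario: 50 hours/week for only 8 years.
generous_total = 50 * weeks_per_year * 8
print(generous_total)  # 20000
```

Even the deliberately generous scenario still dwarfs any plausible tally of casual reading hours.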
It’s about research, not coursework
Probably the biggest misconception about advanced education in the sciences is the notion that it is mostly coursework. Indeed, I often have friends and family members ask me things like, “when do your courses start for the year?” or “do you have any big tests coming up?” But that’s not actually how grad school works. There are courses, but the emphasis is on research.
To expand on that, earning an undergraduate degree in science does, in fact, require a lot of courses. However, many of those courses involve a laboratory or field component. These courses have traditional lectures, but also give you hands-on training and experience, and that hands-on learning is vital. Reading about how to set up an experiment and actually setting up an experiment are two very different things. Further, if you plan on going to graduate school after earning your bachelor’s, you will probably (but not always) need to gain some real research experience by doing summer research internships, assisting graduate students with their research, doing independent research with a professor, etc. Regardless of how you gain that research experience, the result is the same: you get actual experience with real research and one-on-one mentoring from experienced researchers. Thus, even at the undergraduate level, you should be gaining knowledge from hands-on learning, not just lectures and reading.
At the graduate level, there is a lot of variation depending on where you go. The US, for example, generally does have some coursework requirements, but they are fairly minimal. Where I did my MSc, for example, I took about two courses a semester for my first three semesters, and didn’t take any courses my last semester. Most of my time, however, was spent doing research. That is fairly typical for US universities, and if you are doing a PhD in the US, you can expect to take several courses during your first two years or so (while simultaneously doing research), then do nothing but research for the last several years.
The Australian system differs a bit. It assumes that you have already taken enough courses, so you jump straight into pure research. As a result, Australian degrees are typically on the short end of the spectrum. Also, although there are no required courses, you do often have to complete a certain number of “workshop” hours, which you might fulfill by taking a week-long course on statistics, a workshop on grant-writing, etc.
So, if grad school isn’t focused on coursework, then what do grad students actually do? In short, we do research. To complete a graduate degree, you pick a topic that you want to study, design a series of experiments* to test your hypotheses about that topic, conduct those experiments, analyze the results, and write a formal report of your experiments and results (you have to submit a thesis or dissertation to your school, but if you want a job after grad school, you’ll usually want to publish each chapter of your thesis/dissertation as a paper in a peer-reviewed journal). A typical day for me, for example, consists of either spending all day in the field collecting samples, or spending all day in the lab processing those samples (extracting DNA, running PCRs, etc.), or sitting at my computer analyzing the data from those samples, or writing a paper on the results, or (usually) a hectic, scatterbrained combination of all of the above. I get up each morning, seven days a week, do research for a minimum of 10 hours, watch an episode of Star Trek, Stargate, or Doctor Who, rant on Facebook, go to bed, get up the next morning, and do it all over again. This is what grad school does: it teaches you to be a researcher by throwing you into the deep end and seeing whether you sink or swim.
Here again, I simply cannot overstate the importance of this learning-by-experience approach. Reading about how to design an experiment, how to run statistics, how to control for confounding variables, etc. simply is not the same as actually doing it, which is why I find it utterly ridiculous that people think that reading a few blogs or even books makes them qualified to tell scientists that they are wrong.
There is a lot of reading
Despite what I’ve said so far about grad school being more than just reading, there is still reading, a LOT of reading. However, when I say “reading,” I don’t mean reading blogs or poorly-referenced books by fringe academics. Rather, I mean reading hundreds of peer-reviewed papers, as well as lengthy academic books. As a grad student, you should be reading close to a paper a day. You have to read that much if you want to actually be an expert in your field. Further, new research is coming out all the time, which means that you have to stay up to speed on the most recent publications. As a result, even after earning your degree, you still have to read constantly.
Out of curiosity, when I was writing this post, I checked the number of full papers that I had saved in my desktop reference manager (I use Mendeley, just fyi), and there are over 1,100 papers in there that I have read and taken notes on. Further, those are just the papers that I thought were worth keeping the entire text of. I probably only save about half the papers I read at most, not to mention all the academic books I’ve read.
Further, just to be clear, I am in no way unusual in this regard. I’m not bragging about how well-read I am. Rather, my point is that this is normal for training in the sciences. This is simply what it takes to be an expert in science. It requires constantly reading the original research, and it is utterly absurd to think that watching YouTube and reading blogs is going to give you knowledge that is equivalent to the type of knowledge that is gained through that level of dedication and study.
It’s a collaborative learning environment
So far, I have stressed the importance of reading massive amounts of peer-reviewed literature and the importance of actually doing research, but there is another key aspect of graduate training: learning from your advisers and peers.
Graduate school in the sciences is, in many ways, an apprenticeship. When you join a program, you join the lab of a senior academic who serves as your primary supervisor. Their job is to train you, but also to keep you on track. Good advisers, in my opinion, should give you a lot of freedom to explore ideas and go down wrong roads, but they should also rein you back in when you are too far off track. I’ve personally been really lucky with advisers, because I’ve had great ones at every level who I’ve learned a tremendous amount from, and when I say “learned from,” I don’t want you to picture a situation where I go into their offices and they lay down the law and tell me exactly what to think and do, because that is way off from how the relationship actually works. Rather, I go to them with an idea I’ve come up with or a problem that I am running into, and we sit down and debate and discuss what to do with that idea or problem. There is a wonderful back-and-forth and an exchange of ideas and knowledge that helps me to not simply solve the current problem, but to think more clearly about how to solve future problems.
Here again, this is something that you simply don’t get from reading the internet. The ability to, on a regular basis, sit down with an expert with years of experience and debate and discuss ideas is invaluable. That type of exchange teaches you to think critically, and critical thinking is a skill that has to be honed and practiced.
Further, as a graduate student, you shouldn’t just be learning from your primary supervisor. You should also have secondary supervisors, collaborators, etc., all of whom have different knowledge bases and skill sets that you can learn from. Additionally, fellow graduate students are great for bouncing ideas off of and getting help from. Even within a lab group, everyone has their own areas of expertise, and in a good group, that expertise gets shared. At some point, on nearly any given day, I end up going to another member of my lab to run something past them, see if they have any experience with a particular analysis method, etc., and, conversely, other students come to see me when their research intersects with something that I’m knowledgeable about. This constant exchange of ideas and information is incredible; it’s one of my favorite things about academia. I constantly get to benefit from the hard-earned knowledge of those around me. Further, even when I am the one sharing knowledge, I find that I benefit from the exchange. At some point, you’ve probably heard the sentiment that if you want to see if you really understand something, you should try explaining it to someone else. I think that there is a lot of truth to that, and on many occasions the process of explaining something to someone has made me think of something I hadn’t considered before or realize that there were gaps in my knowledge that needed to be filled.
I know that I sound like a broken record by now, but once again, this type of knowledge-sharing environment is something that is hard to come by on the internet. There are plenty of echo chambers to convince you that you’re correct about what you already think is true, but collaborative learning environments like what you experience in grad school are few and far between.
It’s not an indoctrination
At this point, I want to directly dispel several myths. The first is the bizarre notion that students are indoctrinated to blindly follow the accepted wisdom of their fields, rather than thinking for themselves. This is exactly the opposite of reality. Professors constantly encourage students to think outside of the box, ask questions, and challenge accepted notions. For example, most graduate programs involve some form of seminar, journal club, etc. where a professor (or sometimes a student) picks a paper for the group to read, then everyone sits around debating the paper, trying to shoot holes in it, bringing other papers into the discussion, etc. It is an exercise that is specifically designed to make students think critically and question papers rather than blindly accept them. Indeed, at all three of the universities I’ve attended, I’ve had professors explicitly instruct students to read papers critically, rather than assuming that their results are correct. There is one seminar that particularly sticks out in my mind, where the professor was really excited by the paper he wanted us to read, but by the end of the seminar the entire group, professor included, agreed that there were serious problems with the paper. Rather than simply giving us a paper and saying, “here it is, believe it,” he took us through the process of assessing the paper and thinking critically about it.
Further, the system is designed not simply to teach you to be critical, but also to teach you how to be critical. By this process of discussing and debating papers with your peers, you learn how to assess papers, how to look for weak points in their methodologies, how to place them into the broader context of a whole body of literature, etc. This is something else that you simply do not get from merely reading.
To put this another way, science is all about asking questions. That’s fundamentally what science is: a systematic process for asking and answering questions. Therefore, it really shouldn’t be surprising that graduate programs place an emphasis on teaching students to ask questions and think critically about those questions. If you’re not willing to ask questions, then you have no business being in science.
Scientists are creative
There is a common stereotype that scientists aren’t creative, and I often hear people claim that grad school squashes creativity. That is, of course, utter nonsense, and, honestly, it’s a pretty insulting assertion. Scientists are extremely creative; we have to be. Things go wrong constantly in science. Experiments never go the way that you think they will, and you have to come up with solutions to those problems, which generally requires creativity. I once heard someone describe lab work as, “a never-ending series of experiments to figure out why the last experiment failed.” That’s a pretty fair assessment, and if you can’t be creative and think outside of the box, you’re never going to find the solution to the problem you are trying to solve.
Let me just give a few examples of the types of things my peers come up with. One of my friends needed a way to survey arboreal lizards that lived under the bark of trees, but he didn’t want to peel the bark, because that was bad for the trees. So, he came up with the idea of taking a foam puzzle-piece mat (like the ones you put together on the floor of a kid’s play area), wrapping it around a tree, and holding it in place with two bungee cords. It’s simple, cheap, innovative, and works great. It’s a creative solution to a problem. Another one of my friends needed to collect musk samples from snakes (most snakes excrete a musk from in or around their cloaca), and he figured out that the easiest way to do it was just to put a condom over the snakes’ tails. Again, it’s a brilliant solution, and he had to think outside of the box to come up with it. Similarly, last year, someone else in my field figured out that you could use a vibrator to tell the sex of turtles. I may not have personally done anything that interesting, but I’ve still had to design and build all sorts of crazy contraptions for field work, and I’ve spent the last few months working on a novel method for removing contamination in laboratory reagents. This type of creative problem solving is the norm for scientists. Actually read the scientific literature, and you’ll be blown away by the creative solutions that the men and women in science come up with.
Indeed, earning a PhD is largely a test of whether you can creatively problem solve, because problems are going to arise constantly, and you are going to have to come up with creative solutions for them. So this notion that scientists are rigid and can’t think outside of the box is utter nonsense. Further, it should be blatantly obvious that it is nonsense, because if it were true, science would never have progressed. How could science possibly move forward if scientists didn’t question the accepted wisdom and come up with novel, creative solutions?
Finally, it is worth explicitly stating that being scientific and being artistic are not mutually exclusive. Plenty of scientists are also brilliant musicians, painters, etc., and they create things that are aesthetically pleasing as well as things that are intellectually pleasing. For that matter, “data art” is becoming a big and wonderful thing, where scientists make graphs and figures that are beautiful to look at as well as informative.
This post has become longer than I intended, so I’ll be brief here, but I do want to point out that there are lots of other tasks imposed on graduate students. For example, you usually have some form of teaching duties, you’re expected to write grants to fund your research, you have to prepare presentations and give them at conferences, etc. All of this is quite stressful, and between all of these demands and the 60+ hour work-week, depression and mental illness tend to be quite common among graduate students. There are certainly things about the system that need to change to address that issue, but that is a topic for another post.
The point that I’m trying to drive home here is that earning an advanced degree in the sciences is far, far more than simply doing a lot of reading. A tremendous amount of reading is involved, but it’s only part of a much bigger picture. Similarly, coursework is fairly unimportant for graduate school in the sciences, and some programs have no coursework requirements at all. Instead, graduate school focuses on research. As a graduate student, you are a researcher, and you will spend your days designing and conducting experiments, analyzing results, writing papers, etc. That type of hands-on, experience-based learning simply can’t be replaced by a few hours on Google. Further, as part of your training, you get to work with and learn from experts in your field as well as your fellow students. You get to be immersed in a constant exchange of ideas and knowledge. Additionally, this process does not brainwash you or squelch your creativity; quite the opposite. Creativity, the ability to ask questions, and critical thinking are vital for being a successful scientist, and graduate schools foster those talents rather than suppressing them.
When you add all of this up, it should be blatantly obvious that reading blogs and watching YouTube videos does not put you on par with people with that level of training and experience. That doesn’t seem like something I should have to say, but apparently, I do. By way of analogy, imagine that someone spends years being trained on how to be a mechanic: they read hundreds of books and manuals on mechanics, work with experts, and spend years actually working on cars. Then, imagine that someone who has never picked up a wrench reads some blogs and maybe a book or two, then has the audacity to not only say that their knowledge is equivalent to the expert’s, but also that the expert is actually fundamentally wrong about many basic aspects of mechanics. It’s an insane scenario, yet it is exactly what people do all the time with science, and science is way more complicated than car mechanics.
Having said all of that, I’m not trying to discourage laypeople from studying science. By all means, study science. Learn as much as you can. Our universe is amazing, and science is a window into its beauty and majesty. So please, read, study, and learn, but when you do that, first make sure that you are using good sources, and second, have a healthy respect for the amount of work, knowledge, and expertise that it took to make those discoveries that you are reading about. If you think that you found something that scientists are wrong about, stop and think about the amount of training required to become a scientist and the amount of work that goes into research, then ask yourself how likely it really is that you, as a non-expert, found something that all of the experts missed. You should be very, very cautious before concluding that they are wrong, just as someone who has never worked on a car should be very, very cautious before deciding that virtually every professional mechanic is wrong.
*Note: I use the term “experiment” pretty broadly to simply describe a scientific test of an idea. Thus, it may not be the traditional randomized controlled design that people think of. Rather, it could be collecting tissue samples to look at population genetics, testing a hypothesis about evolution by seeing whether fossils match your predictions, etc.
Please read these posts before you accuse me of an appeal to authority fallacy:
- The Rules of Logic Part 6: Appealing to Authority vs. Deferring to Experts
- Scientists aren’t stupid, and science deniers are arrogant