Wednesday, November 1, 2017

Science vs Conspiracy Theories

Pizzagate. Seth Rich and Wikileaks. The evil (reptilian?) “deep state” plotting a coup against Donald Trump.

Never before have baseless conspiracy theories played such a big role in American politics. At the same time, we're seeing more and more evidence that there really was some sort of “conspiracy” by Russia to influence the 2016 election.

How do we tell the difference between an honest-to-goodness conspiracy and a bogus conspiracy theory? It’s easy to say “just look at the evidence,” but if you’ve tried to argue with a conspiracy theorist before, you know that doesn’t usually work.

People who believe in these crazy theories will come at you with an endless parade of “facts” and “data” that they say prove they are correct.

If you try to point out that these “facts” are made up or wrong, they’ll just say the same thing about the “facts” you get from the “mainstream media.”

The crazy thing is that they’re not (totally) wrong about this! The “mainstream media” does get things wrong, and sometimes it does have a liberal bias. How can we be sure that we’re not falling victim to the same “brainwashing” as the people we’re arguing with?

The best way to separate the bogus conspiracy theories from the real ones isn’t by trying to get better data, but by using the scientific method.

That may sound lame, but when they taught you about the scientific method in school, they left out the weirdest and most important part - in science you have to start by assuming that you are wrong.

The Part of the Scientific Method They Didn’t Teach You in School

When most people talk about the “scientific method” they think of something like this:

1. Ask a question
2. Form a hypothesis
3. Collect data
4. Analyze the data and draw conclusions

That's close. But when real scientists do science, there’s something really important we do between steps 2 and 3.

Let’s say we have a hunch that cell phones cause brain cancer, and we want to use science to figure out if that’s true.

Step 1: We ask the question:

Do cell phones cause brain cancer?

Step 2: We turn our question into a “research hypothesis”:

I hypothesize that cell phones cause brain cancer.

Now, before we can actually get to collecting data, we have to do something weird. We have to take our research hypothesis and flip it around to create what scientists call a “null hypothesis,” which says the exact opposite:

I hypothesize that cell phones don’t cause brain cancer. 

Any time you think that something is going on, the null hypothesis is that nothing is going on.

Now, instead of collecting data to try and prove the research hypothesis, our goal is actually to find data that disproves the null hypothesis. If we’re able to do that, then we can say our original research hypothesis is supported by default.

In other words: if we can disprove the hypothesis that cell phones don’t cause cancer, that means we can be pretty sure that they do.

The null hypothesis is always the starting point for scientific research. You have to start by assuming that whatever you think is true is actually bogus.
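The flip is easier to see with numbers. Here's a minimal Python sketch of how this works for the cell phone question - every figure in it (the baseline cancer rate, the study size, the case count) is made up purely for illustration:

```python
import math

# All numbers here are hypothetical, for illustration only. Suppose the
# background rate of brain cancer is 1 in 1,000, and in a made-up study
# of 10,000 heavy cell phone users we observe 18 cases - more than the
# 10 cases we'd expect by chance.
BASELINE_RATE = 0.001
N_USERS = 10_000
OBSERVED_CASES = 18

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k cases out of n people at rate p."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

# Null hypothesis: cell phones DON'T cause brain cancer, so our users
# get cancer at the baseline rate. The "p-value" asks: in that world
# where nothing is going on, how likely is a result at least as
# extreme as the one we saw?
p_value = 1.0 - sum(
    binom_pmf(k, N_USERS, BASELINE_RATE) for k in range(OBSERVED_CASES)
)

print(f"p-value under the null: {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null - the research hypothesis is supported.")
else:
    print("Can't reject the null - no evidence anything is going on.")
```

Notice that the code never tries to prove that cell phones cause cancer. All it does is attack the null - it asks how hard it is to explain the data in a world where nothing is going on.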

In science “probable cause” isn’t good enough

It may seem backwards and confusing, but there’s a reason we do it this way. Just like in a trial, where the defendant is “innocent until proven guilty,” the null hypothesis is presumed true until disproved, and we need to disprove it “beyond a reasonable doubt.” Just proving that the null hypothesis is “more likely than not” to be false isn’t enough. You have to really destroy it.

One number that gets thrown about in science a lot is “95%” - as in, we need to be 95% sure that the null hypothesis is wrong before we can say anything interesting is going on. But even that number is controversial - some people think we shouldn’t say we’ve found anything unless we’re 99.5%, or even 99.99%, sure.

95% may seem like a high bar, but this is what we scientists actually do - and look at all the great stuff we’ve discovered! Modern breakthroughs in medicine, physics, and even the social sciences all make use of this method.1 We know that this method helps us discover real facts about the world.
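To see what raising or lowering the bar actually does, here's a tiny Python sketch - the p-value in it is a made-up number, not from any real study:

```python
# A hypothetical p-value from a study (made up for illustration):
# the chance of seeing data like ours in a world where the null is true.
p_value = 0.014

# The same evidence can clear the 95% bar but fail stricter ones.
for confidence in (0.95, 0.995, 0.9999):
    alpha = 1 - confidence  # how much "reasonable doubt" we tolerate
    if p_value < alpha:
        verdict = "reject the null - something is going on"
    else:
        verdict = "not convincing enough - the null stands"
    print(f"At the {confidence:.2%} bar: {verdict}")
```

The exact same evidence passes at 95% and fails at 99.5% - which is exactly why where we set the bar is still argued about.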

Back to Conspiracy Theories

What makes conspiracy theorists different from scientists is that they never do this crucial step.

They start from the assumption that their “research hypothesis” (the conspiracy) is already true. And then they go out and try to find “data” that makes them even more certain that it's true.

So here’s a "Seth Rich was murdered to cover up sending DNC emails to Wikileaks" conspiracy theorist pointing out that Rich was with Imran Awan, a DNC staffer currently under investigation for bank fraud, the night of his murder.


If you’re already sure that Rich’s death was set up by the Democratic National Committee, then the “coincidence” of Rich being with a (in retrospect) shady member of the DNC the night he died plays right into the narrative you have already constructed in your head - and might make you even more confident that you’re right.

But if you’re being a good scientist and starting from the assumption that the DNC wasn’t involved in Rich’s death, then this isn’t an argument at all! How much of a “coincidence” is it for a DNC staffer (Seth Rich) to be at a bar with another DNC staffer?  Has this person never heard of "coworkers going out for a drink after work?"

Here's a fun exercise: the next time you spot an "argument" for a conspiracy theory on social media, try to figure out what the "null hypothesis" of the conspiracy is, and think about how convincing the argument would be if, like a real scientist, you made the null, rather than the conspiracy, your starting point.

Bad Arguments


You can see how just changing your “starting point” can turn a bad argument into a good one, or vice versa. This is why it’s usually impossible to use data or evidence to argue with conspiracy theorists. It’s like trying to run a race against someone who insists that they get to start just a few steps away from the finish line.

This isn’t the only thing conspiracy theorists get wrong - they also tend to be victims of confirmation bias, and sometimes their arguments don’t even make logical sense. But their rejection of the scientific method is usually pretty easy to spot. 

In fact, if you’ve ever argued with a conspiracy theorist before, you’ll find that they have trouble even considering the possibility that they might be wrong. It just seems so “obvious” to them that they’re right, that it’s impossible for them to think otherwise. Often this is because, when you've dedicated your life to a conspiracy theory, it can really hurt to admit that you’ve been deluded the whole time.

It’s the method, not the question, which separates science from conspiracy theories

The great thing about the scientific method, is that you can use it on almost anything. There’s nothing wrong with using science to investigate “crazy” conspiracy theories, as long as you follow the right method and start from the right place:

If you think the earth is flat, start by assuming it is round.

If you think Donald Trump colluded with the Russians - start by assuming he didn’t collude with them.

If you think Hillary Clinton personally gave Russia 20% of America's uranium - start by assuming she didn't do that.

If you think NASA faked the moon landing - start by assuming that NASA didn’t fake the moon landing.

All of these questions are legitimate topics for scientific inquiry, as long as you investigate them in a scientific way. Of course, some of them have (a lot!) more evidence than others. Good luck trying to find enough evidence to be 95% sure that the moon landing was faked.

Stuff like this isn't gonna get you there.

Once you've already bought into a conspiracy theory, you lose the ability to think up any other possible explanation for the data you see around you, besides the conspiracy.

You're so sure you are right that you forget about really basic stuff, like the fact that we can fold things.

Being sure we are right can blind us. In contrast, admitting we might be wrong can help us to see the world more clearly. That's why it's such an important part of doing science.


The "real" scientific method is a pain, for sure. But it’s the reason that we can have more confidence in scientific results than in the ravings of some guy on YouTube. Of course, scientists can be wrong too - in fact we’re wrong all the time! But because scientists start by assuming that we’re wrong, it’s easier for us to admit it when we actually are wrong. Being wrong is what science is all about - it’s nothing we should be ashamed of.


1 Ok, this is the long boring footnote where I say that this is all a massive oversimplification. Not all scientists always use a null hypothesis (some scientists reject the idea altogether), and even when we do, we sometimes just do it in a pro forma way, without really considering the deep implications of what we're doing. And this whole "95%" thing is really just the tip of a huge and complicated controversy involving the way we use statistics in scientific research. For example, even though many scientists think that the statistical analysis they are using is telling them that they can be "95% sure they are right," that's not what it's telling them at all, because the kind of statistics that most scientists use today (called "frequentism") can't actually handle the idea of being "X% sure" about anything. There's actually a growing movement in statistics, called "Bayesianism," which does allow us to say that we're "95% sure" about things, but which also doesn't really deal in null hypotheses either, for good but complicated reasons. So it's all a big mess. But the important thing about the scientific method isn't the exact procedure you use, it's what scientists would call your basic "epistemological stance" - the way we view the process of "figuring things out." The key to science is being able to admit that you don't know things, and that doing research can help teach you things you didn't know before. So when a good scientist, whether she's a physicist, chemist, or anthropologist, starts to investigate a new problem, she always starts by at least considering the possibility that what she believes might be wrong. For some scientists that involves formulating a null hypothesis, for others it involves "checking our priors" (that's a Bayesian term), and for others it involves considering the way our economic position, race, gender, or social and cultural identity might impact the way we see the world.
But it's the same basic process, and it's essential for separating good science from the people typing "wake up sheeple!" in youtube comments. 
