In 2020, as the Covid-19 pandemic rampaged across the globe, the World Health Organization declared that we had plunged into a second, simultaneous catastrophe: an infodemic. This global crisis was characterized by the rapid spread of false information, or misinformation, mostly in digital spaces. The fear was that such inaccuracies would leave the public unmoored, adrift in a sea of untruth. Eventually, this mass disorientation would cause people to harm themselves and one another.

In an effort to combat the rising tide of misinformation, certain agencies, including the U.S. Department of Health and Human Services and the U.K. Parliament’s Culture, Media and Sport Committee, have poured resources into quantifying its spread and impact online. Some of the resulting reports have spawned legislation aimed at limiting online fake news.

But some psychologists and sociologists aren’t convinced that misinformation is as powerful as all that—or that it is a substantially different problem now than it was in the past. In fact, they think that we may be prematurely whipping ourselves into a misinformation moral panic.

“It seems to me that we start from the conclusion that there is a problem,” said Christos Bechlivanidis, a psychologist and causation researcher at University College London. “But I think we need to think about this a little bit closer before panicking.”

Studying misinformation can be extremely slippery. Part of the reason is semantic: Even the scientific community has not reached consensus on what constitutes misinformation.

“It’s such a weak concept,” said cognitive psychologist Magda Osman at the University of Cambridge. Misinformation is most commonly defined as anything that is factually inaccurate, but not intended to deceive: in other words, people being wrong. However, it is often talked about in the same breath as disinformation—inaccurate information spread maliciously—and propaganda—information imbued with biased rhetoric designed to sway people politically. Some lump misinformation under the same umbrella as disinformation and other forms of intentionally misleading material (though for her part, Osman draws a clear distinction between misinformation and propaganda, which is both better defined and much more clearly harmful). But this is where things start to get dicey: Even under its common definition, practically anything could qualify as misinformation.

Take, for example, a weather forecast that claims a particular day will have a high of 55 degrees Fahrenheit. If that day comes and temperatures rise to 57 degrees, does the forecast qualify as misinformation? What about a newspaper story that inaccurately reports the color of someone’s shirt? Or a scientific hypothesis that was once widely accepted but is later updated with newer, better data—a cycle that played out in real time throughout the Covid-19 pandemic? The trouble is, research that seeks to quantify or test susceptibility to misinformation will often include relatively innocuous inaccuracies alongside things like dangerous conspiracy theories.

Misinformation—by any definition—has been around for a long time. Ever since the first humans developed language, we’ve been navigating an information landscape pitted with lies, tall tales, myths, pseudoscience, half-truths, and plain old inaccuracies. Medieval European bestiaries, for instance, described creatures like bears and weasels alongside unicorns and manticores. Anti-vaccine groups have existed for over 200 years, well before the internet. And in the age of yellow journalism around the turn of the 20th century, many reporters made up stories out of whole cloth.

“I don’t like this whole talk of ‘we’re living in a post-truth world,’ as if we ever lived in a truth world,” said Catarina Dutilh Novaes, a researcher who studies the history and philosophy of logic at the Vrije Universiteit Amsterdam.

Standards for journalism and books have, on the whole, improved since the yellow journalism days. But casual conversation isn’t held to the same rigorous standards—you’re not likely to pull out a reference book and start fact-checking your grandma at the dinner table. Today, a lot of this type of interpersonal discussion has moved online. Simply quantifying the amount of misinformation in a given online space, then, is virtually impossible, because “everything that we’re saying is inaccurate,” Osman said. And proving that wrong information has a direct impact on a person’s behavior is muddier still.

Most of the rationale for quantifying misinformation and determining who is susceptible to it stems from the assumption that consuming it will alter people’s beliefs and cause them to behave irrationally. The quintessential example is misinformation surrounding Covid-19, which was blamed for many people’s subsequent hesitancy in getting a vaccine to protect against the virus. A wealth of studies demonstrates a correlation between consuming misinformation and vaccine hesitancy. But proving a causal link is deceptively tricky; for example, evidence suggests many vaccine-hesitant folks were skeptical of the science well before the Covid-19 pandemic began. They may have sought out misinformation to justify their pre-existing bias—but that doesn’t mean consuming incorrect information caused the distrust. Other studies suggest that factors like in-group solidarity and national identity are stronger predictors of whether or not someone will get vaccinated against Covid.

In fact, a recent study showed that simply exposing people to Covid misinformation had little to no impact on their decision to get vaccinated and, in certain cases, may have even made them slightly more likely to get a Covid vaccine.

Attempts to pinpoint a particular group that is most likely to buy into misinformation—be it elders, young people, poor folks, the less educated, or some other identity—often have patronizing overtones as well. We’re all susceptible to believing things that aren’t true; it just depends on how they’re presented.

Osman compares the panic to that over violent video games in the last few decades. Despite a slew of headlines and politicians proclaiming that games like Grand Theft Auto and Call of Duty were making teenagers more aggressive, research hasn’t really demonstrated that one causes the other.

Osman argues that our collective concern over misinformation is, in some ways, a moral panic about the internet—which would place it in a long history of similar worries about every new way in which information gets shared. Virtually every form of communication technology has been met with its very own public outcry. In mid-15th century Europe, people destroyed dozens of print shops in a wave of anti-Gutenberg sentiment. The rise of radio in the 1930s led some American parents to fret about its corrupting influence on their children. Even the ancient Greek philosopher Socrates wasn’t immune to the moral panic of his day. “He didn’t like writing at all. It was suspicious,” said Dutilh Novaes.

At a certain level, these fears are perfectly reasonable. Until we know how a new technology will change our lives, it makes sense to proceed with caution. And lately, we’ve barely had time to do that. The last three decades have seen extremely rapid shifts in information-sharing technologies—from cell phones to email to social media—culminating in the smartphone, which allows us to access them all in one sleek, portable package. It’s overwhelming and, in many cases, scary.

“I think what people are still coming to grips with is realizing that actually there was a lot of optimism in the beginning of the internet,” Dutilh Novaes said. We expected that more freely available information would lead to more transparency and less confusion. Instead, we’ve been disappointed to discover that even in an information golden age, people can still be wrong.

Of course, none of this means that the spread of misinformation online is always benign, or that we shouldn’t attempt to regulate it in any way. It’s just that if we’re going to respond with sweeping new legislation—or let tech moguls impose their own limitations—we need to be sure of what the problem actually is, Osman said.

The silver lining is that fake news, false beliefs, and moral panics are not new phenomena—society has thousands of years of experience with them, for better or worse. “I would argue that we are pretty capable of dealing with lies,” Bechlivanidis said.


Joanna Thompson is a science journalist, insect enthusiast, and Oxford comma appreciator based in New York. In her spare time, she tries to run fast.

This article was originally published October 26, 2023, on Undark. Read the original article.