Why Smart People Are So Good at Believing The Wrong Things
Why strong reasoning skills don’t lead to neutral conclusions.
You’re about to hit the share button. The headline confirms exactly what you’ve been saying for months. Stop. Your bullshit detector just went MIA. Not because you’re gullible, but because the content feels right. And that feeling? That’s what tucked your skepticism in for a nap.
Researchers asked 34,000 people across 16 countries how important it is to only share accurate news. 79% said “very” or “extremely” important. Then the researchers watched what they actually shared. 77% of them proceeded to share at least one false story in a subsequent test.
This isn’t a story about how naive other people are. It’s happening to you too, and probably more often than you realize.
Your Skepticism Works. It’s Just Badly Calibrated
In 1979, researchers at Stanford ran an experiment that should probably be taught in every high school in America. They gathered 48 people with strong opinions about capital punishment—half in support, half in opposition—and gave everyone two studies to read. One study suggested the death penalty deterred murder. The other suggested it didn’t.
Here’s the thing: both studies were made up. The researchers pulled the data out of thin air. They were careful to make both equally rigorous and equally flawed—same quality of evidence, same methodological problems, just opposite conclusions.
Both groups rated the study that confirmed their existing view as significantly better conducted. After reading both studies, both sides walked away more convinced they’d been right all along.
This is called motivated reasoning. Your brain selects which cognitive tools to deploy based on whether you want something to be true. When a headline pisses you off, the questions come fast: Who funded this? What’s the sample size? Is this even a real source?
But when a headline confirms something you already believe? Nothing. Crickets. Your brain has left the building.
How Better Reasoning Leads to Worse Conclusions
You might think education fixes this. It doesn’t.
Research shows people fit their perceptions of risk to their values, not to evidence. If your community prizes individual liberty, you’re more likely to dismiss environmental risks because accepting them might mean regulations. If your community values collective action, those same risks feel obvious and urgent, because they reinforce a shared moral logic.
And here’s where it gets really uncomfortable: a 2013 study found that political polarization on contested issues was greatest among people who scored highest on tests of cognitive reflection—tests that measure the ability to override fast, intuitive responses with slower, more deliberate thinking. The better you are at reasoning, the better you are at justifying what you already believe. High intelligence doesn’t cure bias. It gives it a podcast, a white paper, and a confident tone.
Meanwhile, false news is roughly 70% more likely to be shared than accurate news. Not because people are stupid, but because surprising claims feel more important to share.
The information you feel most confident about is often the information you’ve scrutinized least.
Six Questions for Headlines That Feel Right
The good news: research shows that simple interventions work. Prompting people to think about accuracy before they engage with content can reduce false sharing by around 10%.
At scale—hundreds of millions of users—that’s tens of millions of shares that don’t happen. And you don’t need new skills. You already know how to evaluate evidence. You just need to point that ability at content you agree with.
These six questions should help.
1. Would I believe this if I supported the other side?
Take any claim you’re about to share. Now imagine it’s being used to support the position you most despise. If your immediate reaction is “that’s cherry-picked” or “that source is biased,” you just identified motivated reasoning in real time.
Remember the Stanford study? Both groups evaluated identical evidence and came to opposite conclusions about the quality of the studies based entirely on whether the results matched what they already believed.
That doesn’t mean both sides of every issue are equally valid. We’re testing whether you’re evaluating the claim or your allegiance to it. If changing the tribal affiliation of the source would change your assessment of its credibility, you’re participating in team sports with footnotes. It’s epistemology-flavored tribalism. All the satisfaction of being right, none of the burden of actually checking.
2. Am I reacting to the claim or how it makes me feel?
Emotional responses happen first. You feel vindication or righteousness, and then your brain goes looking for reasons why that feeling is justified. This happens faster than conscious thought.
There’s another problem: familiarity increases perceived accuracy for both true and false headlines. The more you’ve encountered a claim, the more true it feels. This is how propaganda works. It’s also the blueprint of most social media algorithms.
If your immediate reaction is “finally, someone said it”—be suspicious. That recognition isn’t evidence the claim is true. It’s evidence the claim is compatible with things you already believe, which is precisely when you stop asking questions.
3. Can I explain what would prove this wrong?
Think of a political belief you hold strongly. Now complete this sentence: “I would change my mind if…” Take your time. This is surprisingly difficult.
Can’t do it? Or can you only finish it with something vague like “if there were compelling evidence”?
If you can’t articulate a specific, observable piece of evidence that would change your mind, you’re not holding a falsifiable belief. You’re holding an identity marker. Actual reasoning includes knowing what kind of evidence could prove you wrong.
4. Have I spent more than 10 seconds evaluating this?
About half of false sharing happens because of inattention, not ideology. People aren’t carefully weighing evidence—they’re scrolling, reacting, and sharing. The platform design of social media buries questions about accuracy under engagement metrics.
This is why accuracy prompts work. They don’t teach people anything new. They just shift attention from “will my followers like this?” to “is this actually true?”
Ten seconds is enough time to google the source. To check whether that quote is real. To notice you’re looking at a screenshot of a screenshot of a tweet.
5. Who benefits if I share this?
False news spreads faster than true news partly because people share surprising information to look informed. Being “in the know” has social value completely independent of accuracy. The currency is “seeming informed,” not “being correct.”
The more outrageous the claim, the more share-worthy it becomes. The more perfectly it confirms your bias, the more useful it is for signaling your values.
Research finds that veracity has surprisingly little effect on sharing intentions, despite having a huge effect on judgments of accuracy. You can think something is probably false and still share it because it’s socially useful.
6. Am I confusing “this sounds true” with “this is true”?
When information aligns with your preferences, you subject it to less scrutiny. When it contradicts them, you hold it to impossible standards. And the same evidence that might move you toward an unwelcome conclusion will instead strengthen your original belief if that belief is important to your identity.
If you’re absolutely certain about something you learned from a headline—if even considering doubt feels like betrayal—that’s the moment to be most suspicious. Not of the claim necessarily. Of your own certainty.
The more you wish something were true, the higher your standard of evidence should be.
The Part Where This Still Goes Wrong
Let’s be clear: these questions have limits. A framework that’s been circulating in journalism schools for decades isn’t a guarantee of accuracy.
These interventions are less effective when identity protection is the primary motivation. Even knowing about motivated reasoning doesn’t make you immune to it. You can read this entire essay, nod along, and still share the next headline that confirms your beliefs. In that moment it never feels like bias. It feels like spotting something that’s plainly, undeniably true.
The goal isn’t perfection; it’s catching yourself slightly more often. Eating the elephant one bite at a time.
Why the Share Button Rewards the Wrong Instincts
Here’s what’s actually happening when you share something: you’re not making a statement about truth, you’re making a statement about loyalty.
Research shows a stark disconnect between what people judge to be accurate and what they choose to share. You can know something is questionable and share it anyway because it signals the right values. The content you amplify says less about your relationship with truth and more about what you’re protecting. Studies across 16 countries, remember, found the same pattern of people failing to consider accuracy. Not because they can’t, but because platform design makes other things more salient—likes, comments, shares.
The intervention is simple in theory and excruciating in practice: redirect the skepticism you already have. You know how to evaluate sources. You know how to spot logical fallacies. You do all of this constantly, relentlessly, to claims you dislike.
Now do it to the ones you like, too.
Especially to the ones you like.
The headlines that feel most right—the ones that perfectly capture what you’ve been saying—those are the ones that need the most scrutiny. Not because they’re more likely to be false. But because that feeling of rightness is your brain saying “this protects something important.” And when protection becomes the goal, truth-seeking becomes optional.
Before you share something that feels like vindication—pause. Ten seconds. Six questions.
The most dangerous information, after all, isn’t what you disagree with. It’s what you never thought to question.