Academia has failed, not because people suddenly grew stupider (no, that would be too simple, too comforting), but because of peer review and that solemn, almost liturgical phrase, “evidence-based” science.
And yet… did it truly “fail”?
I suspect it did not fail at all.
It succeeded, succeeded at the very thing it was designed to do, though no one will say it plainly.
It learned, with exquisite discipline, to protect established positions from challenge.
It learned to protect consensus instead of truth.
Start with peer review.
People treat it like absolute truth, as if truth could be stamped and filed and made respectable.
But it isn’t truth.
It is a consensus filter with a prestige weighting function, and the weight is not distributed as justice would distribute it, but as fear does, fear of error, fear of embarrassment, fear of falling out of favor with the invisible tribunal.
The reviewers are often the very ones who built their careers on the current model.
How could it be otherwise?
And so anything that threatens that model doesn’t get “debunked” (that would require courage, that would require real confrontation); it gets methodology’d to death.
They do not say, “You are wrong.” They say, “Your sample, your controls, your assumptions…” and with that a living idea is slowly smothered under a death pillow of procedure.
It’s not conspiracy.
It’s the incentive.
You can almost hear it humming beneath the fancy sentences.
And the replication problem isn’t a weird failure mode either.
It is simply what happens when incentives reward publication and prestige instead of verification.
When the rewards go to flashy new papers and status, and not to checking whether the results actually hold up, bad findings survive, good challenges get filtered out, and the “approved story” stays protected.
The machine is just doing its job.
And what is most disturbing is that it does its job with a clean conscience.
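That selection dynamic can be made concrete with a toy simulation (every number below is illustrative, not empirical): if journals only print "significant" results from underpowered studies, the published record systematically overstates the true effect, and no individual has to behave dishonestly for it to happen.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.1   # the real effect is small
NOISE_SD = 1.0
N_PER_STUDY = 20    # underpowered studies
N_STUDIES = 2000

published = []
for _ in range(N_STUDIES):
    sample = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_PER_STUDY)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / N_PER_STUDY ** 0.5
    # crude publication filter: only "significant" results (|z| > 1.96) get printed
    if abs(mean / se) > 1.96:
        published.append(mean)

print(f"true effect:               {TRUE_EFFECT}")
print(f"mean of published effects: {statistics.mean(published):.2f}")
```

The studies that clear the significance bar are, by construction, the ones whose noise happened to point the right way, so the surviving estimates run several times larger than the truth. Nobody cheated; the filter did all the work.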
Then we slap words like “evidence-based” on top, as though naming the thing purifies it.
It doesn’t; it merely hides it.
Because the weak link is almost always what counts as evidence in the first place.
Even the word choice “evidence” tells you a lot. Evidence! As if it were a single, hard object you could place on the table and say, “There, look!”
But it matters enormously how often the "evidence" has actually been observed, in what kind of sample, and under what constraints; and beyond that, there is the terrible human fact that scientists are often funded by special interests and have every incentive to protect themselves.
They have not necessarily become wicked; they have become cautious. And caution, when institutionalized, becomes blindness.
They've lost the ability to examine "evidence" critically and instead think in terms of consensus.
They do not let the data modify the theory.
They let the theory decide what the data is allowed to mean.
And because of this they are repeatedly wrong, and those errors can take years to self-correct, years in which the wrong idea sits on a throne, wearing the robe of legitimacy, while anyone who doubts it is treated as a nuisance, a heretic, or worse, an uneducated charlatan who must be silenced for the public good.
This pattern shows up not only in first-order domain sciences like physics, chemistry, and biology, but also in second-order inferential sciences like econometrics, statistics, and data science.
The epistemic difference between the orders is significant, and the tragedy is that we pretend it isn’t.
In physics, chemistry, and biology, the bottleneck is usually experimental ingenuity, theory, and money. Reality pushes back, and the constraints are hard. If your model is wrong, the world eventually humiliates you. The humiliation is clean. It is almost merciful. The planet does not care about your résumé.
But in econometrics, statistics, and data science, the bottleneck is usually identification and assumptions, plus incentives. You can’t randomize. The “laws” change constantly. The results get shaped by funding, career risk, and what your bosses are willing to see or accept. The degrees of freedom are enormous, so you can often “prove” what your incentives request.
In quant terms, the posterior is rarely just “data-driven.” It is incentive-conditioned. And this is where the soul of inquiry becomes most vulnerable, because one can be “right” on paper while being wrong in the world, and be massively rewarded for it.
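A minimal sketch of that incentive-conditioned posterior (the coin setup and the filter are hypothetical, purely for illustration): two observers apply the exact same Bayes rule, but one only ever sees the evidence the incentive filter lets through.

```python
import random

random.seed(1)

# Two hypotheses about a coin: fair vs. biased toward heads.
P_HEADS = {"fair": 0.5, "biased": 0.7}
prior = {"fair": 0.5, "biased": 0.5}

def update(posterior, flip):
    # Standard Bayes update on a single observed flip.
    post = {h: p * (P_HEADS[h] if flip == "H" else 1 - P_HEADS[h])
            for h, p in posterior.items()}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# The world: the coin is actually fair.
flips = ["H" if random.random() < 0.5 else "T" for _ in range(200)]

honest = dict(prior)
filtered = dict(prior)
for f in flips:
    honest = update(honest, f)
    if f == "H":  # incentive filter: only the "exciting" heads get reported
        filtered = update(filtered, f)

print(f"honest posterior P(biased):   {honest['biased']:.3f}")
print(f"filtered posterior P(biased): {filtered['biased']:.3f}")
```

Same rule, same world, different conclusions: the honest observer converges on "fair," while the observer downstream of the filter becomes nearly certain the coin is biased. The math is impeccable in both cases; only the pipeline differs.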
That difference gives you a rule of thumb.
Use hard-anchored domains (physics, engineering, clear RCTs) as strong priors. Treat soft-anchored domains (economics, public health, nutrition, policy) as noisy hints: collections of hypotheses, and maps of what is socially safe to say, not just what is true.
Then run the incentive overlay, because you must, if you are honest.
- Who loses money or status if this finding is true?
- Who gets money or status if this finding is true?
- Does it contradict mechanistic understanding, or just last decade’s consensus?
Once you see bias, groupthink, and special interests as structural features of the knowledge pipeline (not just individual morality), you stop treating “the evidence” as one category. You start weighting it by domain complexity + feedback speed + incentive distortion. You begin, at last, to think in the way a sober person thinks, without romance, without the need to be comforted by credentials.
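That weighting can be sketched as a toy heuristic (the factors, the functional form, and the scores below are all assumptions for illustration, not a calibrated model):

```python
def evidence_weight(feedback_speed, incentive_distortion, domain_complexity):
    """
    Toy heuristic: trust evidence more when reality pushes back fast,
    and less when incentives or domain complexity give researchers
    room to steer results. Each input is a 0..1 score; the output is
    a 0..1 trust multiplier.
    """
    for v in (feedback_speed, incentive_distortion, domain_complexity):
        assert 0.0 <= v <= 1.0
    return feedback_speed * (1 - incentive_distortion) * (1 - domain_complexity)

# Hard-anchored domain: fast feedback, low distortion, low complexity.
physics = evidence_weight(0.9, 0.2, 0.3)
# Soft-anchored domain: slow feedback, high distortion, high complexity.
nutrition = evidence_weight(0.2, 0.7, 0.8)

print(f"physics finding weight:   {physics:.3f}")    # ~0.504
print(f"nutrition finding weight: {nutrition:.3f}")  # ~0.012
```

The exact numbers are beside the point; the point is that a finding's trustworthiness is a function of the pipeline it came through, not a property of the word "evidence."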
The contagion of unanimity slows progress even in the hard sciences, and today it spreads like a mind virus with a DOI.
In the human sciences, groupthink and institutional fear can define what even counts as evidence in the first place. In one realm, the herd delays discovery; in the other, the herd decides what is permitted to be discovered.
And it gets worse.
The pipeline is layered.
Evidence is produced upstream in the first-order world, examined through incentives, then sent downstream to second-order systems where it gets “refined” by assumptions, models, and publication incentives. And refined, what a word! As though distortion were purification. As though the more hands touch the thing, the cleaner it becomes.
So when you hear someone say they “follow the science,” what they often mean is they’re just waiting to see what the herd decides it’s safe to repeat. They follow not the world, but the permission structure around it.
That’s why the “show me peer-reviewed proof” crowd is so predictable. They demand “evidence” as a gotcha, but if the analysis is contaminated, as it often is, the PDF is just a permission slip from the masters. It is a certificate that says, “This belief is allowed.”
They do not ask for truth, they ask for permission. They do not want to see, they want to be absolved of seeing. They hold out the word “evidence” the way a frightened man holds out his papers to a policeman, trembling, not because he loves the law, but because he fears being left alone with his own mind.
Yes, controlled studies matter.
Good RCTs matter.
Blinding matters.
Statistical power matters.
Mechanism matters.
But tell me, what kind of thinking is it when a man sees his experiment work on a thousand real humans and still refuses to believe it until a distant committee grants him permission to do so?
What is this need for a peer-reviewed blessing before you are allowed to trust your own eyes?
I don't need a multi-year, multi-million-dollar study to validate what already happened. Waiting ten years for a sanctioned verdict is not scientific thinking. It is outsourced courage. It is the moral abdication of the intellect.
Real progress has always looked like small, messy tests, uncomfortable signals in the data, sharp inferences from imperfect samples, and then aggressive model updates. It has always been a little indecent, a little humiliating, because it forces you to admit you were wrong in public.
But we flipped it.
Now consensus gets treated like ground truth, and real-world results get treated like “noise” until the right journal prints a PDF, until the priests of the approved have spoken, and only then are you permitted to believe what you already saw.
But the main problem with worshipping “the evidence” is that “evidence” isn’t a single, clean object. It’s a socially filtered artifact that arrives with priors, incentives, and blind spots already baked in, as though truth itself must pass through a confessional before it may enter the world.
And then they forget the other half of the story: how often the official record was wrong, how slowly it self-corrects, and how much operator knowledge gets sneered at as “anecdote” even when it beats the playbook in live conditions.
They call it “anecdote” the way a respectable man says “sin,” not to describe it, but to dismiss it, to keep it outside the walls, because it was not spoken by an anointed prophet of the ruling order.
At that point you are not a scientist.
You are a customer support agent for the machine.
And the machine’s main job is to run a DDoS attack on your brain’s ability to notice and trust the reality you see and experience every day, until you are too tired to argue with your own eyes, and you finally ask the masters for permission to believe what you already know.
If you like The Unconquered Mind, sign up for our email list and we’ll send you new posts when they come out.