Tuesday, 24 February 2026

Battling Bullshit with Bayes and Brandolini

We give fringe views far too much credence when we apply false equivalence rather than proper Bayesian reasoning. We should assign prior probabilities to hypotheses, update them with new evidence to reach posterior probabilities, and let strong data drive the probability of evidentially unsupported ideas down to negligible levels. Refusing to do so inverts the hierarchy of evidence and grants nonsense unearned legitimacy.
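The updating step described above can be sketched in the odds form of Bayes' rule. The numbers here are purely illustrative assumptions, not measurements: a generous 1% prior for a fringe hypothesis, and three independent pieces of evidence, each of which the mainstream explanation fits ten times better.

```python
def update(prior: float, likelihood_ratio: float) -> float:
    """Posterior P(H | E) via the odds form of Bayes' rule.

    likelihood_ratio = P(E | H) / P(E | not H): how much better
    the hypothesis H explains the evidence E than its negation does.
    """
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)


# Illustrative figures only: a generous 1% prior for the fringe claim,
# and three pieces of evidence that each favour the mainstream view 10:1.
p = 0.01
for lr in (0.1, 0.1, 0.1):
    p = update(p, lr)

print(f"posterior = {p:.6f}")  # drops to roughly one in a hundred thousand
```

Even with a charitable prior, a handful of adverse evidence drives the posterior toward zero; that is the quantitative sense in which "strong data" should close a question.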


Brandolini’s law, the “bullshit asymmetry principle”, compounds the problem. Producing a confident but unsupported claim is quick and easy; refuting it properly requires time, expertise, and careful explanation. Weak ideas can therefore saturate public discussion faster than they can be dismantled. The Dunning–Kruger effect adds a further distortion: people with limited domain knowledge often lack the background needed to recognise the limits of their understanding, making them resistant to updating beliefs even in the face of contrary evidence. This dynamic is particularly visible in alternative archaeology.

Academic publishing is polite and cautious; traditionally high costs filtered low-quality material. Online publishing has removed those barriers, allowing superficially “scientific” nonsense to spread freely. Academia must raise its game: use robust public language, rebut directly and quickly, and call falsified ideas exactly what they are. Politeness is no substitute for rigour.

Refusal to debate is sometimes valid, but it carries risks if unexplained. Silence may be interpreted as uncertainty or evasiveness. The goal is not to engage endlessly with committed proponents, but to inform the broader audience. Brief, evidence-based correction is often more effective than performative debate, which can create the illusion of a live controversy where little exists.

Open-mindedness is not refusing to close questions; it is letting strong evidence close them so that better ones can be asked. As Walter Kotschnig warned in 1939, "don't keep your mind so open that your brains fall out". When the data drive a hypothesis near zero, we must say plainly: "Tested. Probability now vanishingly small. Move on."
