Nothing Feels True Anymore
How AI Is Averaging Reality Into Silence
Societies do not collapse because they argue too much.
They collapse when arguments stop mattering.
Right now, something quiet is happening. AI systems are being inserted into places where disagreement used to live. Hiring decisions. Content moderation. Policy analysis. Product design. Even cultural taste. And instead of choosing sides, these systems do something else.
They average.
That sounds harmless. Sensible, even. But disagreement is not noise. It is how societies test ideas. When disagreement disappears, truth does not win. Stability does.
AI is not making decisions more accurate.
It is making them more agreeable.
And that may be the most dangerous outcome of all.
Why Disagreement Used to Matter
Disagreement has always been expensive. It slows decisions. It creates friction. It makes institutions look messy. But it also does something essential. It forces justification.
Scientific progress depends on competing explanations. Legal systems depend on adversarial argument. Markets depend on conflicting beliefs about value. Politics depends on visible disagreement that can be resolved, even imperfectly.
Thomas Kuhn showed that science advances not by smooth consensus, but by tension between incompatible frameworks (Kuhn 1962). Legal theorists have long argued that legitimacy comes from process, not unanimity (Fuller 1964).
Disagreement is not a failure mode.
It is a structural feature of truth-finding systems.
What AI Optimizes Instead
Most modern AI systems are not built to resolve disagreement. They are built to minimize loss. To reduce error. To smooth outcomes.
Large language models generate responses that sit near the center of their training distribution. Recommendation systems push content that maximizes engagement across the largest audience. Risk models favor decisions that avoid edge cases.
This is not bias. It is optimization.
But optimization has a side effect. It penalizes extremes. It suppresses minority positions. It favors answers that feel reasonable to the most people, even when those answers are incomplete or wrong.
Researchers studying algorithmic aggregation have shown that ensemble systems often outperform individuals statistically, but at the cost of eliminating dissenting signals that matter in complex environments (Hong and Page 2004).
AI does not choose the best argument.
It chooses the least controversial one.
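The averaging dynamic described above can be made concrete with a toy sketch. The ten-model ensemble and its scores are hypothetical; the point is only that a loss-minimizing aggregate settles near the crowd and far from the dissenter:

```python
import statistics

# Hypothetical risk scores from ten models: nine see business as usual,
# one specialist model flags serious danger.
scores = [0.10] * 9 + [0.95]

# The mean is exactly the single value that minimizes total squared
# error against all ten models, which is why loss-driven aggregation
# settles there.
consensus = statistics.mean(scores)
suppressed = max(scores) - consensus

print(f"averaged risk: {consensus:.3f}")          # 0.185, reads as "low risk"
print(f"distance to dissenter: {suppressed:.3f}")  # 0.765
```

The averaged output is statistically defensible and reassuring, and the one model built to see the tail event contributes almost nothing to it.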
Consensus Without Debate
Historically, consensus emerged after disagreement. Through argument, evidence, and institutional pressure.
AI flips that order.
Instead of debate producing consensus, consensus is generated first. Then presented as neutral output. No argument required.
This creates what governance scholars call premature convergence. Decisions settle before the underlying questions are resolved (Sunstein 2017).
The system looks decisive.
But it never argued with itself.
Once AI-mediated consensus becomes normal, disagreement starts to feel like disruption. A bug. Something to filter out.
That is how debate dies quietly.
Where This Is Already Happening
You can see it in content moderation. Platforms increasingly rely on AI systems to decide what is acceptable. The result is not censorship in the traditional sense. It is flattening. Edges disappear. Nuance vanishes. Content converges toward what the model learned was safest (Gillespie 2018).
You can see it in hiring. AI screening tools favor candidates who resemble past successful hires. That reinforces existing norms and suppresses unconventional profiles, even when those profiles would outperform in new conditions (Raghavan et al. 2020).
You can see it in science itself. Automated literature reviews and hypothesis-generation tools favor well-cited ideas and established paradigms, making it harder for genuinely novel work to surface (Bik et al. 2022).
The system does not silence disagreement directly.
It starves it of visibility.
Why This Feels Comfortable
People like consensus. It feels calm. It reduces anxiety. It makes decisions easier.
Psychologists have shown that humans strongly prefer agreement signals in uncertain environments, even when those signals reduce accuracy (Asch 1956). AI exploits that preference perfectly.
When a system presents a clean answer without visible dissent, people trust it more. Even if they should not.
This is why AI-averaged outputs feel authoritative. They sound reasonable. Balanced. Professional.
They are also fragile.
The Political Risk
Politics depends on visible disagreement. Not endless chaos, but structured conflict.
If AI systems increasingly mediate policy analysis, risk assessment, and public communication, politics begins to lose its edges. Decisions become technical. Contestation becomes illegible.
Political theorists warn that technocratic governance without visible conflict leads to disengagement and populist backlash, not stability (Müller 2016).
When people cannot see where decisions came from, they stop accepting them.
AI does not eliminate politics.
It hides it.
What Happens to Truth
Truth rarely lives at the average. It often starts at the margins. New ideas look wrong before they look obvious.
By optimizing for agreement, AI systems may systematically suppress early signals of change. Climate risk. Financial instability. Social unrest. Scientific anomalies.
Nassim Taleb has argued that systems obsessed with stability become blind to tail risks, until collapse arrives suddenly (Taleb 2007).
AI accelerates that blindness.
The world looks calmer.
Until it does not.
What Comes Next
This does not mean AI must be abandoned. It means it must be constrained differently.
Systems need to preserve disagreement, not erase it. They need to surface minority views, not smooth them away. They need to show where uncertainty remains, not hide it behind consensus language.
Some researchers are already exploring pluralistic AI systems that maintain multiple competing models instead of collapsing outputs into one answer (Bostrom 2014).
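One way to read "pluralistic" concretely: instead of collapsing model outputs into one number, a system can report the full spread and decline to manufacture consensus when disagreement is large. A minimal sketch; the function name, the 0.2 threshold, and the output format are all invented for illustration:

```python
import statistics

def pluralistic_report(predictions, disagreement_threshold=0.2):
    """Surface disagreement instead of averaging it away."""
    spread = statistics.pstdev(predictions)
    if spread <= disagreement_threshold:
        # Models broadly agree: a single answer is honest here.
        return {"answer": statistics.mean(predictions), "contested": False}
    # Models disagree: expose every position rather than fake a consensus.
    return {
        "answer": None,
        "contested": True,
        "positions": sorted(predictions),
        "spread": spread,
    }

print(pluralistic_report([0.48, 0.50, 0.52]))   # agreed, answer 0.50
print(pluralistic_report([0.10] * 9 + [0.95]))  # contested, spread shown
```

The design choice is the refusal itself: a contested question returns no single answer, which keeps the disagreement visible to whoever acts on the output.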
Whether institutions adopt these ideas is an open question.
Optimization is easier than judgment.
AI is not making society more rational.
It is making it more average.
By smoothing disagreement, AI risks breaking the mechanisms that allow societies to correct themselves. Debate fades. Conflict goes underground. Errors compound quietly.
The danger is not that machines will disagree with us.
It is that they will agree too easily.
And when disagreement disappears, truth rarely survives long after it.
References
Asch, S. E. (1956). Studies of independence and conformity. Psychological Monographs, 70(9), 1–70.
Bik, E. M., Casadevall, A., and Fang, F. C. (2022). The prevalence of inappropriate image duplication in biomedical research publications. mBio, 13(2).
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Fuller, L. L. (1964). The Morality of Law. Yale University Press.
Gillespie, T. (2018). Custodians of the Internet. Yale University Press.
Hong, L., and Page, S. E. (2004). Groups of diverse problem solvers can outperform groups of high ability problem solvers. Proceedings of the National Academy of Sciences, 101(46), 16385–16389.
Kuhn, T. S. (1962). The Structure of Scientific Revolutions. University of Chicago Press.
Müller, J. W. (2016). What Is Populism? University of Pennsylvania Press.
Raghavan, M., Barocas, S., Kleinberg, J., and Levy, K. (2020). Mitigating bias in algorithmic hiring. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency.
Sunstein, C. R. (2017). #Republic: Divided Democracy in the Age of Social Media. Princeton University Press.
Taleb, N. N. (2007). The Black Swan. Random House.