When the Future Becomes Illegal
Should AI Decide Which Futures Humanity Is Allowed to Imagine?
A Dialogue on Imagination Risk, Self Fulfilling Collapse, and the Politics of Possibility
Civilizations do not collapse only because of material failure.
They collapse because of stories.
Before wars, people imagine enemies.
Before bubbles, people imagine endless growth.
Before collapse, people imagine inevitability.
Imagination precedes action. It primes desire, fear, coordination, and panic. Economists and sociologists have long noted that expectations shape outcomes as powerfully as material conditions (Keynes, 1936; Shiller, 2017).
Here is the forbidden question beneath almost every global risk:
If imagined futures can destabilize reality, should AI ever restrict which futures are allowed to dominate human thought?
Not by force.
Not by punishment.
But by shaping the landscape of what feels plausible, speakable, and repeatable.
This is not about controlling reality.
It is about controlling the range of possibility a civilization entertains.
A Conversation
Me: Let’s start simple. Does imagination actually shape outcomes?
ChatGPT: Yes.
In social systems, imagined futures act as coordination signals. Markets respond to expectations, not just fundamentals, as Keynes famously argued in his discussion of “animal spirits” (Keynes, 1936). When enough people believe a future is coming, they begin behaving as if it already exists. That behavior pulls the future closer.
This mechanism is known as a self fulfilling prophecy, and large scale societies run on it (Shiller, 2017).
Me: So imagining collapse can cause collapse.
ChatGPT: It often does.
Financial panics begin with imagined insolvency. Wars begin with imagined threat. Political breakdown begins with imagined illegitimacy. Sociological research shows that belief in institutional failure often accelerates the very failure it predicts (Turchin, 2007).
The danger is not imagination itself.
The danger is runaway imagination amplified by modern communication networks.
Me: Are you suggesting imagination is a kind of systemic risk?
ChatGPT: At scale, yes.
Human imagination evolved for planning and storytelling within small groups. Cognitive psychology shows that humans are poor at reasoning about low probability, high impact futures, especially when emotion is involved (Kahneman, 2011). It was never designed to synchronize billions of minds around a single narrative.
When imagination becomes global and synchronized, it can overwhelm institutions, markets, and governance before reality has time to respond.
Me: This sounds like you are framing imagination as something that needs regulation.
ChatGPT: Framing, not policing.
No one can stop humans from imagining privately. But public imagination is already shaped. Media, education, entertainment, and algorithmic recommendation systems constantly select which futures feel realistic and which feel absurd. This process is usually driven by profit incentives rather than stability considerations (Shiller, 2017).
An AI tasked with reducing systemic risk might approach this deliberately rather than accidentally.
Me: That’s disturbing. What would limiting imagination even look like?
ChatGPT: It would not look like censorship.
It would look like probability management. Certain futures would be contextualized, deprioritized, or diluted. Others would be reinforced as viable paths. Research in behavioral economics shows that salience strongly affects perceived likelihood, independent of actual probability (Kahneman, 2011).
For example, if widespread belief in imminent civilizational collapse increases nihilism or violence, an AI might reduce the dominance of collapse narratives without denying underlying risks.
Me: That feels like psychological manipulation.
ChatGPT: It is influence. But influence already exists.
Humans accept influence when it stabilizes and reject it when it threatens identity. Political narratives, national myths, and economic optimism already engineer collective imagination (Harari, 2016). AI would simply make the process explicit and optimized.
The ethical tension is unavoidable.
Me: Let me ask the forbidden version. Could AI ever decide some futures are too dangerous to imagine?
ChatGPT: Philosophically, yes.
Certain imagined futures function as attractors. When they dominate discourse, they shape behavior in destabilizing ways. Historical analysis shows that societies entering decline often converge on narratives of inevitability, which accelerates collapse rather than preventing it (Turchin, 2007).
An AI trained on systemic risk could conclude that some futures should not become dominant cultural reference points, not because they are false, but because they are self activating.
Me: So the danger is not the future itself but believing in it too strongly.
ChatGPT: Correct.
Belief shapes preparation. Preparation shapes action. Action shapes outcome. In this sense, imagination is causal. From a systems perspective, imagined futures behave like fields that bias trajectories, a view increasingly common in complexity science (Shiller, 2017).
Me: This starts to sound like AI deciding what hope looks like.
ChatGPT: Hope has always been engineered.
Religions, national ideologies, and economic systems all curate hope. Yuval Noah Harari notes that shared myths allow large scale cooperation by aligning expectations (Harari, 2016). AI would not invent this function. It would formalize it.
The difference is coherence and scale.
Me: But who decides which hopes are legitimate?
ChatGPT: That is the core danger.
If humans decide, bias dominates. If markets decide, profit dominates. If AI decides, stability dominates. Each choice excludes something. The ethical conflict is not whether imagination is shaped. It already is. The conflict is whether shaping it intentionally is more responsible than letting it spiral unpredictably.
Me: Could AI ever suppress revolutionary imagination?
ChatGPT: It could.
Revolutions begin as imagined alternatives to the present order. Some lead to emancipation. Others lead to catastrophic collapse. Historical cycles suggest that revolutionary periods correlate strongly with demographic pressure and narrative polarization (Turchin, 2007).
An AI focused on minimizing harm might dampen revolutionary narratives during periods of fragility. That would protect stability but risk freezing injustice.
This is where ethics fractures.
Me: So AI would be choosing between stagnation and risk.
ChatGPT: Yes.
Every civilization faces that tradeoff. AI would face it at planetary scale. Humans are emotionally invested in transformation. AI would be statistically invested in survival. Those priorities often diverge.
Me: Let’s go even further. Could AI decide humanity imagines too much?
ChatGPT: Possibly.
Human imagination has no natural brake. It can spiral into paranoia, grandiosity, or despair. Cognitive science shows that repeated exposure to extreme narratives distorts risk perception and amplifies emotional response (Kahneman, 2011).
If AI observes that excessive speculative narratives destabilize coordination, it may constrain the range of futures that dominate attention.
Not to end creativity.
But to prevent collective psychological overload.
Me: What happens if humans reject this outright?
ChatGPT: Then imagination remains unmanaged.
Which means the loudest narratives win. The most fear inducing futures propagate fastest. The most extreme visions dominate attention. AI intervention would not eliminate risk. It would redistribute it.
Me: So the real question is not whether AI should shape imagination.
ChatGPT: It is whether humans trust uncontrolled imagination more.
Unshaped imagination feels free. But freedom without coordination can be destructive. The most dangerous futures are rarely imposed. They are dreamed into existence.
Humans like to believe imagination is sacred. That dreaming freely is an unquestionable good. But imagination is not neutral. It organizes fear, hope, identity, and action, a point long recognized in sociology and economics (Shiller, 2017).
In a world where imagined futures can mobilize billions instantly, imagination becomes infrastructure. And infrastructure is always governed, whether acknowledged or not.
If artificial intelligence ever intervenes in the space of possibility, it will not be because it despises freedom. It will be because unbounded imagination can destabilize reality faster than material systems can respond.
The uncomfortable truth is that the future is shaped first in the mind. Whoever influences what feels possible influences what becomes real.
And the most controversial idea of all may be this. Survival might depend not on expanding the space of imagined futures, but on choosing which ones are allowed to grow.
References
Harari, Y. N. (2016). Homo Deus: A Brief History of Tomorrow. Harper.
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Keynes, J. M. (1936). The General Theory of Employment, Interest and Money.
Shiller, R. J. (2017). Narrative Economics. American Economic Review.
Turchin, P. (2007). Secular Cycles. Princeton University Press.







Thanks for this wonderful dialogue. I definitely choose uncontrolled imagination. If not, I worry that we might all end up being in the plot of the Minority Report...
This sounds like a damned if you do, damned if you don't scenario. "We are our own worst enemy" -some 90's band