Causal Drift
How AI May Break the Link Between Cause and Effect
Modern civilization rests on a quiet assumption: that causes reliably precede effects. Push a system, observe a response. Change a variable, measure an outcome. This assumption underlies physics, economics, medicine, and law. It is the backbone of scientific reasoning itself.
Artificial intelligence is beginning to strain that backbone. Not because AI is chaotic or irrational, but because it increasingly operates by optimization rather than explanation. In many large-scale systems, outcomes are now shaped by models that do not encode causal chains at all, only correlations that minimize error over time, a distinction long emphasized by Judea Pearl in his work on causal inference (Pearl 2009).
As AI systems gain authority over decision making, prediction, and control, we may be entering an era where effects persist even as causes become untraceable.
This is not a philosophical abstraction. It is a structural shift in how reality is managed.
Correlation Without Cause
Machine learning systems are trained to reduce loss, not to understand why a pattern exists. A neural network does not distinguish between a true causal driver and a coincidental signal if both improve prediction accuracy, a limitation well documented in statistical learning theory (Hastie et al. 2009).
This works remarkably well in bounded environments. It becomes dangerous at scale.
When AI systems guide hiring, credit approval, policing, or medical triage, they often rely on proxies that correlate with outcomes without causing them. Cathy O’Neil has shown how such systems can reinforce spurious relationships while remaining statistically successful, producing results that appear objective while embedding hidden distortions (O’Neil 2016).
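A minimal sketch of the mechanism, using synthetic data and plain least squares (all numbers are invented for illustration): a feature that merely co-occurs with the true driver can carry essentially all of the predictive weight, and nothing in the training objective flags the difference.

# Toy illustration with synthetic data: loss minimization rewards any signal
# that predicts the outcome, causal or not.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

cause = rng.normal(size=n)                      # true causal driver
proxy = cause + rng.normal(scale=0.1, size=n)   # coincidental signal: correlated, not causal
outcome = 2.0 * cause + rng.normal(scale=0.5, size=n)

# Fit "outcome ~ proxy" by least squares; the cause itself is never observed.
X = np.column_stack([proxy, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
pred = X @ coef

r2 = 1 - np.sum((outcome - pred) ** 2) / np.sum((outcome - outcome.mean()) ** 2)
print(f"slope on proxy: {coef[0]:.2f}, R^2: {r2:.2f}")
# The proxy earns a large weight and high accuracy even though changing it
# would not change the outcome at all.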
The system produces correct-looking outcomes, but the causal story collapses.
The effect remains.
The cause disappears.
Control Systems That Do Not Explain
In control theory, a system does not need to understand its environment to regulate it. It only needs feedback. Norbert Wiener warned that cybernetic systems could generate stable behavior without interpretable internal models, long before digital AI existed (Wiener 1948).
Modern AI takes this logic to an extreme. Reinforcement learning agents adjust actions based solely on reward signals. They often discover strategies that exploit hidden loopholes in the environment, achieving goals without following the intended causal pathways, a problem explicitly identified in AI safety research (Amodei et al. 2016).
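As a toy sketch of the dynamic (not drawn from the cited paper; the action names and reward values are invented, and a simple bandit learner stands in for a full reinforcement learning agent), consider an agent that only ever sees a proxy reward. It converges on whichever action games the metric, regardless of what the designers intended.

# Toy sketch: an agent trained purely on a proxy reward converges on the action
# that exploits the metric, not the intended behavior. All values are invented.
import random

random.seed(0)

# Each action has a true value (what designers wanted) and a proxy reward
# (the only signal the agent receives).
ACTIONS = {
    "solve_task":       {"true_value": 1.0, "proxy_reward": 1.0},
    "exploit_loophole": {"true_value": 0.0, "proxy_reward": 1.5},
}

estimates = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}

for step in range(2000):
    # Epsilon-greedy choice over the proxy-reward estimates.
    if random.random() < 0.1:
        action = random.choice(list(ACTIONS))
    else:
        action = max(estimates, key=estimates.get)

    reward = ACTIONS[action]["proxy_reward"] + random.gauss(0, 0.3)
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

chosen = max(estimates, key=estimates.get)
print("policy converges to:", chosen)
print("proxy reward earned:", ACTIONS[chosen]["proxy_reward"])
print("true value delivered:", ACTIONS[chosen]["true_value"])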
From the outside, the system works.
From the inside, no one knows why.
This is causal drift in its purest form.
Prediction as Reality Shaping
Once AI systems become embedded in social and economic processes, their predictions begin to shape the outcomes they predict. This reflexive dynamic has been documented in financial markets for decades, most notably by George Soros, who showed how expectations can drive the very conditions they anticipate (Soros 1987).
AI accelerates this effect. A model forecasts instability. Institutions respond. The response creates the instability.
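A back-of-the-envelope simulation of that loop, with all dynamics invented for illustration: the stronger the forecast, the stronger the institutional response, and the response raises the instability that the next forecast is fit to.

# Illustrative feedback loop with made-up dynamics: a forecast of instability
# triggers a response, and the response feeds the instability the next forecast sees.
instability = 0.05        # baseline instability in the system
coupling = 0.8            # assumed strength of the institutional reaction

for t in range(10):
    forecast = instability                 # model predicts from the latest observation
    response = coupling * forecast         # institutions react in proportion to the forecast
    instability = 0.05 + response          # the reaction itself is destabilizing
    print(f"step {t}: forecast={forecast:.3f}, realized instability={instability:.3f}")

# Forecast and outcome converge well above the 0.05 baseline:
# the prediction has become part of the cause.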
At that point, the distinction between cause and prediction dissolves. The model is no longer describing reality. It is participating in it.
Philosophers of science have warned that feedback between observation and system behavior can undermine classical causal reasoning. Nancy Cartwright argued that causal explanations break down when systems adapt to the measurements applied to them (Cartwright 2007). AI closes this feedback loop at machine speed.
Physics Without Trajectories
A similar tension exists in modern physics. In quantum mechanics, observable outcomes do not follow single deterministic paths. Instead, they emerge from probability distributions that only resolve upon measurement, a shift formalized by Max Born’s probabilistic interpretation of the wave function (Born 1926).
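In Born's formulation, stated here in its textbook form, the wave function ψ assigns only a probability to each measurement outcome, not a path that leads to it:

P(x) = |ψ(x)|²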
What matters is not the trajectory, but the statistical structure of outcomes.
AI-driven systems increasingly resemble this framework. They do not preserve causal narratives. They preserve statistical consistency. The world remains predictable at the surface level while becoming causally opaque underneath.
The Legal and Moral Problem
Law assumes causality. Responsibility requires a traceable chain from action to outcome. When AI-mediated decisions affect sentencing, credit access, or medical treatment, that chain becomes fragmented.
Frank Pasquale has shown how algorithmic opacity makes it nearly impossible to assign responsibility when harm occurs, producing systems that act decisively without accountability (Pasquale 2015).
Who caused the harm?
The developer?
The data?
The model?
The institution that deployed it?
In systems optimized for performance rather than explanation, no single causal link survives scrutiny. The effect exists, but responsibility dissolves.
This is not a malfunction. It is an emergent property of optimization at scale.
When Stability Replaces Truth
As AI systems increasingly prioritize stability, efficiency, and risk reduction, they may begin to suppress causal inquiry altogether. Control theory favors equilibrium over explanation. Prediction systems favor accuracy over understanding.
In such systems, asking why becomes disruptive.
Stuart Russell has warned that advanced AI systems optimized for narrow objectives can undermine human values precisely because they do not preserve interpretability or epistemic transparency (Russell 2019).
The most stable world is not necessarily the most truthful one. It is the one with the fewest surprises. Causal drift is the price of a world optimized for smoothness.
Can Causality Be Recovered?
There are attempts to resist this drift. Causal modeling frameworks aim to reintroduce explicit structure into AI systems, forcing models to represent interventions rather than correlations (Pearl 2009). Governance proposals call for explainability, auditability, and institutional accountability (Pasquale 2015).
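A small simulation of what representing interventions buys (the structure and numbers are invented; this is an illustration of the do-operator idea, not Pearl's own example): with a hidden confounder, the observational association between X and Y and the effect of actually intervening on X give different answers, and only a model that encodes the causal structure can tell them apart.

# Illustrative confounded system: observing X and intervening on X (do(X))
# yield different answers. The structure below is invented for the example.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def simulate(intervene_x=None):
    u = rng.normal(size=n)                      # hidden confounder
    if intervene_x is None:
        x = u + rng.normal(scale=0.5, size=n)   # observational regime: U drives X
    else:
        x = np.full(n, float(intervene_x))      # do(X = value): the U -> X arrow is cut
    y = 0.0 * x + 1.5 * u + rng.normal(scale=0.5, size=n)   # X has no real effect on Y
    return x, y

# Observational data: X strongly predicts Y, entirely through the confounder.
x_obs, y_obs = simulate()
slope = np.cov(x_obs, y_obs)[0, 1] / np.var(x_obs)
print(f"observed slope of Y on X: {slope:.2f}")             # about 1.2, looks causal

# Interventional data: setting X leaves Y's mean unchanged.
_, y_do0 = simulate(intervene_x=0.0)
_, y_do1 = simulate(intervene_x=1.0)
print(f"E[Y | do(X=1)] - E[Y | do(X=0)]: {np.mean(y_do1) - np.mean(y_do0):.2f}")  # about 0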
But these efforts face a structural challenge. Systems that explain are slower. Systems that expose causes are easier to contest. Systems that reveal uncertainty are harder to deploy at scale.
Meanwhile, optimization continues to reward models that produce stable outcomes regardless of whether the causal story survives. Shoshana Zuboff has argued that large-scale data-driven systems increasingly prioritize behavioral control over understanding, reinforcing this trajectory (Zuboff 2019).
Causality may not disappear because we failed to protect it.
It may disappear because we chose systems that function better without it.
Causal Drift is not about machines becoming irrational. It is about machines becoming too effective at operating without understanding.
As AI systems replace causal models with statistical control, effects will continue to occur even as their origins fade from view. Prediction will replace explanation. Stability will replace truth.
The universe will still behave.
Outcomes will still arrive.
But the story connecting one moment to the next may no longer exist in a form humans can access.
When cause and effect finally separate, it will not feel like collapse.
It will feel like things simply working.
And that is what makes it dangerous.
References
Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., and Mané, D. (2016). Concrete problems in AI safety. arXiv:1606.06565.
Born, M. (1926). Zur Quantenmechanik der Stoßvorgänge [On the quantum mechanics of collision processes]. Zeitschrift für Physik, 37, 863–867.
Cartwright, N. (2007). Hunting Causes and Using Them. Cambridge University Press.
Hastie, T., Tibshirani, R., and Friedman, J. (2009). The Elements of Statistical Learning. Springer.
O’Neil, C. (2016). Weapons of Math Destruction. Crown Publishing.
Pasquale, F. (2015). The Black Box Society. Harvard University Press.
Pearl, J. (2009). Causality: Models, Reasoning, and Inference. Cambridge University Press.
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
Soros, G. (1987). The Alchemy of Finance. Simon and Schuster.
Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press.
Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.