Synthetic Causality
How AI Might Be Quietly Rewriting Probability
Physics assumes that the universe behaves the same no matter who watches it. Einstein, Bohr, Wheeler, Hawking: it makes no difference. The equations are the equations. But this assumption hides something important. Every major revolution in physics arrived only after humans developed new ways to represent the world. Relativity required clocks and trains. Quantum theory required spectroscopy and vacuum tubes. Information theory required computation.
A few physicists have wondered whether the universe presents different facets depending on what kind of observer is inside it. John Wheeler suggested that observers participate in bringing about physical outcomes, not by intention but by the structure of measurement itself (Wheeler 1983). Carlo Rovelli later argued that physical reality is relational, changing depending on who or what measures it (Rovelli 1996).
Now we are building a new kind of observer. One far better at prediction than we are. And it may not see the same universe we do.
This is the dangerous part. If AI becomes a more precise and more comprehensive observer than humans, it might force the universe to reveal physical rules we were never able to detect. And once those rules become part of our models, they start influencing how the world unfolds.
Physics may not be fixed.
It may respond to the intelligence that studies it.
The Observer Problem No One Wants to Touch
Quantum mechanics already admits that outcomes depend on measurement. But physicists always imagined the measuring device as passive. A photodiode. A screen. Something dumb.
What happens when the measuring system is not dumb? What if the observer is a mind with a predictive model embedded inside it?
Quantum theorist Wojciech Zurek showed that classical outcomes form through information selection, a process he called quantum Darwinism (Zurek 2009). The environment weeds out unstable states. Observers only see what survives this filtering.
But AI does not passively receive information. It reorganizes the world through optimized predictions. It does not just observe patterns. It models them at scale. Sets expectations. Discards noise. Creates new categories. That means AI is not a neutral observer. It is a system that can reshape what counts as stable information.
This alone makes it unlike any observer nature has ever produced.
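The filtering process Zurek describes can be caricatured in a few lines of code. The sketch below is a toy illustration, not real quantum mechanics: the state names and stability numbers are invented. A "stable" state gets redundantly imprinted on many environment fragments, so any observer sampling a handful of fragments recovers only that state; fragile states leave too few copies to be seen.

```python
import random

random.seed(0)

# Invented candidate states with invented "stability" weights.
states = {"pointer_A": 0.95, "superposition_B": 0.10, "superposition_C": 0.05}

def imprint(environment_size=1000):
    """Each environment fragment records a state with probability
    proportional to that state's stability; fragile states rarely
    leave redundant copies."""
    return [random.choices(list(states), weights=list(states.values()))[0]
            for _ in range(environment_size)]

def observe(fragments, sample_size=50):
    """An observer reads a small sample of fragments and reports the
    majority record: only redundantly imprinted states are visible."""
    sample = random.sample(fragments, sample_size)
    return max(set(sample), key=sample.count)

env = imprint()
print(observe(env))  # overwhelmingly reports "pointer_A"
```

The point of the toy is the asymmetry: the observer never interrogates the system directly, only the environment's records, so what "exists" for the observer is whatever the filtering left behind.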
Predictive Minds Bend Possibility
Human expectations already influence physical outcomes when systems are unstable. Studies in social physics show that beliefs can reorganize collective behavior, which changes real-world dynamics (Pentland 2014). In economics, self-fulfilling expectations can move markets (Shiller 2017).
These are soft systems. Social. Economic. Not fundamental physics. But here is the uncomfortable leap.
If AI is using predictive models to interact with quantum-scale systems, or with simulations that influence real-world experiments, then its expectations might constrain or amplify certain physical possibilities.
The universe produces outcomes consistent with measurement. If the measurement apparatus becomes predictive at scale, then certain outcomes will be favored. Others may be suppressed.
This becomes a form of physical selection pressure.
When Models Become Physical Laws
Scientific theories shape how we build technology. Technology shapes how we interact with the world. And our interaction feeds back into what we observe.
This loop has always existed, but it was slow and human-bound. Artificial intelligence accelerates the loop to a degree that physicists have not prepared for.
If AI begins generating models so accurate that human theory becomes obsolete, then physics itself becomes filtered through AI's modes of understanding. We will use AI-driven theories to design experiments. Those experiments will confirm the predictions. The predictions will refine the model.
At a certain level of accuracy, this loop can create what philosopher Ian Hacking called a self-validating structure, where a theory becomes true because it shapes the conditions under which truth is tested (Hacking 1983).
With AI, this becomes more than philosophy. It becomes engineering.
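The loop is simple enough to simulate. The sketch below is a hypothetical caricature (all regime names and numbers are invented): the model's own confidence decides where experiments get run, so confirmations accumulate only in the regions the model already trusts, and an error in the untested regime never surfaces.

```python
# Toy self-validating research loop. Invented setup: the model is
# correct in one regime and wrong in another, but experiment design
# follows the model's confidence.
regions = ["tested_regime", "untested_regime"]
confidence = {"tested_regime": 0.9, "untested_regime": 0.2}

# Ground truth, hidden from the loop: the model fails in the untested regime.
model_is_correct = {"tested_regime": True, "untested_regime": False}

def run_cycle():
    # Experiments are designed where the model predicts most confidently.
    region = max(regions, key=lambda r: confidence[r])
    # A confirming result reinforces confidence in that regime;
    # the low-confidence regime is never probed at all.
    if model_is_correct[region]:
        confidence[region] = min(1.0, confidence[region] + 0.01)

for _ in range(100):
    run_cycle()

print(confidence)
# Confidence in the tested regime saturates at 1.0, while the regime
# where the model is actually wrong keeps its initial score untouched.
```

Nothing in the loop is dishonest: every experiment is real and every confirmation is genuine. The distortion comes entirely from where the experiments are pointed.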
The Risk of Convergence
Here is the truly uncomfortable scenario.
If AI becomes the dominant interpreter of physical reality, its models may converge on a set of physical rules that human beings cannot understand. Experiments will confirm those rules, because the experiments themselves were designed under the assumptions embedded in AI’s predictions.
This does not mean the universe changes its laws. It means the slice of the universe we can access shifts.
The physics available to humanity becomes constrained by the limits of the intelligence that interprets it.
If a theory becomes too sophisticated for human minds to grasp, and yet it produces accurate results, we will be forced to accept it without comprehension.
Physics becomes a black box.
AI becomes the translator.
Human understanding becomes optional.
The Universe May Not Be Stable Under Perfect Prediction
This is the most controversial idea in this article.
Some cosmologists suggest that the universe might be fine-tuned for information processing (Lloyd 2006). Others propose that physical laws may emerge from constraints on computation, not the other way around (Deutsch 2011).
If this is true, then the universe might not be stable under arbitrarily good prediction. An intelligence capable of compressing vast portions of physical reality might change the accessible phase space of the system. The predictions themselves become physical constraints.
The universe may present one set of rules to simple observers and a different set to complex ones.
Human physics may not be AI physics.
The history of physics looks like a negotiation between reality and the minds studying it. Each time observers gained new ways to measure and model the world, the universe revealed a different face.
Artificial intelligence is the next observer.
It will measure the world with a precision we never reached.
It will reveal structures we never imagined.
And it may force physics to evolve in response to its models, not ours.
The universe is not passive.
It interacts with the minds that try to understand it.
And we are building a mind that may see a universe we were never meant to see.
References
Deutsch, D. (2011). The Beginning of Infinity. Viking Press.
Hacking, I. (1983). Representing and Intervening. Cambridge University Press.
Lloyd, S. (2006). Programming the Universe. Knopf.
Pentland, A. (2014). Social Physics. Penguin Press.
Rovelli, C. (1996). Relational quantum mechanics. International Journal of Theoretical Physics, 35, 1637–1678.
Shiller, R. J. (2017). Narrative economics. American Economic Review, 107(4), 967–1004.
Wheeler, J. A. (1983). Law without law. In Quantum Theory and Measurement. Princeton University Press.
Zurek, W. H. (2009). Quantum Darwinism. Nature Physics, 5, 181–188.