Not All “AI” Sees the World the Same Way
A growing body of writing warns that artificial intelligence may begin to “unmake reality”—not by lying about the world, but by quietly deciding which parts of it are worth noticing. The concern is serious. Measurement has never been neutral, and systems that control what is recorded, stored, and modeled inevitably shape what remains visible to institutions. As AI systems increasingly mediate those choices, the risk of systematic blind spots grows.
But there is a foundational problem with much of this critique: the term AI is being used as if it describes a single epistemic actor. It does not. Increasingly, it describes a family of systems with radically different ways of relating to evidence, uncertainty, and truth. Lumping them together obscures where the real risk lies—and where the remedy already exists.
Most of the fears currently attributed to “AI” are not about artificial intelligence in general. They are about one specific category of AI: predictive, correlation-driven systems, including large language models. These systems excel at pattern recognition, compression, and prediction. They are optimized to minimize error, reduce variance, and produce coherent outputs at scale. When such systems are placed in control of measurement pipelines—deciding which signals matter and which can be discarded—the danger is not hypothetical. Data that destabilizes models can be treated as noise. Outliers can be smoothed away. Complexity can be filtered out in the name of efficiency.
This is not because these systems are malicious or deceptive. It is because of what they are built to do.
Predictive AI does not ask why something happened. It asks whether knowing it improves prediction. If it does not, the data is often down-weighted or ignored. At scale, this creates a subtle but powerful selection pressure. What remains visible is what stabilizes the model. What disappears is what introduces friction. Over time, institutions may come to mistake that filtered view for reality itself.
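That selection pressure is easy to demonstrate. Here is a minimal sketch in pure Python, with entirely made-up sensor numbers, of how routine outlier filtering stabilizes a data stream while deleting exactly the rare event an institution would later call "unexpected":

```python
import statistics

# Hypothetical sensor readings: a stable baseline plus one rare shock.
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 47.3, 10.1, 9.9, 10.0]

def drop_outliers(xs, z_cut=2.0):
    """Discard points more than z_cut standard deviations from the mean."""
    mu = statistics.mean(xs)
    sd = statistics.stdev(xs)
    return [x for x in xs if abs(x - mu) / sd <= z_cut]

filtered = drop_outliers(readings)

# Variance collapses, so the pipeline looks "cleaner"...
print(statistics.pvariance(filtered) < statistics.pvariance(readings))  # True
# ...but the shock is simply gone from the record.
print(47.3 in filtered)  # False
```

Nothing here is malicious: the filter does precisely what it was asked to do. The warning sign disappears as a side effect of optimizing for a quiet signal.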
But this failure mode is not inherent to artificial intelligence. It is specific to systems that equate intelligence with prediction.
Causal AI operates under a fundamentally different logic. Rather than minimizing surprise, it treats surprise as diagnostic. Rather than suppressing variance, it interrogates it. Rather than collapsing complexity, it attempts to explain it. In causal systems, destabilizing data is not inconvenient—it is informative. It signals a missing variable, a misidentified mechanism, or a flawed assumption about how the world works.
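The difference between the two logics can be shown with a toy simulation (pure Python, invented variables). A hidden common cause Z drives both X and Y, so a purely predictive view finds that X "predicts" Y strongly; holding Z roughly fixed, a minimal stand-in for causal adjustment, reveals that X has no effect of its own:

```python
import random

random.seed(0)
rows = []
for _ in range(10_000):
    z = random.random()            # hidden common cause (confounder)
    x = z + 0.5 * random.random()  # X is driven by Z, plus noise
    y = z + 0.5 * random.random()  # Y is driven by Z too -- never by X
    rows.append((x, y, z))

def corr(pairs):
    """Pearson correlation of a list of (a, b) pairs."""
    n = len(pairs)
    ma = sum(a for a, _ in pairs) / n
    mb = sum(b for _, b in pairs) / n
    cov = sum((a - ma) * (b - mb) for a, b in pairs) / n
    va = sum((a - ma) ** 2 for a, _ in pairs) / n
    vb = sum((b - mb) ** 2 for _, b in pairs) / n
    return cov / (va * vb) ** 0.5

# Marginal view: X looks strongly predictive of Y (roughly 0.8 here).
marginal = corr([(x, y) for x, y, _ in rows])
# Adjusted view: within a thin slice of Z, the association nearly vanishes.
adjusted = corr([(x, y) for x, y, z in rows if 0.45 < z < 0.55])

print(round(marginal, 2), round(adjusted, 2))
```

A predictive system would happily keep X as a feature; a causal analysis asks what happens when the confounder is accounted for, and the apparent relationship dissolves.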
This difference matters because the dystopia often imagined—where AI quietly edits reality into something smoother, quieter, and more predictable—is not a general AI outcome. It is the outcome of deploying predictive systems without causal accountability in roles they were never designed to govern.
In a causal framework, the idea that “reality learns the system” makes little sense. When observations consistently contradict expectations, the conclusion is not that reality is inconvenient. It is that the model is wrong. The response is not erasure but revision. Causal models must remain in tension with the world, or they cease to function. Their purpose is not stability, but explanatory power.
This distinction also clarifies a common misunderstanding about measurement. It is true that what is not measured cannot influence institutional decision-making. But unmeasured phenomena do not cease to exist. They continue to exert causal force, often returning as shocks, crises, or failures that predictive systems are ill-equipped to explain. What is lost is not reality itself, but legibility—the ability of organizations to understand what is actually driving outcomes.
When measurement is optimized solely for predictive performance, legibility degrades. Systems appear stable until they aren’t. Risk accumulates in the shadows. And when breakdowns occur, institutions are left asking how something so “unexpected” could have happened—when the warning signs were simply filtered out.
This is where the indiscriminate use of “AI” becomes actively harmful. By treating all AI systems as if they share the same epistemic properties, we misdiagnose the source of the problem and risk regulating, rejecting, or fearing the wrong tools. More importantly, we fail to recognize that the very technologies capable of exacerbating this risk also point toward its resolution.
Causal AI does not eliminate judgment or governance. It sharpens them. It forces explicit assumptions. It preserves counterfactuals. It makes it harder—not easier—to quietly discard inconvenient facts. When used properly, it increases friction where friction is warranted and reduces false confidence where it is most dangerous.
As AI systems become more deeply embedded in scientific research, financial markets, climate modeling, and enterprise decision-making, precision in language will matter. “AI” can no longer serve as a catch-all term for fundamentally different ways of knowing. Just as we do not regulate all medical technologies as if they were the same instrument, we cannot meaningfully govern AI without distinguishing between systems that predict and systems that explain.
The future risk is not that artificial intelligence will unmake reality. It is that institutions will continue to deploy predictive systems as if prediction were understanding, stability were truth, and coherence were fidelity. That is not an AI problem. It is a design choice.
And it is one we still have time to correct.
Thank you very much for the thoughtful comment, Mark! I agree with you that not all AI is the same and that precision matters. But I think your critique sidesteps the core issue the article is pointing at.
The problem isn’t whether causal AI exists in theory. It’s that predictive systems are the ones already running measurement at scale right now. They decide what gets logged, ranked, funded, or ignored long before any causal model shows up. Once something stops being measured, it stops shaping decisions. Reality doesn’t vanish, but its influence does.
So yes, prediction is not understanding. We’re aligned there. But that doesn’t negate the argument. It actually sharpens it. Measurement collapse isn’t about what AI could be. It’s about what systems are actually in charge today, and how quietly they narrow what institutions are able to see.
Causal AI is waaaayyy past theory. It is fully operational in more and more companies and teams. And the regulatory and legal rulings of the past three years, including the SEC announcement last month, have created a dramatic impetus for adoption. So, with respect, I think you may be sidestepping key pieces of reality here. The "predictive AI" that is ML-based is not holding up against Reality in the marketplace because it is correlative: it does not deal with confounders, nor does it enable counterfactuals.
In the end, when we compare ML-based approaches built for closed systems to the Reality of open systems, the correlative approach simply doesn't hold up. It doesn't work. And the externalities will compel companies to move to something that does. Right now, that's Causal AI.
I don’t disagree that causal approaches are gaining real ground, especially in regulated settings. That shift is happening. But I think you’re overstating how far they’ve penetrated day-to-day decision-making.
Most institutions are still operationally driven by predictive layers because they’re cheaper, faster, and already embedded. Even when causal tools exist, they tend to sit downstream, auditing or correcting decisions that were already shaped by predictive filters. By the time causality shows up, measurement has often already narrowed the field.
So I’m not arguing that causal AI doesn’t work. I’m arguing that reality, as it’s currently mediated, is still being filtered first by prediction. That gap between what exists and what actually governs is where the risk lives right now.
Thank you for your insight and great research.
Thank you very much Mary, this means the world to me!
This is truly scary.
Hi Ronald! The intention was not to scare you, but I am glad it resonated with you in some way.
You don't scare me, but the implications of what you reveal are scary, to me. Here is my definition of reality which, if true enough, will be subverted by what you predict or hypothesize:
Reality = The sum (Σ) of responses to the perceived Universe (including one's "self") in and by all Sentient Entities, everywhere, at any given instant.
I think that’s a very interesting definition of reality! Could you expand on what you mean by being subverted by my hypothesis?
If I get it correctly enough, "the perceived Universe" will be curated by the 'bots, thereby limiting the totality that can be perceived, at least in part.
I think that’s about right as a quick summary. As you say, the universe as we perceive it ends up limited in scope by the ’bots’ curation of what can be perceived.