The Geometry of Forgetting
Designing AI That Learns by Erasure
Most of artificial intelligence is built around accumulation. More data, more parameters, more memory. But in living systems, forgetting is as essential as remembering. Brains prune synapses, cultures let traditions fade, ecosystems shed species. Forgetting is not failure. It is a geometry of erasure that makes space for adaptation. What if AI were designed not only to learn by adding but to evolve by subtracting?
Biological Lessons
Neuroscience shows that synaptic pruning is crucial for development. Children are born with a surplus of connections that are gradually eliminated, sharpening perception and cognition (Huttenlocher & Dabholkar, 1997). Immune systems also “forget,” discarding unhelpful antibodies to refine responses. Even gene-regulatory networks silence large stretches of the genome so organisms can remain stable while adapting. Forgetting is not an accident. It is a structural feature of intelligence.
Toward an AI of Erasure
Imagine AI systems that deliberately lose information in structured ways. Instead of catastrophic forgetting, they would engage in purposeful forgetting: removing redundancies, erasing outdated pathways, and even discarding memories that bias future reasoning. The act of erasure would become a form of computation.
Such systems might resemble sculptors more than archivists. Intelligence would not be a pile of stored data but a shifting architecture carved by subtraction.
A Speculative Experiment
One way to test this would be to design a reinforcement learning agent with controlled erasure cycles. After each learning episode, the system would be forced to eliminate a percentage of weights or memory traces, not at random but guided by an erasure criterion, such as weight magnitude or recency of use. The experiment would ask: does the agent become more flexible when it must survive through loss? Can forgetting act as a driver of generalization rather than a weakness to avoid?
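A single erasure cycle could be sketched as magnitude pruning over an agent's weight vector. The function name, the pruning fraction, and the choice of magnitude as the erasure criterion are all illustrative assumptions, not a worked-out method:

```python
import numpy as np

def erasure_cycle(weights: np.ndarray, fraction: float = 0.1) -> np.ndarray:
    """Zero out the smallest-magnitude `fraction` of weights.

    Magnitude is used here as the erasure criterion; recency of use or
    gradient contribution could be substituted without changing the loop.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * fraction)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k)[k]  # k-th smallest magnitude
    return np.where(np.abs(weights) < threshold, 0.0, weights)

# After each learning episode, the agent would survive a forced loss:
# weights = erasure_cycle(weights, fraction=0.1)
```

The experiment would then compare this agent's generalization against an identical agent that never runs the cycle.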
A second approach could be cultural: building multi-agent systems that exchange memories but also erase them according to shared rules. In such environments, what persists is not raw data but evolving traditions of memory, always partial, always provisional.
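One hypothetical shared rule for such a community: agents copy random memories from peers, then collectively drop any memory held by fewer than a quorum of agents. The quorum rule, the exchange step, and the names are illustrative assumptions, sketched here to show how a "tradition" of memory could persist while individual items stay provisional:

```python
import random
from collections import Counter

def renewal_cycle(agent_memories, quorum=2):
    """One cycle of exchange and shared-rule erasure.

    Exchange: each agent copies one random memory from a random peer.
    Erasure: any memory held by fewer than `quorum` agents afterwards
    is dropped everywhere, so only collectively reinforced items persist.
    """
    for i, mem in enumerate(agent_memories):
        peers = [m for j, m in enumerate(agent_memories) if j != i and m]
        if peers:
            mem.add(random.choice(sorted(random.choice(peers))))
    counts = Counter(item for mem in agent_memories for item in mem)
    return [{item for item in mem if counts[item] >= quorum}
            for mem in agent_memories]
```

Under this rule, a memory survives only while enough agents keep carrying it; what endures is consensus, not any one agent's archive.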
Planetary Forgetting as Intelligence
Forgetting is not confined to brains or cultures. The Earth itself forgets. Glaciers scrape valleys clean, erasing older landscapes while preparing new ones. Forest fires burn away accumulated matter, resetting ecosystems so they can grow again in healthier balance. Desert winds bury entire cities beneath dunes, forcing new beginnings in regions once densely inhabited. These acts of natural forgetting are not failures of the planet but processes of renewal.
Artificial intelligence could learn from this planetary geometry of erasure. Imagine large-scale models that periodically wipe whole layers of learned patterns, the way glaciers erase valleys, leaving behind only the most resilient structures. Or consider AI systems designed to “burn off” outdated correlations, the way fires reset forests, allowing new pathways of generalization to emerge. Forgetting on this scale would not be about preserving every detail, but about clearing space for the emergence of new forms.
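A glacier-style wipe could be sketched as reinitializing one layer of a weight stack while leaving the others intact. The function, the reset scale, and the per-layer schedule are hypothetical placeholders for whatever a real system would use:

```python
import numpy as np

def glacial_reset(layers, layer_idx, rng):
    """Wipe one layer of a weight stack back to a small random init,
    leaving the learned structure of the other layers intact.

    The 0.01 init scale and the choice of which layer to wipe (and when)
    are assumptions; a real schedule might wipe periodically or on drift.
    """
    wiped = [w.copy() for w in layers]
    wiped[layer_idx] = 0.01 * rng.standard_normal(wiped[layer_idx].shape)
    return wiped
```

The surviving layers play the role of the resilient structures left behind after the glacier passes; retraining after the wipe would test whether new, more general pathways grow into the cleared space.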
Broader Implications
If such models succeed, the consequences could be wide-ranging. In medicine, diagnostic AI could “forget” outdated clinical guidelines as new data emerges, reducing harmful inertia. In economics, forecasting systems could shed obsolete assumptions instead of embedding them forever in their models. In culture, creative AI might generate fresh styles by discarding old patterns as actively as it invents new ones.
Philosophically, this reframes intelligence as a dance of presence and absence. A machine that forgets well might be closer to life than one that remembers everything.
Forgetting as an Alignment Mechanism
One of the central risks in building long-lived AI systems is value lock-in. If a system crystallizes around a particular set of goals or assumptions too early, it may carry them forward indefinitely, even when those values become harmful or obsolete (Bostrom, 2014). Forgetting, if properly structured, could act as a release valve against this rigidity.
Instead of preserving every learned weight or decision rule, an aligned system would periodically erase parts of itself. This would not be random deletion but guided forgetting, designed to weaken attachments to biases, brittle heuristics, or culturally outdated norms. Much as forest fires reset ecosystems and glaciers reshape landscapes, erasure would prevent the hardening of values into immovable stone.
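Guided forgetting of values could be sketched as decay toward a neutral prior, with a per-value "rigidity" score, a hypothetical measure of how entrenched or brittle each commitment is, controlling how fast it relaxes:

```python
import numpy as np

def guided_forgetting(values, prior, rigidity, rate=0.1):
    """Relax learned value weights toward a neutral prior.

    `rigidity` is a hypothetical per-value brittleness score in [0, 1]:
    the more entrenched a commitment, the faster it is loosened. How such
    a score would be estimated is left open here.
    """
    decay = rate * rigidity
    return (1.0 - decay) * values + decay * prior
```

Values the system holds lightly are left almost untouched, while hardened ones are steadily pulled back toward neutrality, the release valve the paragraph above describes.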
Speculatively, this could be tested in multi-agent alignment environments. Imagine a community of AI systems, each with partial memory, sharing fragments of knowledge but also erasing them according to cycles of renewal. Alignment would not rest on permanence but on balance, where no single bias or harmful goal could dominate across generations.
Such a design could allow future AI systems to grow alongside humanity, evolving with our shifting ethics rather than freezing them at a single moment in time. Forgetting, paradoxically, may be the only way to keep intelligence open to the future.
We often fear forgetting because we equate it with loss. Yet erasure is how systems remain alive. By designing AI that learns through subtraction, we may discover that true intelligence is not the endless hoarding of signals but the courage to let go.
References
Huttenlocher, P. R., & Dabholkar, A. S. (1997). Regional differences in synaptogenesis in human cerebral cortex. Journal of Comparative Neurology, 387(2), 167–178.
Turner, M. G. (2010). Disturbance and landscape dynamics in a changing world. Ecology, 91(10), 2833–2849.
Clark, P. U., Alley, R. B., & Pollard, D. (1999). Northern Hemisphere ice-sheet influences on global climate change. Science, 286(5442), 1104–1111.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.