Nobody Knows Anymore
Decision Making in the Age of AI
People keep asking whether AI will make decisions for us.
That question is already outdated.
What is happening now is subtler and more destabilizing. Decisions are still technically being made by humans, but they no longer feel like decisions. They feel like defaults. Suggestions. Preselected paths. Quiet nudges that are easier to accept than to resist.
Nothing forces you to follow an algorithm.
It just becomes inconvenient not to.
That is how choice starts to disappear without ever being taken away.
From Decisions to Defaults
In the past, making a decision meant friction. You had to compare options. Ask someone. Take responsibility. If things went wrong, you knew which choice you made.
AI systems remove that friction. They rank. Recommend. Pre-fill. Predict what you will probably want. Most of the time, they are right enough that pushing back feels unnecessary.
So people stop deciding.
They select what is already selected.
Researchers studying choice architecture have shown that defaults dramatically shape behavior even when alternatives remain available (Thaler and Sunstein 2008). AI turns this effect into a constant background condition.
The choice still exists on paper.
Psychologically, it fades.
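The default effect can be made concrete with a toy simulation. Everything here is invented for illustration (the `simulate_defaults` function and its parameters are not drawn from the cited research): each agent keeps the preselected option unless the perceived gain from the alternative outweighs the friction of switching.

```python
import random

def simulate_defaults(n_agents=10_000, gain=0.1, switch_cost=0.5,
                      noise=1.0, seed=0):
    """Toy model of default stickiness: an agent overrides the preselected
    option only when the perceived gain from the alternative exceeds the
    friction of switching. Perception is noisy (imperfect comparison)."""
    rng = random.Random(seed)
    switched = sum(
        1 for _ in range(n_agents)
        if gain + rng.gauss(0, noise) > switch_cost
    )
    return switched / n_agents  # fraction that overrides the default
```

Even though the alternative is strictly better (`gain > 0`), most agents stay with the default whenever switching costs anything; set `switch_cost` to zero and the majority switches. The numbers are arbitrary; only the shape of the effect matters.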
Optimization Replaces Intention
AI systems do not care about meaning. They optimize for engagement, efficiency, satisfaction, or risk reduction. When those goals align with human intention, everything feels fine.
When they do not, intention quietly loses.
Recommendation systems already guide what people watch, read, buy, and listen to. Navigation systems decide which streets feel usable. Scheduling algorithms shape when people work and rest. Hiring filters decide which futures even get considered.
None of this feels authoritarian.
It feels convenient.
But convenience is not neutral. It slowly shifts agency from the person making the choice to the system shaping the environment in which the choice appears.
Philosophers of technology have warned that systems which optimize outcomes tend to erode the space where reflection used to live (Winner 1980).
AI does this at scale.
The Illusion of Control
Most people believe they are still in charge because they can override the system. You can ignore the recommendation. You can take a different route. You can write your own text.
But behavioral studies show that trust in automated advice tracks perceived accuracy: people defer to systems that seem reliable and abandon them only after visible errors (Dietvorst et al. 2015). The cost of resistance grows. The reward shrinks.
Over time, people internalize the system’s suggestions as their own preferences.
This is not mind control.
It is preference drift.
And once preferences drift, the idea of choice becomes harder to define.
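Preference drift can be sketched as a minimal feedback loop. This is an invented exponential-smoothing model, not a description of any real recommender: each round, the user's expressed preference concedes a small fraction toward whatever the system keeps recommending.

```python
def preference_drift(initial_pref=0.2, recommended=0.8,
                     adoption=0.05, rounds=60):
    """Toy feedback loop: each round the expressed preference moves a
    small fraction `adoption` toward the standing recommendation.
    Returns the full trajectory, from original taste to drifted taste."""
    pref = initial_pref
    trajectory = [pref]
    for _ in range(rounds):
        pref += adoption * (recommended - pref)  # small, easy concession
        trajectory.append(pref)
    return trajectory
```

No single step feels like capitulation, since each round moves the preference by only a few percent of the remaining gap; yet after enough rounds the expressed preference sits near the recommendation rather than where it started.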
When Responsibility Gets Blurry
Here is where the problem turns serious.
Responsibility depends on clear decisions. If something goes wrong, someone chose it. Someone can be held accountable.
But in AI-mediated environments, outcomes emerge from layers of optimization. A person followed a recommendation. A company deployed a model. A model learned from prior behavior. The system did what it was designed to do.
When harm occurs, no one feels fully responsible.
Legal scholars already warn that algorithmic systems create accountability gaps where outcomes exist without clear decision makers (Pasquale 2015).
This is not a future problem.
It is already visible in finance, content moderation, hiring, and healthcare triage.
Politics Without Choices
Democracy assumes meaningful choice. Parties offer platforms. Voters compare futures. Losers accept outcomes because the process felt legitimate.
AI changes this dynamic.
When political messaging is personalized, optimized, and continuously adjusted, collective choice fragments. Each voter receives a different version of reality. There is no single decision point. No shared moment of choice.
Researchers studying algorithmic governance note that personalization weakens the idea of a public decision altogether (Sunstein 2017).
Politics becomes behavioral management rather than deliberation.
Votes still happen.
Choice thins.
Life on Autopilot
The most unsettling part is how normal this feels.
People already live inside systems that anticipate their needs. Music plays without being chosen. Feeds refresh without being requested. Calendars fill themselves. Responses are suggested before thoughts fully form.
Over time, agency starts to feel like effort.
Effort starts to feel unnecessary.
Unnecessary starts to feel irrational.
This is how autonomy erodes without resistance.
Not through force.
Through comfort.
What We Are Losing Without Noticing
Choice is not just about outcomes. It is about authorship. About feeling that a life is being actively lived rather than smoothly guided.
Psychologists have long linked autonomy to well-being and meaning (Deci and Ryan 2000). When decisions become ambient, meaning weakens even if satisfaction remains high.
You get what you want.
You just stop knowing why you wanted it.
That is not freedom lost.
It is freedom dissolving.
AI is not taking away our ability to choose. It is making choice fade into the background.
Decisions still happen. Outcomes still arrive. But fewer moments feel owned. Fewer paths feel consciously chosen. Life starts to feel optimized rather than lived.
This is not a dystopia.
It is something quieter.
A world where nothing decides anymore because everything is already decided just enough to feel acceptable.
And the danger is not that machines will choose for us.
It is that we will stop noticing when we no longer do.
References
Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268.
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126.
Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
Sunstein, C. R. (2017). #Republic: Divided Democracy in the Age of Social Media. Princeton University Press.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press.
Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136.





Exceptional framing on preference drift. The argument that AI systems don't take choice but make it fade is exactly the nuance missing from most takes. The accountability gap piece is critical too; we're already seeing this in hiring, where outcomes emerge from stacked optimizations and nobody feels fully responsible. One edge case I've noticed is when defaults actually force more deliberate choice by making friction obvious, like when transparency about recommendation logic makes people skeptical instead of trusting.