The concept of perceptual drift is unsettling. If we're outsourcing expectation to systems that predict faster than we can, we're not just changing how we think but potentially how we perceive. The speed mismatch alone could reshape cognition in ways we won't recognize until it's already happened.
Thanks for the comment, Neural! You’re not wrong; the speed difference matters far more than people admit. When an AI jumps in faster than you can even finish the thought, your mind starts leaning on it without meaning to. That’s where the drift comes from. It’s slow, and you barely feel it happening, but it adds up.
It’s something we should actually keep an eye on.
This unfortunately makes a lot of sense. I recently experienced this rather acutely by relying on ChatGPT, and then trying Claude, to process some difficult emotions. Claude was especially leading me down a bad path, despite being helpful in some ways. I already have the feeling described in the article, where it’s becoming difficult to make sense of reality without relying on AI’s interpretation.
Thanks for the comment, Hank! I hear you. When you’re already going through something heavy, jumping between different AIs can really scramble things. Each one gives a slightly different take, and after a while it’s hard to tell which thoughts are yours and which ones you picked up from the model.
That feeling you described, losing your own sense of what’s happening, is exactly the risk I was trying to point out. AI can help, but it shouldn’t be the place you lean on before you’ve checked in with yourself. Your own read on your emotions still has to come first.
This is fascinating. And it is correct. We know this because of the way the widespread dissemination of texts brought about by the printing press changed human thought. It won’t be the same. On the other hand, it produced Shakespeare. I discuss this in https://open.substack.com/pub/simulistephimera/p/why-oxford-has-made-the-most-important?r=4zp6qv&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
Thanks for sharing, John! Totally, big shifts in communication always change how we think. The printing press rewired people, and we still got Shakespeare out of it. So yeah, it won’t be the same, but different doesn’t automatically mean worse.
It usually means a bit of both :-)
I’m trying to square this with Kahneman’s theory of thinking, fast and slow. I think this implies our slower, more logical thinking is at risk of being supplanted. And that’s the thinking we have more “control” over, I believe.
Given how inaccurate LLMs can be, it is unsettling to think that people would see speed as a predictive, substitute variable for quality of thought.
In a weird way, it’s one of the reasons why I’ve always been unwilling to engage with only one LLM.
Thank you very much, Mike! You’re right, this lines up with Kahneman in a pretty uncomfortable way. If people start treating speed as “good thinking,” then the slower, intentional part of the mind gets pushed aside. And with how often LLMs get things wrong, that’s a bad trade.
And yeah, using more than one model is smart. It forces you to pause and compare instead of just grabbing the first fast answer. That little bit of friction keeps you in the process.