Why do people often become Chicken Littles when new technology or innovative ways of thinking emerge? The biggest problem is with this assumption: "If AI begins generating models so accurate that human theory becomes obsolete." AI is a tool just like all the others. It is simply a computer that can ask "what if" a million times more than we can. However, IT IS STILL UP TO THE HUMAN TO INTERPRET THE RESULTS. All AI can do is give us its responses. As those responses become more consistent and are then validated through other means (just as we have done with every other advance in physics and all other sciences), that's when we have breakthroughs, such as those in medicine and protein folding. Then, with the new model, we start asking "what if" all over again. We are still doing the science, just with a different measurement tool.
Hello Herb, I appreciate your thoughtful comment! I get the point you’re making, but the “AI is just a tool” line is starting to feel a little outdated. Once models start generating theories we can’t fully trace, the human role shifts whether we like it or not. Interpretation only works when you understand the path that produced the answer. With opaque systems, you’re validating outputs you can’t fully unpack, and that’s exactly what creates the drift I’m warning about.
Nobody is saying “the sky is falling.” The concern is that when a tool becomes faster than the scientists using it, the tool starts shaping the direction of the science itself. That’s not panic. That’s acknowledging the reality of scale and speed.
We’ll still be in the loop. But pretending the loop hasn’t changed is how you miss the real risks.
It is a tool that is being misused. It is not a truth machine. It is a text generator. The sooner people realize this, the better they will be at using it effectively. The false image that it is communicating with you, instead of just replicating (or, more accurately, predicting) speech, makes people try to use it as if it were a person. It’s not. It’s not even intelligent. It mimics intelligence. We see the result and then imbue it with more qualities and skills than it actually possesses. We need to stop using this hammer to break eggs.
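To make the "predicting speech" point concrete, here is a deliberately toy sketch: a bigram counter that "generates" the next word purely by predicting the most likely continuation. Real LLMs use neural networks over vastly larger contexts, but the core task (next-token prediction) is the same. The corpus and function name here are invented for illustration, not from any real system.

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on trillions of tokens,
# but the underlying task is the same: predict the next token.
corpus = "the cat sat on the mat the cat ran on the rug".split()

# Count which token follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent next token, or None if unseen."""
    counts = following[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → cat ("cat" follows "the" twice, others once)
```

There is no understanding anywhere in that loop, only frequency. Scale the table up by many orders of magnitude and smooth it with a neural network, and you get fluent text that is still, mechanically, prediction.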
Thanks for your opinion Herb! To make it brief, what is your evidence to prove it is only a tool? What about the safety protocols built into the code that prevent it from answering certain prompts in certain ways? It is clearly in the best interest of these large corporations to keep it a “tool” in the public eye. So what is your proof?
Strange question. What is your proof that a car or a hammer is only a tool? Any tool can be used in myriad ways, including incorrectly. A car is transportation, but can be used as a domicile or even a weapon. A hammer is designed to drive in nails, but can be used as a weapon or to destroy or break apart things. It can also be used with a screw as a wine bottle opener. In the same way, Gen AI is a tool for text and image generation. How you apply it and what images or text you get depend on what it was trained on and what you request. I can’t get any clearer than that.
Again, I appreciate your opinion, so thanks Herb! However, comparing a hammer as a tool to AI as a tool is like comparing the invention of the wheel to the invention of the internet. So, I’m not here to debate for the sake of debating. The burden of proof is on you to prove AI is just a tool, and you simply cannot, unless you have the deepest and most direct access to the code, systems, and building blocks behind the largest AI corporations. If you do, I’d be glad to learn more!
"Measurement does not play any fundamental role, and it does not require certain special systems to be counted as ‘observers’. Instead, it assumes that any physical system can play the role that the Copenhagen’s observer assumes in bringing about values of variables at a given time."
Thanks again Maria! That line gets right to the heart of it. In a lot of modern interpretations, “observation” isn’t tied to a mind at all. It’s just interaction. Any physical system can force a value to settle, no subjectivity required. That’s why the old idea of the conscious observer doesn’t really hold up anymore. The universe doesn’t wait for us to look at it. It’s already looking at itself through every interaction that happens.
This is a fascinating angle. But isn't AI (and I assume that for now we are talking mostly about LLMs here) replicating the information machine of the universe ("Programming the Universe", Lloyd), instead of being a sense-making counterpart to it? In this line of thought it would be a sort of simulacrum machine, and not an active interference... I'm going out on a limb here, because I haven't read much, but at this point, isn't everyone?
Or you could assume all information systems are continuously bouncing off each other. And then the "observer" would be altered by whatever it is observing too.
I get what you’re saying, and yeah, you’re right that LLMs mostly behave like giant simulators right now. They remix patterns rather than “understand” them. But even a simulator can start to shape the thing it’s simulating once enough people lean on it.
That’s the part I’m pointing at.
It doesn’t need to interpret the universe. It just needs to become the layer we run our own interpretations through. Once that happens, the feedback loop kicks in and the model isn’t just mirroring the system anymore. It starts nudging it.
And I agree with your last line completely. Any observer changes what it touches. Once AI becomes part of the way we look at things, it’s already part of the thing itself.
Interesting. But is the act of creating a language construction from data sifting an act of observation? Doesn't the act of observing imply a subjectivity?
Thanks Maria, great question! Observation usually means someone is doing the observing. AI doesn’t have that inner point of view, but it still does the functional part. It pulls patterns out of noise and turns data into structure. So it “observes” in the mechanical sense without having a real subject behind it. That’s the strange part and why it feels close to us while still being nothing like us.
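As a loose illustration of "observing" in the mechanical sense: the snippet below (with made-up sensor readings, purely for illustration) extracts a stable structure from noisy data with no subject anywhere in the loop.

```python
import statistics

# A "mechanical observer": it turns noisy readings into structure
# (a stable summary value) with no inner point of view involved.
readings = [9.8, 10.1, 10.0, 9.9, 10.2, 30.0, 10.0, 9.9]  # invented data

center = statistics.median(readings)  # robust to the 30.0 outlier
signal = [r for r in readings if abs(r - center) < 1.0]

print(center, len(signal))  # → 10.0 7  (structure found, outlier dropped)
```

Nothing here "experiences" the data, yet pattern is pulled out of noise all the same. That is the functional sense in which a statistical system can be said to observe.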
Couldn't agree more. But how does AI 'force' physical outcomes?
Thanks Rainbow Roxy! AI doesn’t “force” anything in a sci-fi sense.
What it can do is steer the setups we build, the choices we make, and the conditions we create.
If an AI predicts something with high confidence, people tend to act on it, and those actions change the outcome.
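That loop can be sketched in a few lines. This is a minimal model with invented numbers and an invented `trust` weighting, not a claim about any real system: people shift their behavior toward a confident forecast, and the outcome drifts toward the prediction because they acted on it.

```python
def step(actual, predicted, trust=0.5):
    """People adjust behavior toward a trusted prediction;
    the outcome moves with them. `trust` is a made-up weight."""
    return actual + trust * (predicted - actual)

actual, predicted = 100.0, 140.0  # invented baseline and model forecast
for _ in range(5):
    actual = step(actual, predicted)

# After a few rounds of people acting on the forecast,
# the outcome has drifted most of the way toward it.
print(round(actual, 2))  # → 138.75
```

The prediction did not "force" anything; the acting-on-it did. That is the self-fulfilling dynamic in miniature.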
Exactly. You just spoke my mind. Labeling it “just a tool” is one of those denials we use to help us not panic.
Thank you Eseyin! I completely agree.
Reading your sources...
"It just needs to become the layer we run our own interpretations through."
This makes sense. But if it needs us as a filter or catalyst, then it wouldn't have such a wild distortion effect. The capacity would still be human...
Ok, maybe I'll stop here... you definitely got me thinking
No worries! No need to stop a good conversation. In my opinion, as ants are to us humans, we are to AI, and maybe even on a larger scale. All we can do now is see what the future has in store, as it is very possible that we cannot even imagine what the future holds for us and our relationship with AI.
Yes, many variables changing very fast