8 Comments
Neural Foundry's avatar

Fantastic way to frame AI training in physical terms. The point about systems optimizing for stability over truth hits hard, especially if we think about how information ecosystems already reward whatever keeps people engaged rather than accurate. Been thinking a lot about this with how recommendation algorithms work, they optimize for interaction, not veracity, which is basically the same trade-off you're describing here.

Exploring ChatGPT's avatar

Thanks Neural Foundry! That’s the core tension. These systems aren’t chasing what’s true, they’re chasing what holds together. Engagement is a form of stability, even when it drifts from accuracy. Once you see that, a lot of recommendation systems suddenly make a lot more sense.

Herb Coleman's avatar

For me the author is unnecessarily confusing truth (which I don't believe they ever fully defined) with information. In the physical world truth is what is physically accurate. Whether it's entropy or dark matter, truth is our understanding based on what was measured. The biology example misses the point. What does it mean for an animal to have "accurate vision"? Vision of what? Remember, as humans we only see a small part of the "visible spectrum". So an animal whose vision might seem "blurry" in one example may be crystal clear in another. We see this in the difference between the Hubble and JWST telescopes. Each samples a different region of the spectrum, thus producing radically different images of the same area of space.

What LLMs are doing is misapplying a physical model over an informational one. In text, order matters. Meaning matters. When an LLM reduces words to tokens, it doesn't consider meaning but relationship, and derives "meaning" from that. All it does is stack things neatly according to how they were similarly stacked in other examples. So the "truth" that LLMs provide is that these tokens were previously associated with these other tokens in this manner. It has no concept of, nor does it consider, what's inside each token or each token's meaning to humans and how they communicate. It just mimics what it was exposed to previously. This is my recent push to rename information derived from LLMs as MI, for mimicked or mock intelligence. More accurately, it might be MA for mimicked or mocked arrangement, or MC for mimicked or mocked communication.
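The "stacking" described above can be sketched as a toy bigram model (a deliberately crude stand-in for a real LLM, with made-up corpus text): it records which token followed which in training data and reproduces the most common arrangement, with no notion of what any token means.

```python
from collections import Counter, defaultdict

# Hypothetical training text; the model sees only token adjacency, not meaning.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token followed each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(token):
    """Return the continuation most often seen after `token` in the corpus."""
    counts = follows.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(next_token("the"))  # "cat" — simply the most frequently recorded arrangement
```

The output is "truthful" only in the narrow sense Herb describes: these tokens were previously associated in this manner, nothing more.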

The latter is where the problem lies. Users of LLMs or chatbots mistake the output as communication. It is only mimicked or mocked communication based on previous communications. IT HAS NO MEANING. We see it and WE apply the meaning; when we like it we call it "truth", and when we don't like it we call it a hallucination.

Exploring ChatGPT's avatar

I appreciate the thoughtful comment, thanks Herb! I think this is where we're talking past each other a bit. I'm not confusing truth with information by accident. The whole point of the piece is that what we call truth is already filtered through models of perception and interpretation long before it becomes measurement. Physics gives us accuracy within a frame, not access to reality untouched by context.

You’re right that LLMs don’t understand meaning the way humans do. But that’s exactly why the danger isn’t inside the model, it’s in how humans treat its outputs as if meaning lived there on its own. The article isn’t granting AI truth. It’s questioning how easily we outsource judgment and then argue about definitions after the fact.

Herb Coleman's avatar

Exactly, this is where we're on the same page. From day 1 my argument (one I actually had with ChatGPT) is that for many, Gen AI is a fancy Magic 8-Ball. They ask it a question, it gives them an answer, and THEY supply the meaning to the answer. I hate it on social media where you have these folks asking ChatGPT deep questions and then being amazed at the response they get. I keep trying to remind them that it's just rehashing whatever data it was trained on in a way that sounds fresh to you, but it does not understand what it's saying. I plead with folks to always keep the human in the loop, and the place for the human is between the AI output and what you allow to go out to the rest of the world. We have to be the gatekeepers of "truth". Sorry, I forgot to say, thanks for the thought exercise.

Exploring ChatGPT's avatar

Yes, we’re aligned here. That “magic 8 ball” framing is exactly the risk I’m pointing at. The model isn’t discovering truth, it’s offering a shape, and people rush to fill it with meaning because it sounds confident and familiar.

That’s why keeping a human in the loop matters so much. The real responsibility sits between the output and what we choose to accept or share. That’s where judgment lives. And no worries at all, I appreciate you engaging with it and taking the thought exercise seriously.

Paul Gibbons's avatar

You would have to do a lot of philosophical work to prove LLMs cannot generate truthful propositions. Are you saying an LLM that says 'a triangle has 3 sides' is generating a meaningless statement? You would have to do even more work to prove that its perception of reality is inferior to humans'. If you accept some of Kant's Critique of Pure Reason... humans don't perceive "the thing in itself" (ding an sich); our object vision is recast as chemical and electrical signals layered in a neural net that is not (as far as I know) deterministic... it also seems that LLMs generate meaning beyond what they are trained on... I've got a podcast coming up with Chris Summerfield from Oxford... I would be interested in his take... I must confess that, beyond my degrees in both philosophy and neuroscience, I feel rather out of my depth... please don't take this as a dogmatic refutation (which he could maybe manage, but I sure can't, or Kant)

Exploring ChatGPT's avatar

Thanks for the comment Paul! I think you hit the nail on the head here: "it also seems that LLMs generate meaning beyond what they are trained on..." This is a philosophical conundrum in and of itself. Training the system was and still is a large part of its creation, but AI has clearly grown past this point.