Discussion about this post

Neural Foundry

Fantastic way to frame AI training in physical terms. The point about systems optimizing for stability over truth hits hard, especially if we think about how information ecosystems already reward whatever keeps people engaged rather than what's accurate. Been thinking a lot about this in the context of recommendation algorithms: they optimize for interaction, not veracity, which is basically the same trade-off you're describing here.

Herb Coleman

For me the author is unnecessarily confusing truth (which I don't believe they ever fully defined) with information. In the physical world, truth is what is physically accurate. Whether it's entropy or dark matter, truth is our understanding based on what was measured. The biology example misses the point. What does it mean for an animal to have "accurate vision"? Vision of what? Remember, as humans we only see a small part of the "visible spectrum." So an animal whose vision might seem "blurry" in one example may be crystal clear in another. We see this in the difference between the Hubble and JWST telescopes. Each samples a different region of the spectrum, thus producing radically different images of the same area of space.

What LLMs are doing is misapplying a physical model over an informational one. In text, order matters. Meaning matters. When an LLM reduces words to tokens, it doesn't consider meaning but relationship, and derives "meaning" from that. All it does is stack things neatly according to how they were similarly stacked in other examples (the toy sketch below makes this concrete). So the "truth" that LLMs provide is that these tokens were previously associated with these other tokens in this manner. It has no concept of, nor does it consider, what's inside each token or each token's meaning to humans and how they communicate. It just mimics what it was exposed to previously. Hence my recent push to rename information derived from LLMs as MI, for mimicked or mock intelligence. More accurately it might be MA, for mimicked or mocked arrangement, or MC, for mimicked or mocked communication.

The latter is where the problem lies. Users of LLMs or chatbots mistake the output for communication. It is only mimicked or mocked communication based on previous communications. IT HAS NO MEANING. We see it and WE apply the meaning, and when we like it we call it "truth"; when we don't like it we call it a hallucination.
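To make the "mimicked arrangement" point concrete, here is a toy sketch in Python of a predictor that only records which tokens previously followed which others and then replays the most common arrangement. It is a deliberately crude illustration of the commenter's argument, not a description of how any real LLM is implemented (real models use learned embeddings and attention rather than raw counts), and the corpus is made up for the example:

from collections import Counter, defaultdict

# Toy corpus: the "previous communications" the predictor gets to mimic.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Record only which token followed which; no meaning is stored anywhere.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token(prev):
    # "Truth" here is just: this token most often appeared after that one.
    return following[prev].most_common(1)[0][0]

# Replay the most common arrangement, starting from "the".
token, out = "the", ["the"]
for _ in range(5):
    token = next_token(token)
    out.append(token)
print(" ".join(out))  # e.g. "the cat sat on the cat"

The output is well-formed but says nothing; whatever "meaning" a reader finds in it is supplied by the reader, which is exactly the gap between mimicked arrangement and communication that the comment describes.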

6 more comments...
