7 points

That’s the thing about LLMs, though. They don’t understand anything at all; they just say things that sound coherent. It turns out that if you always say whatever seems like the most reasonable response, you can get pretty close to simulating understanding in some scenarios. But even though these newer language models are quite good at some things, they are no closer to understanding or conceptualizing anything.

4 points

I was using the term “understand” as shorthand for “trained on content containing this” and “given enough context about what is being asked.”
