LLMs have a perfect track record of doing exactly what they were designed to do: take an input and produce a plausible output that looks like it was written by a human.
They just completely lack the part in the middle that actually understands the input and makes sure the output is factually correct - because if they had that part, they wouldn’t be LLMs any more, they’d be AGI.
The “artificial” in AI also carries the sense of “fake”: something that looks and feels intelligent, but actually isn’t.