I for one think LLMs are more intelligent than an ant. The writer of this piece is using the movie definition of AI instead of the real-world definition of AI.
I liked this article, and I think a lot of the commenters here are missing that the general public is treating LLMs as AGI. I spend a whole 5-10 minutes on why this is whenever I present about LLMs.
“The I in LLM stands for Intelligence” is a joke I read (and include in my presentation to hammer the point home). Laymen have no idea what AI or LLMs are, but they expect it to work similarly to human intelligence, since that’s the only model they know, and are surprised to learn it doesn’t work that way.
Edit: Forgot what I came to the comments to post, before I read everyone else’s complaints about this, lol.
A small correction: the Air Canada example wasn’t an LLM, it was just an old “dumb” chatbot that was likely sharing outdated policies.
“…AI” concerns me. I use quotation marks there because what is often referred to as AI today is not whatsoever what the term once described.
Lost me right there. Not only was and is this AI, but the term gets narrower over time, not broader. If you want to go by “what the term once described,” you have to include computer vision, text to speech, optical character recognition, behavior trees for video game enemies, etc etc etc.
When I see people complain about calling LLMs “AI,” I think the only definition that would satisfy them is “things computers can do that we aren’t used to yet.”
I think it’s less that LLMs are drunk and more that ostensibly sober people put them behind the wheel, fully aware of how drunk they are, while telling everyone they’re stone-cold sober.
I use quotation marks there because what is often referred to as AI today is not whatsoever what the term once described.
The field of AI has been around for decades and covers a wide range of technologies, many of them much “simpler” than the current crop of generative AI. What is often referred to as AI today is absolutely what the term once described, and still does describe.
What people seem to be conflating is the general term “AI” and the more specific “AGI”, or Artificial General Intelligence. AGI is the stuff you see on Star Trek. Nobody is claiming that current LLMs are AGI, though they may be a significant step along the way to that.
I may be sounding nitpicky here, but this is the fundamental issue the article is complaining about. People are not well educated about what AI actually is and what it’s good at. It’s good at a huge amount of stuff, it’s really revolutionary, but it’s not good at everything. It’s not the fault of AI when people fail to grasp that, any more than it’s the fault of the car when someone gets into it and is annoyed it won’t take them to the Moon.
People are not well educated about what AI actually is and what it’s good at.
And half the reason they’re not educated about it is that AI companies are actively and intentionally misinforming them. AI companies sell people these products using words like “thinking”, “assessing”, “reasoning”, and “learning”, none of which accurately describe current AI, though they would describe AGI.
AGI is the stuff you see on Star Trek.
Clarification: AGI describes Data, Moriarty, and Peanut Hamper, but it doesn’t describe the Enterprise’s computer, which has speech recognition but is less intelligent than an LLM.
I didn’t say that everything in Star Trek was AGI, just that you can find examples there.
The problem is that the average person and politician don’t know this distinction, and are running around as if Skynet is about to kick off any second.