I saw people complaining that companies have yet to find the next big thing with AI, but I am already seeing countless products offering good solutions for almost every field imaginable. What is this thing the tech industry is waiting for, and what are all these current products if not what they had in mind?
I am not great at understanding the business point of view of this situation and I have been out of the loop on the news for a long time, so I would really appreciate it if someone could ELI5.
Here’s a secret. It’s not true AI. All the hype is marketing shit.
Large language models like GPT, Llama, and Gemini don’t create anything new. They just regurgitate existing data.
You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept that they are wrong.
Until an LLM can understand why it is wrong, we won’t have true AI.
It is true AI, it’s just not AGI. Artificial General Intelligence is the sort of thing you see on Star Trek. AI is a much broader term and it encompasses large language models, as well as even simpler things like pathfinding algorithms or OCR. The term “AI” has been in use for this kind of thing since 1956, it’s not some sudden new marketing buzzword that’s being misapplied. Indeed, it’s the people who are insisting that LLMs are not AI that are attempting to redefine a word that’s already been in use for a very long time.
You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept that they are wrong.
Reminds me of the classic quote from Charles Babbage:
“On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”
How is the chatbot supposed to know that the information it’s been given is wrong?
If you were talking with a human and they thought something was true that wasn’t actually true, do you not count them as an intelligence any more?
That’s not a secret. The industry constantly talks about the difference between LLMs and AGI.
Until a product goes through marketing and they slap ‘Using AI’ into the blurb when it doesn’t actually use any.
LLMs are AI. They are not AGI. AGI is a particular subset of AI; that does not preclude non-general AI from being AI.
People keep talking about how it just regurgitates information, says incorrect things sometimes, and hallucinates or misinterprets things, as if humans do not also do those things. Most people just regurgitate information they found online, true or false. People frequently hallucinate things they think are true and stubbornly refuse to change their minds when called out. Many people cannot understand when and why they’re wrong.
Large language models like GPT, Llama, and Gemini don’t create anything new
That’s because it is a stupid use case. Why should we expect AI models to be creative, when that is explicitly not what they are for?
They are creative, though:
They put things that are “near” each other into juxtaposition, and sometimes the insights are astonishing.
The AIs don’t understand anything, though: they run on something like bacterial instinct, total autopilot.
The real problem is that we humans can’t default to treating these apparent someones as the non-understanding systems they are.
We’ve created a “hack” of our entire mental system, and it is the money-profit-rules-the-world group which controls its evolution.
This is called “Darwin Award territory”, at the species-scale.
No matter:
The Great Filter is what happens when a world-species hasn’t grown up but gains adult-level technology (nukes, entire-country-destroying militaries, biotech, neurotoxins, immense industrial toxic wastelands like the former USSR, accountability-denial mechanisms in all corporate “persons”, etc.): you have a toddler with a loaded gun, and killing can happen.
“there’s no such thing as a dangerous gun: only a dangerous man”, as the book “Starship Troopers” pushed…
Toddlers with guns KILL people in the US.
AI is our “gun”, and narcissistic sociopathy is the “toddler commanding the ship” in our nature.
Maybe we should rename Earth to “The Titanic”, for honesty’s sake…
_ /\ _
I have different weights for my two dumbbells and I asked ChatGPT 4.0 how to divide the weights evenly on all 4 sides of the 2 dumbbells. It kept telling me to use 4 half-pound weights instead of my 2-pound weights, and finally, after like 15 minutes, it admitted that, with my set of weights, it’s impossible to divide them evenly…
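For what it’s worth, whether an even split even exists is a tiny partition problem you can brute-force yourself. Here’s a rough Python sketch of the kind of check it was fumbling; the plate set in it (two 2 lb plates plus four 0.5 lb plates) is made up for illustration, not my actual plates:

```python
from itertools import product

def can_split_evenly(plates, sides=4):
    """Brute-force check: can these plates be split across `sides`
    groups with equal total weight? Fine for a handful of plates."""
    target = sum(plates) / sides
    for assignment in product(range(sides), repeat=len(plates)):
        sums = [0.0] * sides
        for plate, side in zip(plates, assignment):
            sums[side] += plate
        if all(abs(s - target) < 1e-9 for s in sums):
            return True
    return False

# Hypothetical plate set: two 2 lb plates plus four 0.5 lb plates.
# Total is 6 lb, so each side would need 1.5 lb, but any side holding
# a 2 lb plate already overshoots, so no even split exists.
print(can_split_evenly([2, 2, 0.5, 0.5, 0.5, 0.5]))  # False
```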
You used an LLM for one of the things it is specifically not good at. Dismissing its overall value on that basis is like complaining that your snowmobile is bad at making its way up and down your basement stairs, and so it is therefore useless.
Disclaimer: I currently work in the field, not on the fundamental research side; I build tooling for LLM-based products.
There are a ton of true uses for newer AI models. You can already see specialized products getting mad traction in their respective niches, and the clients are very satisfied with them. It’s mostly boring stuff, legal/compliance like Hypercomply or accounting like Chaintrust. It doesn’t make headlines but it’s obvious if you know where to look.
The most successful applications (e.g. translation, medical image processing) aren’t marketed as “AI”. That term seems to be mostly used for more controversial applications, when companies want to distance themselves from the potential output by pretending that their software tools have independent agency.
You’re falling into a no true Scotsman fallacy. There are plenty of uses for recent AI developments, I use them quite frequently myself. Why are those uses not “true” uses?
Because by design, once an AI implementation finds a use, it changes names. It has to; it’s just how marketing this stuff works. We don’t use writer AI, we have predictive text; we don’t have vision AI, we have enhanced imaging cancer diagnosis; we don’t have meeting AI, we have automatic transcription; we don’t have voice AI, we have software dictation. And this is not exclusive to AI; all fields of technology research follow the same pattern.

Because selling AI is a grift. No matter how you want to spin it, it’s the same thing as selling NFTs or blockchain or any of the previous tech grifts: solutions without problems. No one actually has a use for a fancy chatbot. And when they do and get a nice chatbot going, they won’t call it AI, because AI is associated with grifts and no one wants that perception problem. When you actually make a product that solves a problem, you sell that product and you stop selling AI. Also, AI is way larger than the current stream of LLMs.
“recent AI developments”
so, you just want to talk about the current batch of narrow AI LLMs?
or are you open to all the graphics/video editing stuff? (Topaz’s quality is pretty amazing)
it’s a lot better than “is hotdog”.
it’s also slow.
remember, all these systems do is take a bunch of data in and guess until they get it right, then, based on that, process more data and so on (there’s a toy sketch of that at the bottom of this comment).
Have you ever read the story about the AI tank from the 90s?
short version of the story is: computer was fed a bunch of pictures. some with tanks, some without. after a while, it got great at identifying them.
when they tried it out with a tank, it kept shooting at trees.
turns out, all the pics with tanks were taken in the shade.
now, like I said: story.
but the point is, this is something that’s been worked on for decades. it’s as much a problem of what to teach as of how to teach it.
so, to be clear: there are LOTS of “true uses”. the issue is “they aren’t ready yet”.
we’re just playing around with beta versions (effectively) while still being amazed at how far they’ve come.
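to make the “guess until they get it right” bit concrete, here’s a toy sketch in python. it’s a one-number “tank detector” trained on made-up data where, like in the story above, every tank photo happens to be dark; the single brightness feature and all the numbers are invented for illustration, nowhere near a real vision model:

```python
import math
import random

random.seed(0)

# toy version of the tank story: each "photo" is boiled down to one number,
# its overall brightness. in this made-up training set every tank photo was
# shot in the shade, so brightness alone fits the labels perfectly.
tank_photos = [random.uniform(0.1, 0.3) for _ in range(50)]     # dark, label 1
no_tank_photos = [random.uniform(0.7, 0.9) for _ in range(50)]  # bright, label 0
data = [(x, 1) for x in tank_photos] + [(x, 0) for x in no_tank_photos]

w, b, lr = 0.0, 0.0, 0.5

def predict(brightness):
    # probability of "tank" from a single weight and bias (logistic regression)
    return 1 / (1 + math.exp(-(w * brightness + b)))

# "guess until it gets it right": guess, measure the error, nudge the weights
for _ in range(5000):
    x, y = random.choice(data)
    err = predict(x) - y
    w -= lr * err * x
    b -= lr * err

train_acc = sum((predict(x) > 0.5) == bool(y) for x, y in data) / len(data)
print(f"training accuracy: {train_acc:.2f}")   # ~1.00, looks like a tank expert

# a tank photographed in full sunlight (brightness 0.85):
print(f"P(tank | sunny tank photo) = {predict(0.85):.3f}")  # near 0
```

it nails the training set and then confidently whiffs on the sunlit tank, which is the whole point: it learned the shade, not the tank.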