I remember seeing a comment on here that said something along the lines of “for every dangerous or wrong response that goes public there’s probably 5, 10 or even 100 of those responses that only one person saw and may have treated as fact”
Tech company creates best search engine → world domination → becomes VC company in tech trench coat → destroys search engine to prop up bad investments in artificial intelligence advanced chatbots
Then hire cheap human intelligence to correct the AI's hallucinatory trash, which was trained on actual human-generated content whose intended audience understood its nuance and context in the first place. Wow, it's like they've shovelled a bucket of horse manure onto the pizza along with the glue. Added value for the advertisers. AI my arse. Calling these things language models is being generous. More like energy- and data-hungry vomitrons.
Calling these things Artificial Intelligence should be a crime. It’s false advertising! Intelligence requires critical thought. They possess zero critical thought. They’re stochastic parrots, whose only skill is mimicking human language, and they can only mimic convincingly when fed billions of examples.
“Many of the examples we’ve seen have been uncommon queries,”
Ah the good old “the problem is with the user not with our code” argument. The sign of a truly successful software maker.
“We don’t understand. Why aren’t people simply searching for Taylor Swift”
I mean… I guess you could paraphrase it that way. I took it more as "Look, you probably aren't going to run into any weird answers," which seems like a valid thing for them to try to convey.
(That being said, fuck AI, fuck Google, fuck reddit.)
“I’m feeling depressed” is not an uncommon query under capitalism run amok. “One Reddit user recommends jumping off the Golden Gate Bridge” is not just a weird answer, it is a wholly irresponsible one.
So, no, their response is not valid. It is entirely user-blaming in order to avoid culpability.
There are currently a lot of fake screenshots circulating, since it quickly became a meme; I'm pretty sure this is one of them.
Still a fuck up in general on their part.
The reason why Google is doing this is simply PR. It is not to improve its service.
The underlying tech is likely Gemini, a large language model (LLM). LLMs handle chunks of words, not what those words convey; so they have no way to tell accurate info apart from inaccurate info, jokes, “technical truths” etc. As a result their output is often garbage.
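To make the "chunks of words" point concrete, here's a toy sketch (my own illustration, not Gemini's actual tokenizer, which is far more sophisticated): the model ultimately consumes arbitrary integer token IDs, so a sentence from a joke and the same sentence from a sincere recipe are indistinguishable at the input level.

```python
# Toy word-level tokenizer: each word gets an arbitrary integer ID,
# assigned on first sight. The "model" downstream only ever sees these
# IDs; nothing marks a token as sarcasm, satire, or fact.
vocab = {}

def tokenize(text):
    """Map each whitespace-separated word to an integer ID."""
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
        ids.append(vocab[word])
    return ids

# A joke from a forum and a sincere instruction tokenize identically.
print(tokenize("add glue to your pizza sauce"))  # [0, 1, 2, 3, 4, 5]
print(tokenize("add glue to your pizza sauce"))  # [0, 1, 2, 3, 4, 5]
```

The real systems use subword tokenizers rather than whole words, but the core limitation the comment describes is the same: context like "this is a shitpost" is just more token IDs, not a reliability signal.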
You might manually prevent the LLM from outputting a certain piece of garbage, perhaps even a thousand pieces. But in the big picture it won't matter, because it's outputting a million different pieces of garbage. It's like trying to empty the ocean with a small bucket.
I'm not making the above up; look at the article. It's basically what Gary Marcus is saying, in different words.
And I’m almost certain that the decision makers at Google know this. However they want to compete with other tendrils of the GAFAM cancer for a turf called “generative models” (that includes tech like LLMs). And if their search gets wrecked in the process, who cares? That turf is safe anyway, as long as you can keep it up with enough PR.
Google continues to say that its AI Overview product largely outputs “high quality information” to users.
There's a three-letter word that accurately describes what Google said here: lie.
At some point no amount of PR will hide the fact search has become useless. They know this but they’re getting desperate and will try anything.
I’m waiting for Yahoo to revive their link directory or for Mozilla to revive DMOZ. That will be the sign that shit level is officially chin-height.
Bummer. I like weird Al.