McDonald’s is removing artificial intelligence (AI) powered ordering technology from its drive-through restaurants in the US, after customers shared its comical mishaps online.
A trial of the system, which was developed by IBM and uses voice recognition software to process orders, was announced in 2019.
It has not proved entirely reliable, however, resulting in viral videos of bizarre misinterpreted orders ranging from bacon-topped ice cream to hundreds of dollars’ worth of chicken nuggets.
In one video, which has 30,000 views on TikTok, a young woman becomes increasingly exasperated as she attempts to convince the AI that she wants a caramel ice cream, only for it to add multiple stacks of butter to her order.
Lmao didn’t even know you could add butter to something at McDonald’s. If you can’t then it’s even funnier it decided that’s a thing.
They have butter for their hot cakes. Sounds like it was adding butter packets to the order.
Ahh I forgot about breakfast, that makes more sense. I was picturing butter-drenched fries lol.
Understanding the variety of speech over a drive-thru speaker can be difficult even for a human with experience in the job. I can’t see the current level of voice recognition matching that, especially if it’s using LLMs to process whatever it manages to detect. If I’m placing a food order, I don’t need an LLM hallucination filling in the blanks for audio it didn’t convert correctly to tokens or wasn’t trained on.
Yeah I’ve seen a lot of dumb LLM implementations, but this one may take the cake. I don’t get why tech leaders see “AI” and go yes, please throw that at everything. I know it’s the current buzzword but it’s been proven OVER AND OVER just in the past couple of months that it’s not anywhere close to ready for prime-time.
Most large corporations’ tech leaders don’t actually have any idea how tech works. They are being told that if they don’t have an AI plan their company will be obsoleted by their competitors that do; often by AI “experts” that also don’t have the slightest understanding of how LLMs actually work. And without that understanding companies are rushing to use AI to solve problems that AI can’t solve.
AI is not smart, it’s not magic, it can’t “think”, and it can’t “reason” (despite what OpenAI’s marketing claims); it’s just math that measures how well something fits the pattern of the examples it was trained on. Generative AIs like ChatGPT work by simply considering every possible word that could come next and ranking the candidates by which one best matches the pattern.
If the input doesn’t resemble a pattern it was trained on, the best ranked response might be complete nonsense. ChatGPT was trained on enough examples that for anything you ask it there was probably something similar in its training dataset so it seems smarter than it is, but at the end of the day, it’s still just pattern matching.
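To make that concrete, here’s a toy sketch of what “ranking every possible next word by pattern fit” means. This is a bigram counter, nothing like a real transformer, and the training text is made up for illustration:

```python
from collections import Counter, defaultdict

# Toy "training data" -- a few order-like sentences, invented for this example.
training_text = (
    "i want a caramel sundae . i want a cheeseburger . "
    "add butter to the hot cakes . i want a caramel sundae ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def rank_next(word):
    """Rank every candidate next word by how well it fits the training pattern."""
    return follows[word].most_common()

print(rank_next("caramel"))  # [('sundae', 2)] -- the best-ranked continuation
print(rank_next("butter"))   # [('to', 1)]
```

The point is the failure mode: if the input word never appeared in training, `follows[word]` is empty and the model has nothing sensible to offer, yet a generative system will still emit *something*, which is where the nonsense comes from.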
If a company’s AI strategy is based on the assumption that AI can do what its marketing claims, we’re going to keep seeing these kinds of humorous failures.
AI (for now at least) can’t replace a human in any role that requires any degree of cognitive thinking skills… Of course we might be surprised at how few jobs actually require cognitive thinking skills. Given the current AI hypewagon, apparently CTO is one of those jobs that doesn’t require cognitive thinking skills.
Especially in situations like this, where it’s quite possible it would cost less to go back to the basics of better pay and training to create willing workers. Maybe the initial cost was less than what they have to spend to improve things, but once you add in all the backtracking and the cost of mistakes, I doubt it.
Should have gone with the real AI solution: Actually Indian
Wasn’t this just voice recognition for orders? We’ve been doing this for years without it being called AI, but I guess now the marketing people are in charge.
Voice recognition is “AI”*, and it even uses the same technical architecture as the most popular applications of AI: artificial neural networks.
* - depending on the definition, of course.
Well, given that we’re calling pretty much anything AI these days, it probably fits.
But I honestly don’t consider static models to be “AI,” I only consider it “AI” if it actively adjusts the model as it operates. Everything else is some specific field, like NLP, ML, etc. If it’s not “learning,” it’s just a statistical model that gets updated periodically.
New stuff gets called AI until it is useful, then we call it something else.
It’s more than voice recognition, since it must also parse a wide variety of sentence structures into a discrete order, as well as answer questions.
Honestly, it doesn’t need to be that complex:
- X <menu item> [<à la carte | combo meal>]
- extra <topping>
- <size> <soda brand>
There’s probably a dozen or so more, but it really shouldn’t need to understand natural language, it can just work off keywords.
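For illustration, here’s a minimal sketch of that keyword approach. The menu, sizes, and number words are all made up; a real system would load them from the point-of-sale catalogue:

```python
# Hypothetical keyword tables -- invented for this sketch.
MENU = {"mcnuggets", "fries", "coke", "sprite", "sundae"}
SIZES = {"small", "medium", "large"}
NUMBERS = {"a": 1, "one": 1, "two": 2, "three": 3}

def parse_order(utterance):
    """Scan the utterance for known keywords and build (qty, size, item) tuples."""
    words = utterance.lower().replace(",", " ").split()
    items = []
    qty, size = 1, None
    for word in words:
        if word in NUMBERS:
            qty = NUMBERS[word]
        elif word in SIZES:
            size = word
        elif word in MENU:
            items.append((qty, size, word))
            qty, size = 1, None  # reset for the next item
        # anything else is silently ignored
    return items

print(parse_order("two large fries and a coke"))
# [(2, 'large', 'fries'), (1, None, 'coke')]
```

The weakness is that last rule: anything that isn’t a keyword is silently dropped, so modifiers (“hold the mustard”) and corrections (“no, make it two of each”) vanish from the order.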
You can impose that kind of structure if it’s an internal tool used by employees. But if the public is using it, it had better be able to parse whatever the customer is saying. Somebody will say “I want a burger and a coke, but hold the mustard. And add some fries. No, make it two of each,” and that won’t fit your predefined syntax.
You can tell the exec who greenlit this was a boomer because they went with IBM.
An AI drive-through was always going to be difficult. IBM simply isn’t the company that can do stuff like that anymore, and hasn’t been for decades at this point.