McDonald’s is removing artificial intelligence (AI) powered ordering technology from its drive-through restaurants in the US, after customers shared its comical mishaps online.
A trial of the system, which was developed by IBM and uses voice recognition software to process orders, was announced in 2019.
It has not proved entirely reliable, however, resulting in viral videos of bizarre misinterpreted orders ranging from bacon-topped ice cream to hundreds of dollars’ worth of chicken nuggets.
Wasn’t this just voice recognition for orders? We’ve been doing this for years without it being called AI, but I guess the marketing people are in charge now.
Voice recognition is “AI”*; it even uses the same technical architecture as the most popular applications of AI: artificial neural networks.
* Depending on the definition, of course.
Well, given that we’re calling pretty much anything AI these days, it probably fits.
But I honestly don’t consider static models to be “AI”; I only consider something “AI” if it actively adjusts its model as it operates. Everything else belongs to some specific field, like NLP, ML, etc. If it’s not “learning,” it’s just a statistical model that gets updated periodically.
New stuff gets called AI until it is useful, then we call it something else.
It’s more than voice recognition, since it must also parse a wide variety of sentence structures into a discrete order, as well as answer questions.
Honestly, it doesn’t need to be that complex:
- X <menu item> [<a la carte | combo meal>]
- extra <topping>
- <size> <soda brand>
There are probably a dozen or so more patterns, but it really shouldn’t need to understand natural language; it can just work off keywords.
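To make the keyword idea concrete, here is a minimal sketch of that kind of fixed-grammar parser. The menu items, toppings, and sizes are made-up placeholders, not any real POS vocabulary, and the patterns only cover the three templates listed above:

```python
import re

# Hypothetical example vocabularies -- stand-ins, not a real menu.
MENU_ITEMS = {"big mac", "cheeseburger", "mcnuggets"}
TOPPINGS = {"cheese", "bacon", "pickles"}
SIZES = {"small", "medium", "large"}
SODAS = {"coke", "sprite", "fanta"}

PATTERNS = [
    # "2 big mac combo meal" / "1 cheeseburger a la carte"
    (re.compile(r"^(\d+)\s+(.+?)(?:\s+(a la carte|combo meal))?$"), "item"),
    # "extra bacon"
    (re.compile(r"^extra\s+(.+)$"), "topping"),
    # "large coke"
    (re.compile(r"^(small|medium|large)\s+(.+)$"), "soda"),
]

def parse_line(line):
    """Match one utterance against the fixed keyword grammar; None if it doesn't fit."""
    text = line.lower().strip()
    for pattern, kind in PATTERNS:
        m = pattern.match(text)
        if not m:
            continue
        if kind == "item" and m.group(2) in MENU_ITEMS:
            return {"kind": "item", "qty": int(m.group(1)),
                    "name": m.group(2), "style": m.group(3) or "a la carte"}
        if kind == "topping" and m.group(1) in TOPPINGS:
            return {"kind": "topping", "name": m.group(1)}
        if kind == "soda" and m.group(2) in SODAS:
            return {"kind": "soda", "size": m.group(1), "name": m.group(2)}
    return None  # free-form speech falls through: the grammar can't handle it
```

Anything that matches a template parses cleanly (`parse_line("2 Big Mac combo meal")`, `parse_line("extra bacon")`), while free-form phrasing like “I want a burger and a coke, hold the mustard” returns `None` — which is exactly the limitation the next comment points out.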
You can do that kind of imposed structure if it’s an internal tool used by employees. But if the public is using it, it had better be able to parse whatever the customer is saying. Somebody will say “I want a burger and a coke, but hold the mustard. And add some fries. No, make it two of each.” And it won’t fit your predefined syntax.