what continues to be absolutely fucking hilarious about this is that this sort of thing already fucking exists in mainland china, in the form of a boxshop you walk into where everything is tagged with RFID stickers. slap the basket into place, pay with your phone, leave. and it runs on 2010+ technology that's fairly reliable.
(minor note: I haven’t seen this in person but I’ve seen coverage of it, and I’ve worked with all the actual constituent technologies, so I’m quite aware of how real this is and how well it can work, barring all the fuzzy in-practice biz-rule shit that inevitably has to be handled and solved (as with all real-world systems))
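(to make concrete how un-magical that flow is: a toy sketch, where read_epcs() is a hypothetical stand-in for whatever the real reader SDK actually exposes, and the tag IDs and prices are made up; everything past the read is a lookup and a sum)

```python
# toy sketch of the RFID boxshop flow: read every tag in the basket, sum the prices.
# read_epcs() is a placeholder for the real reader SDK; tags and prices are made up.

PRICE_BY_SKU = {          # prices in cents
    "cola-330ml": 350,
    "instant-noodles": 520,
    "chocolate-bar": 280,
}

EPC_TO_SKU = {            # each RFID sticker's ID maps to a SKU
    "e280-0001": "cola-330ml",
    "e280-0002": "instant-noodles",
    "e280-0003": "chocolate-bar",
}

def read_epcs() -> list[str]:
    """Stand-in for the reader: return every tag currently in the read zone."""
    return ["e280-0001", "e280-0003"]

def total_basket() -> int:
    total = 0
    for epc in read_epcs():
        sku = EPC_TO_SKU.get(epc)
        if sku is None:
            # the fuzzy in-practice biz-rule part: unknown tag, flag a human
            raise ValueError(f"unrecognised tag {epc}")
        total += PRICE_BY_SKU[sku]
    return total

print(f"charge the phone wallet: {total_basket()} cents")  # -> 630 cents
```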
The thing was that you didn’t need to tag everything. The AI would just kinda see it.
ah yes because the method of shopping with a basket and things being underneath other things is just completely not a thing that’ll happen, and the shitty implementation would always have perfect visibility on every single item and line of movement. no camera obstructions could ever happen, no light problems could ever come to the fore! perfect visual analysis! just like how we already have self-driving cars purely on optical sensors!
(your comment is bullshit, go learn some shit)
[e: edit last to minus ad hominem]
I’ve gotten a few of these questions at work. Non-tech workers will throw out ideas such as “random math drill powered by AI” or even “search powered by AI”. These things are already solved with traditional approaches. In the end they add “powered by AI” in the hope the estimate will be lower, I think.
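(for the record, the non-AI version of the “random math drill” is about ten lines; a throwaway sketch:)

```python
import random

# a "random math drill" needs a random-number generator, not a model
def drill(rounds: int = 5) -> None:
    for _ in range(rounds):
        a, b = random.randint(2, 12), random.randint(2, 12)
        answer = int(input(f"{a} x {b} = "))
        print("correct!" if answer == a * b else f"nope, it's {a * b}")

if __name__ == "__main__":
    drill()
```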
start of the article is fine I suppose but it gets pretty bad when it tries to evaluate impact
He explains that “cutting-edge AI capabilities” are now available for every company to buy for the price of standard software. But that instead of building a whole AI system, he says many firms are simply popping a chatbot interface on top of a non-AI product.
the implication here that there exists a viable company buying “cutting-edge AI capabilities” “for the price of standard software” and “building a whole AI system” with them is comical but goes unexamined
"If I asked a room of people what their definition of AI is, they would all give a different answer,” he says. “The term is used very broadly and loosely, without any clear point of reference. It is this ambiguity that is allowing AI washing to emerge.
no it isn’t. the article opens with a clear counterexample. if the ambiguity didn’t exist Amazon still could have lied about using ai, easily
“AI washing can have concerning impacts for businesses, from overpaying for technology and services to failing to meet operational objectives the AI was expected to help them achieve.”
ok, businesses can be impacted
Meanwhile, for investors it can make it harder to identify genuinely innovative companies.
ok, investors can be impacted… hard to be sympathetic to them but sure
And, says Mr Ayangar: “If consumers have unmet expectations from products that claim to offer advanced AI-driven solutions, this can erode trust in start-ups that are doing genuinely ground-breaking work.”
and consumers, ok, we’ve gone through all three types of entities that exist.
wait, what about workers? what about people being policed? what about people trying to interact with government programs using these products? why is only the holy trinity of capitalism worth mentioning?
But in the longer term, says Advika Jalan, head of research at MMC Ventures, the problem of AI washing may subside on its own.
“AI is becoming so ubiquitous - even if they’re just ChatGPT wrappers - that ‘AI-powered’ as a branding tool will likely cease to be a differentiator after some time,” she says. “It will be a bit like saying ‘we’re on the internet’.”
exercise: rewrite this passage to be about crypto
Meanwhile, for investors it can make it harder to identify genuinely innovative companies.
ok, investors can be impacted… hard to be sympathetic to them but sure
Extracted money (profit) could at least be reinvested into infrastructure; scamming the investors is good, but it should not turn into yet another corruption scheme.
Meanwhile, for investors it can make it harder to identify genuinely innovative companies.
The problem here isn’t AI, it’s that the investor class is fundamentally stupid. They got lucky, either by birth or by winning the startup lottery, and they’ve convinced themselves that this means they’re vastly more perceptive, intelligent and capable than everyone else.
I’m working for a startup right now, and investment rounds feel a lot like a bunch of idiots standing around waiting to see who’ll jump first, and when one goes the rest follow, because they haven’t a fucking clue what they’re doing but desperately need to believe their peers do.
He explains that “cutting-edge AI capabilities” are now available for every company to buy for the price of standard software. But that instead of building a whole AI system, he says many firms are simply popping a chatbot interface on top of a non-AI product.
Well, yeah, because that’s what LLMs can do.
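Which in practice looks something like this: a minimal sketch of the “chatbot interface on top of a non-AI product” pattern, where call_llm() is a stand-in for whichever hosted model API the vendor rents, and the “product” underneath is an ordinary keyword lookup.

```python
# sketch of "pop a chatbot interface on top of a non-AI product":
# the product logic is a plain keyword lookup; call_llm() is a placeholder
# for whatever hosted model the vendor resells to rephrase the result.

FAQ = {
    "refund": "Refunds within 30 days with a receipt.",
    "opening hours": "Open 9am to 6pm, Monday to Saturday.",
}

def keyword_lookup(query: str) -> str:
    """the pre-existing, non-AI product"""
    for key, answer in FAQ.items():
        if key in query.lower():
            return answer
    return "No match found."

def call_llm(prompt: str) -> str:
    """placeholder: forward the prompt to whichever LLM provider is plugged in"""
    return f"[model would rephrase] {prompt}"

def chatbot(user_message: str) -> str:
    facts = keyword_lookup(user_message)
    return call_llm(f"Answer the user using only this: {facts!r}. User asked: {user_message!r}")

print(chatbot("what is your refund policy?"))
```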
We’re not near the point where it’s reasonable or intelligent to allow “AI” into the driver’s seat. There are specific spaces where machine learning can be a useful tool to find patterns in data, and you would plug that model into normal tools. There are plenty of normal tools that can be made more user friendly with a well designed LLM based chatbot.
There are not a lot of spaces where you would want an ML model and an LLM interface, because there’s just too much extra uncertainty when you aren’t really sure what’s being asked and you aren’t really sure where the underlying patterns of the model come from. We’re not anywhere close to “intelligence”, and the people selling something claiming they’re “doing real AI” are almost certainly misrepresenting themselves as much as anyone else.
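A sketch of that sensible split, using scikit-learn’s IsolationForest as the “find patterns in data” piece and a plain report as the normal tool it plugs into; no chat interface anywhere (the readings are made up):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# the ML model finds the pattern; a boring, deterministic report presents it
readings = np.array([[10.1], [9.8], [10.3], [10.0], [55.0], [9.9]])

model = IsolationForest(contamination=0.2, random_state=0).fit(readings)
flags = model.predict(readings)  # -1 = anomaly, 1 = normal

for value, flag in zip(readings.ravel(), flags):
    print(f"{value:6.1f}  {'ANOMALY' if flag == -1 else 'ok'}")
```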
I remember back in 2017 I was commuting somewhere via a tram in Warsaw and I got out near the city center. In front of me there was a gigantic ad on a building, I’m talking like full 6 stories high, and wider still. It was for some brand new smartphone, with 1/3 of the space being taken by the picture of its back, and the rest displaying in large, proud letters “AI POWERED CAMERA”.
It was at this point a shiver down my spine told me it was the end. We were all doomed. Nothing meant anything anymore. Has anything felt real since then? Maybe all the weirdness, all the uncanniness, traces back to that time. Maybe I fell asleep at the tram and never woke up. Maybe someone put me into an Inception-style dream sequence. If you can write that, if you can spend money on having that written and shown to thousands of innocent people, as if it meant anything, fucking anything at all, as if the word “AI” there had any value or, indeed, strict meaning for everyone reading it. Do you think the marketing people that came up with it knew what it meant? Do you think the graphic designer, forced to type those letters with his bare hands, knew what they meant? Do you think the people hired to put it up knew what it meant? Such a long chain of people, starting from some insane exec shitting out “AI GOOD SELL”, and then everyone dutifully rolling that turd along the way until it fell right through my pupils. Such a waste of time, resources, dignity. And for what? And for what.
It’s not marketing fluff though, it’s AI processing, as in recognising what you’re taking a photo of and auto-applying shooting presets/filters. Fairly common these days with new phones.
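(mechanically it’s roughly “classify the scene, look up a canned preset”; a toy sketch, with classify_scene() standing in for whatever on-device model a given phone actually ships, and preset values invented for illustration)

```python
# toy sketch of an "AI-powered camera": classify the scene, apply a canned preset.
# classify_scene() is a stand-in for the phone's on-device model; presets are made up.

PRESETS = {
    "food":     {"saturation": +15, "warmth": +10},
    "night":    {"iso": 3200, "exposure_ms": 250, "stacked_frames": 8},
    "portrait": {"background_blur": True, "skin_smoothing": 2},
}

def classify_scene(frame) -> str:
    """stand-in for the on-device scene classifier"""
    return "night"

def shoot(frame) -> dict:
    preset = PRESETS.get(classify_scene(frame), {})
    print(f"applying preset: {preset}")
    return preset

shoot(frame=None)
```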
the precise reason why the loved one’s previous and next phones are Pixels, cos they have low-light “AI” enhancement but don’t pull that bs
(and a DSLR when you want zero fucking around whatsoever)
and i have a Xiaomi so i get the unfeasibly high megapixel Samsung sensors, all for the price of my immortal soul