https://futurism.com/the-byte/government-ai-worse-summarizing
The upshot: these AI summaries were so bad that the assessors agreed that using them could require more work down the line, because of the amount of fact-checking they require. If that’s the case, then the purported upsides of using the technology — cost-cutting and time-saving — are seriously called into question.
Yeah I’ve purged “AI” from my vocabulary, at least for now.
These are chatbots. That’s it. “AI” is a marketing term.
I recall my AI class discussed a bunch of different things that people call AI that don't come anywhere near "replacement human". For instance, the AI in Red Alert 2 has some basic rules about constructing buildings, gathering a certain number of units, and sending them the player's way.
Obviously, RA2's "AI" isn't being used for labour discipline and LLMs are massively overhyped, but I think getting hung up on the word is… idk, kinda a waste of time (as I feel like a lot of this thread is)
I think people are allowed to be annoyed, but if that's all you want to talk about, I think it's a waste of energy? It's just language, we can call it flubbon if you like and move the conversation along.
Unless we want to get bogged down talking about whether band-aids are "medical adhesive strips", which is a perfectly fine conversation to have if that's what both participants want to talk about.
AI is a fine term because it’s artificial. It’s a facsimile. If they were serious it would just be I
It’s still a marketing term. AI makes people think of SciFi and robot mommies.
Sure, but it’s cheaper, and so if we fire all of our employees and replace them with AI, for this next quarter our profits will go WAY up, and then I can get my bonus and retire. So it’s totally fine!
There's a certain level of risk aversion with these decisions though. One of the justifications for the salaries of managers who generally don't do shit is that they take "responsibility". Honestly, even if AI were performing at or above human level, a lot of briefs would have to be done by someone you could fire anyway.
And as much as next-quarter performance is all they care about, there are still some survival instincts left. My last company put a ban on using genAI for all client-facing activities because a sales guy almost presented a deck with client-is-going-to-instantly-walk-out levels of wrong information in it.
"Pfft! That only matters if you care about factual accuracy. So let me make it real simple: Facts don't care about your feelings, and my finances, er, the future doesn't care about your facts!"
I've kinda seen this in manufacturing for the last few years. Not explicitly "AI", but newer equipment designed around being smarter and not requiring skilled operators. Think like WordPress but for industrial machines: it might do basic stuff pretty well, but it fails at complex operations, and it's an atrocity if you ever look behind the scenes to do some troubleshooting.
Hell yeah, smart machine? That’s gonna cost a premium. Oh, and because these machines are so sophisticated, you’ll need a higher tier support contract, that’s another premium… I mean it’s not like you have skilled technicians on staff anymore, they all retired and all your new guys just know how to press “play,” since we made the machines so easy to use… you’re not fixing anything yourself anymore.
Back to your support contract, now we have the Bronze tier which gets you one of our field techs out there within 48 hours, but if your business can’t handle that kind of downtime we could upgrade you to Silver or Gold…
Any time a client mentions “I asked ChatGPT” or any of the other hopped-up chatbots, what follows is always, without fail, completely ass-backwards and wrong as hell. We literally note in client files the ones who keep asking some shitty chatbot instead of us because they’re frequent fuckups and knowing that they’re a chatbot pervert helps us narrow down what stupid shit they’ve done again.