AI Industry Struggles to Curb Misuse as Users Exploit Generative AI for Chaos::Artificial intelligence just can’t keep up with the human desire to see boobs and 9/11 memes, no matter how strong the guardrails are.
Can we stop calling these technologies “AI?” Then can we stop talking about them?
Why did no one care about the misuse of the term AI until these image generators or LLMs? Seriously, people have been talking about video game “AI”, chess “AI” and stuff like that. It’s understood that when people say “AI” they don’t mean “general machine intelligence” or anything like that. And frankly LLMs and image generators fit the bill better than most of the things we’ve used the term for previously
As for “can we stop talking about them”, these and LLMs are already having some pretty huge impacts on modern society - for better or worse, it’d be pretty odd for us all to decide to just stop talking about them.
The difference from prior use of the term “AI” and these technologies is, as you said, before it was understood that it was a short hand, not actual intelligence. Now you have a bunch of panicky people acting as if skynet has arrived.
They really haven’t had much of an impact beyond people talking about them all the damn time, especially the fear mongering. At present, these are really just expensive toys. Computer image and gibberish generators.
The real concerns with developing technologies should be in regards to things like facial recognition and so-called self driving cars. These technologies present actual dangers to society and public safety, not to mention the complex legal questions that come with their use.
^They really haven’t had much of an impact beyond people talking about them all the damn time, especially the fear mongering. At present, these are really just expensive toys. Computer image and gibberish generators.
I highly disagree. Almost everyone I know under the age of 40 uses LLMs to some extent in the course of their job already, whether it’s as simple as composing emails or as significant as using copilot/chatGPT to code. And just today I read an article about an entire call center getting laid off this week to be replaced by an LLM.
I completely agree that a lot of the hype is overblown, but “AI” is absolute significant in our society, and so we talk about it
This is an unfair comparison.
Pen and paper art, or even using Photoshop require one to put in time and efforts and have skills. AI tools don’t.
Ah yes, photorealistic images (and videos) are as effective as text.
Btw that also is an unfair argument because printing technology printed same book many times. You still need an author to write source text.
AI generates different images within minutes.
But please continue pretending AI generated images and videos are not a problem.
One step towards avoiding misuse is to stop considering porn to be misuse.
Is this really something people are mad about? Who cares? This shit is hilarious.
Of all the fucking things to worry about with AI… Pregnant sonic being behind 9/11.
Well I mean it points to our inability to control the use of ai systems, that is in fact a very real problem.
If you can’t keep people from making stupid memes, you also can’t keep people from making misleading propaganda or other seriously problematic content.
Towards the end of the story there was the example where they couldn’t stop the system from giving people a recipe for napalm, despite “weapons development” being an explicitly banned topic. I don’t think I need to spell out how that’s a problem.
No, no one cares but it gets a bunch of clicks because it’s hilarious so articles keep getting written.
It’s a solved problem too. You just run the prompt and the result of the generation through a second pass of a fine tuned model checking for jailbreaking or rule breaking content generation.
But that increases cost per query by 2-3x.
And as you said, no one really cares, so it’s not deemed worth it.
Yet the clicks keep coming in for anti-AI articles, so they keep getting pumped out, and laypeople now somehow think jailbreaking or hallucinations are intractable problems preventing enterprise adoption of LLMs, which is only true for the most basic plug and play high volume integrations.
It’s a solved problem too. You just run the prompt and the result of the generation through a second pass of a fine tuned model checking for jailbreaking or rule breaking content generation.
But that increases cost per query by 2-3x.
Huh, so basically it’s like every time my mom said “think before you speak”. You know, just run that line in your head once before you actually say it, to avoid saying something dumb/offensive.
You opened up Pandora’s box. There’s no closing it.
We opened up pandoras box and Frankenstein’s monster crawled out, and his cerebral cortex is wired directly into 4chan, and also he’s a nazi.