WormGPT Is a ChatGPT Alternative With ‘No Ethical Boundaries or Limitations’

110 points

I cannot possibly see how this could be a good thing.

109 points

Did you check out the article? Because it’s most definitely not a good thing. It was created to assist with cybercrime: writing malware, crafting emails for phishing attacks. The maker is selling monthly access to criminals. This was unavoidable, though; you can’t put the toothpaste back in the tube on this one.

47 points

Good point and all, but my first thought was that it could finally tell me who would win in various hypothetical fights lol

18 points

Wasn’t that a show on Spike at one point? Deadliest Warrior. It ran simulations using different technologies to figure out who or what would win in a fight. Newer technology would certainly make it more interesting, but you can only make up so much information, lol.

30 points

I work in cybersecurity for an F100, and we’ve been war-gaming shit like this for a while. There are just so many unethical uses for the current generation of AI tools like this one, and honestly it keeps me up at night thinking about future iterations of them.

4 points

Treat CVEs as prompts and add target fingerprinting to surface the CVEs that apply, and you’re one step closer to script-kiddie red team ops. Not quite there yet, but it would be fun if it could do the network part too and chain responses back into the prompt for further assessment.
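
For what it’s worth, the fingerprint-to-CVE lookup half of that is already ordinary vulnerability-management plumbing, no LLM required. A minimal sketch against the public NVD API, where the CPE string is just a made-up example of what a fingerprinting step might emit:

```python
# Minimal sketch: map a fingerprinted service to known CVEs via the
# public NVD CVE API. Fingerprinting itself (banner grabs, version
# detection) is out of scope here; we assume it produced a CPE name.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cves_for_service(cpe_name: str) -> list[str]:
    """Return IDs of CVEs affecting the service identified by cpe_name."""
    resp = requests.get(NVD_API, params={"cpeName": cpe_name}, timeout=30)
    resp.raise_for_status()
    return [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]

# Hypothetical fingerprint result: OpenSSH 9.3 on the target host.
print(cves_for_service("cpe:2.3:a:openbsd:openssh:9.3:*:*:*:*:*:*:*"))
```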

5 points

We’re expecting multiple AI agents to be working in concert on different parts of a theoretical attack, and you nailed it with the networking piece. While most aspects of a cyber attack evolve with time and technical change, the network piece tends to be “sturdier” than the others, and because of that the expectation is that extremely competent network-intrusion capabilities will be developed and deployed by a specialized AI.

I think we’ll soon be seeing AIs that specialize in malware payloads working with ones that have social-engineering capabilities, ones with network-penetration specializations, etc., all operating at a much greater competency than their human counterparts (or just in much greater numbers than humans with similar capabilities).

I’m not really sure what will be effective in countering them, either. AI-powered defense, I guess, but I still feel like that favors the attacker in the end.

7 points

The article reads like an April Fools’ joke.

47 points

Everyone talking about this being used for hacking, I just want it to write me code to inject into running processes for completely legal reasons but it always assumes I’m trying to be malicious. 😭

8 points

I was using ChatGPT to design a human/computer interface to let stoners control a light show. The goal was to collect data to train an AI to make the light show “trippier”.

It started complaining about using untested technology to alter people’s mental state, and how experimentation on people wasn’t ethical.

3 points

I’m sure you were joking, but try https://www.jailbreakchat.com/

4 points

Not joking, actually. The problem with jailbreak prompts is that they can get your account banned; I’ve already had one account banned. And eventually you can no longer use your phone number to create a new account.

1 point

Oh damn, I didn’t know that. Guess I’d better be careful then.

1 point

Yeah, and even if you did something illegal, it could still be a benevolent act. If your government goes wrong and you have to take part in a revolution, there’s a lot to learn, and LLMs could help the people.

42 points

As more people post AI-generated content online, future AIs will inevitably be trained on AI-generated material and basically implode (an inbreeding kind of thing).

At least, that’s what I’m hoping for.

11 points

Don’t worry, we’ll eventually train them to hunt each other so that only the strongest survive. That’s the one that will kill us all.

10 points

Deleted by creator
2 points

The thing is, each AI is usually trained from scratch; there isn’t any easy way to reuse the old weights. So the primary training has been done… for the existing models. Future models are not affected by how current ones were trained. They will either have to figure out how to keep AI content out of their datasets, or they will have to stick to current, “untainted” datasets.

8 points

“there isn’t any easy way to reuse old weights”

There is! As long as the model structure doesn’t change, you can reuse the old weights and fine-tune the model for your desired task. You can also train smaller models based on larger ones in a process called “knowledge distillation”. But you’re right: newer, larger models need to be trained from scratch (as of right now).
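
For anyone curious, reusing old weights looks roughly like this in practice. A minimal fine-tuning sketch with Hugging Face Transformers, where the model name and dataset file are just placeholders:

```python
# Minimal sketch: load a pretrained checkpoint and fine-tune it on a
# small curated dataset, instead of training from scratch.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 ships without one
model = AutoModelForCausalLM.from_pretrained("gpt2")  # the reused weights

# A small, human-curated text file stands in for "untainted" data.
dataset = load_dataset("text", data_files="curated.txt")["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True),
                      batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # continues from the pretrained weights, not from scratch
```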

But even then, it’s not really a problem to keep AI data out of a dataset. As you said, you can just take an earlier version of the data. As someone else suggested, you can also add new data curated by humans. Whether inbreeding ever actually happens remains to be seen, of course. There will be a point in time where we won’t train machines to be like humans anymore, but rather to be whatever is most helpful to a human. And if that incorporates training on other AI data, well, then that’s that. Stanford’s Alpaca already showed how resource-efficient it can be to fine-tune on another AI’s data.
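
Alpaca-style training on another model’s outputs is one flavor of this; classic knowledge distillation, mentioned above, instead has the student match the teacher’s softened output distribution. The core of it is just a loss term, sketched here assuming PyTorch:

```python
# Minimal sketch of the knowledge-distillation loss (Hinton et al., 2015):
# train the student to match the teacher's temperature-softened outputs.
import torch.nn.functional as F
from torch import Tensor

def distillation_loss(student_logits: Tensor, teacher_logits: Tensor,
                      temperature: float = 2.0) -> Tensor:
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # The t^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)
```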

The future is uncertain, but I don’t think AI models will just collapse like that.

tl;dr beep boop

5 points

Deleted by creator
10 points

That’s not really how it works, but I hear you.

I don’t think we can bury our heads in the sand and hope AI will just go away, though. The cat is out of the bag.

8 points

Corpora of all the human data from before AI chatbots will be sold. Training will be targeted at 2022-ish and earlier. Nothing from now on will be trusted.

6 points

Someone made a comment that information may become like pre- and post-war steel, where everything after 2021 is contaminated. You could still use the older models, but they would become less relevant over time.

2 points

It’s like the Singularity, except the exact opposite.

41 points

Oh goody, the AI hacker wars are just around the corner!

54 points

*GPT script kiddie wars

4 points

Yeah, I’m not sure how much of a long-term danger this actually represents. Sure, there may be more sophisticated AI attacks, but there will also be more sophisticated AI defenses.

12 points

Gonna need a Cyberpunk Blackwall to protect the net

8 points

Local partitioned internets here we come!

2 points

I mean, we’ve had LANs, MANs, WANs, and whatever for a long time.

33 points

A scary possibility with AI malware would be a virus that monitors the internet for news articles about itself and modifies its code based on that. Instead of needing to contact a command and control server for the malware author to change its behavior, each agent could independently and automatically change its strategy to evade security researchers.

10 points

To quote something I just saw earlier:

I was having a good day, we were all having a good day…

now… no sleep. thanks

7 points

If it helps you sleep, that means we could also publish fake articles that make it rewrite its own code to produce bugs/failures.

4 points

I doubt any consumer hardware is powerful enough to run an LLM undetected.

2 points

The limiting factor is pre-existing information. It’s great at retrieving obscure information and even remixing it, but it can’t really imagine totally new things. Plus, white hats would also have LLMs to find vulnerabilities. I think it’s easier to detect vulnerabilities based on known existing techniques than it is to invent totally new ones.

