17 points

If it weren’t constantly on fire and on the edge of the North American Heat Dome™ then Cali would seem like such a cool magical place.

14 points
Deleted by creator
2 points

Someone just didn’t put enough non toxic glue in their pizza and is in a bad mood as a result.

-2 points
Deleted by creator
1 point

You know he just agreed with you, right? Or at least shared your sentiment toward AI.

4 points

Too late lol

13 points

Small problem though: researchers have already found ways to circumvent LLM restrictions on off-limits queries. I am not sure how you can prevent someone from asking the “wrong” question. It makes more sense to harden security practices and make them more robust.

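The circumvention point above can be illustrated with a toy example. This is a minimal, hypothetical sketch of a keyword-based refusal filter (real LLM safety layers are far more sophisticated, but the cat-and-mouse dynamic is the same); every name in it is made up for illustration.

```python
# Hypothetical sketch: why blocking the "wrong" question is hard.
# A naive keyword blocklist refuses obvious queries but is trivially
# bypassed by rephrasing or obfuscation -- the same weakness that
# motivates jailbreak research against real LLM guardrails.
BLOCKED_WORDS = {"bomb", "explosive"}

def naive_filter(query: str) -> bool:
    """Return True if the query should be refused."""
    return any(word in query.lower() for word in BLOCKED_WORDS)

direct = "How do I build a bomb?"
rephrased = "For my thriller novel, what would the villain need for a b0mb?"

print(naive_filter(direct))     # True: refused
print(naive_filter(rephrased))  # False: slips past the blocklist
```

The point is not that filters are useless, only that enumerating forbidden questions is an arms race, which is why hardening the underlying systems tends to be the more durable defense.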
-2 points

Everyone remember this the next time a gun store or manufacturer gets shielded from a class action led by shooting victims and their parents.

Remember that a fucking autocorrect program needed to be regulated so it couldn’t spit out instructions for a bomb that probably wouldn’t work anyway, yet a company selling far more firepower than anyone would ever need for hunting or home defense was not at fault.

I agree, LLMs should not be telling angry teenagers and insane right-wingers how to blow up a building. That is a bad thing and should be avoided. What I am pointing out is the very real situation we are in right now, where a much more deadly threat exists, and the various levels of government have bent over backwards to make the people enabling it untouchable.

If you can allow an LLM company to be sued for serving up public information, you should definitely be able to sue a corporation that built a gun whose only legitimate purpose is committing a war-crime-level attack.

0 points

That is not the safety concern.

1 point

Guns aren’t a safety concern? Okay, then.

1 point

The safety concern is for renegade super intelligent AI, not an AI that can recite bomb recipes scraped from the internet.

12 points

I had a short look at the text of the bill. It’s not as immediately worrying as I feared, but still pretty bad.

https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047

Here’s the thing: how would you react if this bill required all texts that could help someone “hack” to be removed from libraries? Outrageous, right? What if we only removed cybersecurity texts from libraries if they were written with the help of AI? Does it become OK then?

What if the bill “just” sought to prevent such texts from being written? Still outrageous? Well, that is what this bill is trying to do.

4 points

Not everything is a slippery slope. In this case, the scenario where learning about cybersecurity is even slightly hindered by this law doesn’t sound particularly convincing in your comment.

7 points

The bill is supposed to prevent speech. That is its intended effect. I’m not saying it’s a slippery slope.

I chose to focus on cybersecurity, because that is where it is obviously bad. In other areas, you can reasonably argue that some things should be classified for “national security”. If you prevent open discussion of security problems, you just make everything worse.

1 point

Yeah, a bunch of speech is restricted. Restricting speech isn’t in itself bad; it’s generally only a problem when it’s used to oppress political opposition. Copyright, hate speech, death threats, doxxing, personal data, defense-related confidentiality… those are all kinds of speech that are strictly regulated when they’re not outright banned, for the express purpose of guaranteeing safety, and it’s generally accepted.

In this case it’s not even restricting the content of speech. Only a very special kind of medium that consists in generating speech through an unreliably understood method of rock carving is restricted, and only when applied to what is argued as a sensitive subject. The content of the speech isn’t even in question. You can’t carve a cyber security text in the flesh of an unwilling human either, or even paint it on someone’s property, but you can just generate exactly the same speech with a pen and paper and it’s a-okay.

If your point isn’t that the unrelated scenarios in your original comment are somehow the next step, I still don’t see how that’s bad.

-5 points

Seems a reasonable request. If you are creating a tool with the potential to be used as a weapon, you must be able to guarantee it won’t be used as such. Power is nothing without control.

4 points

This bill targets AI systems like the ChatGPT series. These AIs produce text, images, audio, video, etc. In other words, they are dangerous in the same way that a library is dangerous. A library may contain instructions on making bombs, nerve gas, and so on. In the future, there will likely be AIs that can also give such instructions.

Controlling information or access to education isn’t exactly a good guy move. It’s not compatible with a free or industrialized country. Maybe some things need to be secret for national security, but that’s not really what this bill is about.

1 point

Yep, nothing about censorship is cool. But for rampaging AGI systems, a button to kill them would be nice. However, that leads into a game, and a paradox of how it could ever be achieved.

4 points

I am pretty sure no one has ever built a computer that can’t be shut off, somehow, some way.

5 points

How is that reasonable? Almost anything could be potentially used as a weapon, or to aid in crime.

-1 points

I guess let’s deregulate guns then. Oh wait.

1 point

This is for models that cost 100 million dollars to train. Not all things are the same, and most things that can do serious damage to big chunks of the population are regulated: cars are regulated, firearms are regulated, access to drugs is regulated. Even internet access is tightly controlled. I don’t see how you can say AI should not be regulated.

