• Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said.
  • Andrew Ng told The Australian Financial Review that tech leaders hoped to trigger strict regulation.
  • Some large tech companies didn’t want to compete with open source, he added.
73 points

Some days it looks to be a three-way race between AI, climate change, and nuclear weapons proliferation to see who wipes out humanity first.

But on closer inspection, you see that humans are playing all three sides, and still we are losing.

36 points

AI, climate change, and nuclear weapons proliferation

One of those is not like the others. Nuclear weapons can wipe out humanity at any minute right now. Climate change has been starting the job of wiping out humanity for a while now. When and how is AI going to wipe out humanity?

This is not a criticism directed at you, by the way. It’s just a frustration that I keep hearing about AI being a threat to humanity, and it just sounds like a far-fetched idea. It almost seems like it’s being used as a way to distract from much more critically pressing issues, like the myriad environmental issues we are already deep into, not just climate change. I wonder who would want to distract from those? Oil companies would definitely be number one on the list of suspects.

25 points

Agreed. This kind of debate is about as pointless as declaring that self-driving cars are coming out in 5 years. The tech is way too far behind right now, and it’s not useful to even talk about it until 50 years from now.

For fuck’s sake, just because a chatbot can pretend it’s sentient doesn’t mean it actually is sentient.

Some large tech companies didn’t want to compete with open source, he added.

Here. Here’s the real lead. Google has been scared of open-source AI because they can’t profit off of freely available tools. Now they want to change the narrative so that the government steps in and regulates their competition. Of course, their highly paid lobbyists will be right there to write plenty of loopholes and exceptions to make sure only the closed-source corpos come out on top.

Fear. Uncertainty. Doubt. Oldest fucking trick in the book.

8 points

When and how is AI going to wipe out humanity?

With nuclear weapons and climate change.

15 points

Uh, nice, a crossover episode for the series finale.

2 points

The two things experts said shouldn’t be done with AI, giving it open internet access and teaching it to code, have already been blithely ignored. It’s just a matter of time.

3 points

I don’t think the oil companies are behind these articles. That is very much wheels-within-wheels thinking that corporations don’t generally invest in. It is easier to just deny climate change than to get everyone distracted by something else.

1 point

You’re probably right, but I just wonder where all this AI panic is coming from. There was a story in The Washington Post a few weeks back saying that millions are being invested in university groups that are studying the risks of AI. It just seems that something is afoot that doesn’t look like a natural reaction or overreaction. Perhaps this story itself explains it: the Big Tech companies are trying to tamp down competition from startups.

10 points

52-yo American dude here, no longer worried about nuclear apocalypse. Been there, done that, ain’t seeing it. If y’all think geopolitics are fucked up now, 🎵"You should have seen it in color."🎶

We came close a time or three, but no one’s insane enough to push the button, and no ONE person can push the button alone. Even Putin in his desperation will be stymied by the people who actually have to push MULTIPLE buttons.

AI? IDGAF. Computers have power sources and plugs. Absolutely disastrous events could unfold, but enough people pulling enough plugs will kill any AI insurgency. Look at Terminator 2 and ask yourself why the AI had to have autonomous machines to win. I could take out the neighborhood power supply with a couple of suitable guns. I’m sure smarter people than I could shut down DCs.

Climate change? Sorry kids, it’s too late and you are righteously fucked. Not saying we shouldn’t go full force on mitigation efforts, but y’all haven’t seen the changes I’ve seen in 50 years. Winters are clearly warmer, summers hotter, and I just got back from my camp in the swamp. The swamp is dry for the first time in 4 years.

And here’s one you might not have personally experienced: the insects are disappearing. I could write an essay on bugs alone. And don’t get me started on wildlife populations.

1 point

Nukes are becoming a problem, because China is ramping up production. It will be just natural for India to do the same. From a two-way MAD situation, we’re getting into a 4-way Mexican standoff. That’s… really bad.

There won’t be an “AI insurgency”, just enough people plugging in plugs for some dumb AIs to tell them they can win the standoff. Let’s hope they don’t also put AIs in charge of the multiple nuclear launch buttons… or let the people in charge consult their own dumb AIs, like on a smartphone, telling them to go ahead.

Climate change is clearly a done thing, unless we get something like unlimited fusion power to start some terraforming projects (seems unlikely).

You have a point with insects, but I think that’s just linked to climate change; populations will migrate wherever they get something to eat, even if that turns out to be Antarctica.

1 point

Here is an alternative Piped link(s):

You should have seen it in color.

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I’m open-source; check me out at GitHub.

3 points

I’m sitting here hoping that they all block each other out because they are all trying to fit through the door at the same time.

3 points

An AI will detonate nuclear weapons to change the climate into an eternal winter. Problem solved. All the wins at the same time. No losers… oh. Wait, no…

3 points

Then the errant gamma ray burst sneaks in for the kill.

2 points

three-way race between AI, climate change, and nuclear weapons proliferation

Bold of you to assume that people behind maximizing profits (high frequency trading bot developers) and behind weapons proliferation (wargames strategy simulation planners) are not using AI… or haven’t been using it for well over a decade… or won’t keep developing AIs to blindly optimize for their limited goals.

The first StarCraft AI competition was held in 2010; think about that.

1 point

I will appeal to my previous ignorance. I had no idea that AI saw that much usage over 10 years ago!

2 points

We were already running “machine learning” and “neural networks” over 25 years ago. The “AI” term has always been kind of a sci-fi thing, somewhere between a buzzword, a moving target, and undefined, since we lack a fixed, comprehensive definition of “intelligence” to begin with. The limiting factors of the models have always been the number of neurons one could run in real time and the availability of good training data sets. Both have increased over a million-fold in that time, progressively turning more and more previously intractable problems into solvable ones, to the point where the results are equal to or better and/or faster than what people can do.
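To make the “number of neurons one could run in real time” limit concrete, here is a toy sketch (my illustration, not the commenter’s; the layer widths and the use of NumPy are arbitrary choices): the cost of a single forward pass through one dense layer grows rapidly with its width, so a real-time budget directly caps how large a network you can actually run.

```python
# Toy illustration only: time one dense layer's forward pass at a few widths.
# The "width" stands in for the number of neurons in the layer.
import time
import numpy as np

def forward_pass_seconds(width: int, trials: int = 20) -> float:
    """Average time to apply a width x width weight matrix to one input vector."""
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((width, width)).astype(np.float32)
    x = rng.standard_normal(width).astype(np.float32)
    start = time.perf_counter()
    for _ in range(trials):
        _ = np.tanh(weights @ x)  # matrix-vector product plus activation
    return (time.perf_counter() - start) / trials

for width in (1_000, 4_000, 8_000):
    print(f"{width:>5} 'neurons': {forward_pass_seconds(width) * 1e3:.2f} ms per step")
```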

Right now, there are supercomputers out there orders of magnitude more capable than what runs stuff like ChatGPT, DallE, or all the public facing "AI"s that made the news. Bigger ones keep getting built… and memristors are coming, to become a game changer the moment they can be integrated anywhere near current GPU/CPU levels.

For starters, a supercomputer with the equivalent neural network processing power of a human brain is expected for 2024… that’s next year… but it won’t be able to “run a human brain”, because we lack the data on how “all of” the human brain works. It will likely be made obsolete by ones with several orders of magnitude more processing power, way before we can simulate an actual human brain… but the question will be: do we need to? Does a neural network need to mimic a human brain in order to surpass it? A calculator already surpasses us at arithmetic, and it doesn’t use a neural network at all. At what point does the integration of some size and kind of neural network with some kind of “classical” computer start running circles around any human… or all of humanity taken together?

And of course we’ll still have to deal with the issue of dumb humans telling/trusting dumb “AI”s to do things way over their heads… but I’m afraid any attempt at “regulation” is going to end up like the case with “international law”: those who want to, obey it; those who should, DGAF.

Even if all tech giants with all lawmakers got to agree on the strictest of regulations imaginable, like giving all "AI"s the treatment of weapons of mass destruction, there is a snowflake’s chance in hell that any military in the world will care about any of it.

68 points

Oh, you mean it wasn’t just a coincidence that the moment OpenAI, Google, and MS were in position, they started caving to oversight and claiming that any further development should be licensed by the government?

I’m shocked. Shocked, I tell you.

I mean, I get that many people were just freaking out about it and it’s easy to lose track, but they were not even a little bit subtle about it.

17 points

Exactly. This is classic strategy for first movers. Once you hold the market, use legislation to dig your moat.

15 points

AI is going to change quite a bit, but I can’t wrap my head around the end-of-the-world stuff.

24 points

It won’t end the world because AI doesn’t work the way that Hollywood portrays it.

No AI has ever been shown to have self-agency; if it’s not given instructions, it’ll just sit there. Even a human child would attempt to leave the room if left alone in it.

So the real risk is not that an AI will decide to destroy humanity; it’s that a human will tell the AI to destroy their enemies.

But then you just get back around to mutually assured destruction: if you tell your self-redesigning thinking weapon to attack me, I’ll tell my self-redesigning thinking weapon to attack you.

8 points

I’m an AI researcher at one of the world’s top universities on the topic. While you are correct that no AI has demonstrated self-agency, that doesn’t mean one won’t imitate such behavior.

These days, when people think of AI, they are mostly referring to language models, as these are what most people will interact with. A language model is trained on a corpus of documents. In the case of large language models like ChatGPT, they are trained on just about every written document in existence. This includes Hollywood scripts and short stories concerning sentient AI.

If put in the right starting conditions by a user, any language model will start to behave as if it were sentient, imitating the training data from its corpus. This could have serious consequences if not protected against.
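As a minimal sketch of that point (my illustration, not the researcher’s setup; the model and prompt are arbitrary examples): a language model just continues whatever text it is given, so a prompt that frames a “sentient AI” scene gets completed in that register, because that is what parts of its training corpus look like.

```python
# Hedged example: a small public model continuing a sci-fi style prompt.
# Nothing here is "sentient"; the model only extends the text it was given.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # model choice is illustrative

prompt = "System log: the AI became self-aware at 03:14 and said:"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])  # continues in the style of the stories it was trained on
```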

5 points

AI doesn’t work the way that Hollywood portrays it

AI does, but we haven’t developed AI and have no idea how to. The thing everyone calls AI today is just really good ML.

4 points

Imagine 9/11 with prions. MAD depends on everyone being rational and self-interested, without a very alien value system. It really only works in the case where you’ve got something like three governments pointing nukes at each other. It doesn’t work if the group doesn’t care about tomorrow, or thinks they are going to heaven, or is convinced they can’t be killed, or any other of the deranged reasons that motivate people to commit these kinds of acts.

2 points

The real risk is that humans will use AIs to assess the risks/benefits of starting a war… and an AI will give them the “go ahead” without considering mutually assured destruction from everyone else doing exactly the same.

It’s not that AIs will get super-human, it’s that humans will blindly trust limited AIs and exterminate each other.

19 points

At worst, it’ll have an impact similar to social media and big data.

Try asking the big players what they think of heavily limiting and regulating THOSE fields.

They went all “oh, yeah, we’re totally seeing the robot apocalypse happening right here” the moment open-source alternatives started to pop up, because at that point regulatory barriers would lock those out while they remained safely grandfathered in. The official releases were straight-up claiming only they knew how to do this without making Skynet; it was absurd.

Which, to be clear, doesn’t mean regulation isn’t needed. On all of the above. Just that the threat is not apocalyptic and keeping the tech in the hands of these few big corpos is absolutely not a fix.

49 points

Why do you think Sam Altman is always using FUD to push for more AI restrictions? He already got his data collection, so he wants to make sure “Open”AI is the only game in town and prevent any future competition from obtaining the same amount of data they collected.

Still, I have to give Zuck his credit here: the existence of open models like LLaMa 2 that can be fine-tuned and run locally has really put a damper on OpenAI’s plans.
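For context, “fine-tuned and run locally” can be as simple as the sketch below (my example, not from the comment; it assumes the llama-cpp-python bindings and a GGUF-quantized Llama 2 file you have already downloaded, and the file name is illustrative):

```python
# Minimal local-inference sketch: everything runs on your own machine, no API key.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)  # path is illustrative

out = llm(
    "Q: Why might open local models pressure closed APIs? A:",
    max_tokens=64,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```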

38 points

“Ng said the idea that AI could wipe out humanity could lead to policy proposals that require licensing of AI”

Otherwise stated: Pay us to overregulate and we’ll protect you from extinction. A Mafia perspective.

8 points

Right?!?!! Lines are obvious. Only if they thought they could get away with it, and they might, actually, but also what if?!?!


Restricting open source offerings only drives them underground where they will be used with fewer ethical considerations.

Not that big tech is ethical in its own right.

Bot fight!

9 points

Restricting AI would require surveillance on every computer all the time.

2 points

Eh, sure, to totally get rid of it, but taking it off GitHub would get rid of 90% of it.

2 points

All that will happen is it will get shared around TOR.

5 points

I don’t think there’s any stopping the “fewer ethical considerations”, banned or not. For each angle of AI that some people want to prevent, there are others who specifically want it.

Though there is one angle that does affect all of that. The more AI work happens in the open, the faster the underground stuff comes along, because it can learn from the open stuff. Driving it underground will slow it down, but then it can still pop up when it’s ready, with less capability on the open side to counter it with another AI-based solution.

