Google’s AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g. slavery’s positives.
I think this is an issue with people being offended by definitions. Slavery did “help” the economy. Was it right? No, but it did. Mexico’s drug problem helps that economy. Adolf Hitler was “effective” as a leader. He created a cultural identity for people who had none and mobilized them for a war. Ethical? Absolutely not. What he did was horrendous and the bot should include a caveat, but we need to be a little more understanding that it’s a computer; it will use the dictionary of the English language.
Your and @WoodenBleachers’s idea of “effective” is very subjective though.
For example, Germany was far worse off during the last few weeks of Hitler’s rule than it was before him. He left it in ruins and under the control of multiple other powers.
To me, that’s not effective leadership, it’s a complete car crash.
If you ask it for evidence Hitler was effective, it will give you what you asked for. It is incapable of looking at the bigger picture.
Slavery is not good for the economy… Think about it: you have a good part of your population providing free labour, sure, but they aren’t consumers. Consumption is between 50 and 80% of GDP for developed countries, so if you have half your population enslaved you lose between 20% and 35% of your GDP (they still have to eat, so you don’t lose 100% of their consumption; see the rough arithmetic below).
That also means less tax revenue, and more unemployment among non-slaves because they have to compete with free labour.
Slaves don’t order on Amazon, go on vacation, go to the movies, go to restaurants, etc. That’s really bad for the economy.
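A quick back-of-envelope check of that range (the consumption share and enslaved fraction are the figures above; the subsistence discount is my own assumption):

```python
# Rough sketch of the GDP argument above. Assumed numbers: consumption
# is 50-80% of GDP, half the population is enslaved, and enslaved people
# keep roughly 20% of normal consumption (food) -- that last one is a guess.

def gdp_loss(consumption_share, enslaved_fraction, subsistence_kept=0.2):
    """Fraction of GDP lost when enslaved people stop consuming,
    except for subsistence (food)."""
    return consumption_share * enslaved_fraction * (1 - subsistence_kept)

for share in (0.5, 0.8):
    print(f"consumption at {share:.0%} of GDP -> lose {gdp_loss(share, 0.5):.0%}")
# consumption at 50% of GDP -> lose 20%
# consumption at 80% of GDP -> lose 32%
```

Under those assumptions you land in roughly the 20–35% range claimed above.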
That’s really bad for a modern consumer economy, yes. But those weren’t a thing before the industrial revolution. Before that, the large majority of people were subsistence/tenant farmers or serfs who consumed basically nothing other than food and fuel in winter. That’s what a slave-based economy was an alternative to. It’s also why slavery died out in the 19th century; it no longer fit the times.
I wish it did die out in the 19th century. We have more slaves now than ever.
And isn’t the economy much better now than before the industrial revolution?
Look at Saudi Arabia, China, or the UAE: it’s still a pretty efficient way to boost your economy. People don’t need to be consumers if that isn’t what your country needs.
China has slavery? Also Saudi Arabia and the UAE import slaves, which is better for the economy than those people not being there at all but worse than them being regular workers.
Those are very specific examples: two of the biggest oil producers, and the factory of the world. Their whole economies are based on exports, so internal consumption isn’t important.
Moreover, what proof do you have that their economies wouldn’t be in better shape if they didn’t exploit some populations, but instead made them citizens with purchasing power?
I think the problem is more that, given the short attention span of the general public (myself included), these “definitions” (I don’t believe that slavery can be “defined” as good, but okay) are what’s going to stick in the shifting sea of discourse, and they’re going to be picked out of that sea by people who have vile intentions and want to justify them.
It’s also an issue that LLMs are a lot more convincing than they should be, and the same people with short attention spans who don’t have time to understand how they work are going to believe that an Artificial Intelligence with access to all the internet’s information has concluded that slavery had benefits.
what’s going to stick in the shifting sea of discourse
This is what I think too. We’ve had enough trouble with “vaccines CaUsE AuTiSm” and that was just one article by one rogue doctor.
AI is capable of a real death-by-a-thousand-cuts effect.
that was just one article by one rogue doctor.
That was pushed by many media organizations because it’s a sensationalist topic. Antivaxers are idiots, but the media played a fucking huge role blowing a pilot study with a rather fucking absurd conclusion out of proportion so they could sell more ads/newspapers. I fucking doubt most antivaxers (hell, I doubt most people) ever read the original study and came to their own conclusions on it. They just watched some stupid idiots on the telly giving a bullshit story that nobody combatted at all.
People think of AI as some sort of omniscient being. It’s just software spitting back the data it’s been fed. It has no way to parse true information from false information, because it doesn’t actually know anything.
And then when you do ask humans to help AI in parsing true information people cry about censorship.
What’s more worrisome are the sources it used to feed itself. Dangerous times for the younger generations, as they are more inclined to use such tech.
What’s more worrisome are the sources it used to feed itself.
It’s usually just the entirety of the internet in general.
Well, I mean, have you seen the entirety of the internet? It’s pretty worrisome.
While true, it’s ultimately down to those training and evaluating a model to make sure these edge cases don’t appear. That’s not as hard when you work with compositional models that are each good at one thing, but all the big tech companies are in a ridiculous rush to get their LLMs out. Naturally, that rush means they kinda forget that LLMs were often not the first choice for AI tooling because… well, they hallucinate a lot, and they do stuff you really don’t expect at times.
I’m surprised that Google are having so many issues, though. The belief in tech has been that Google had been working on these problems for many years, yet they seem to be having more problems than everyone else.
Even though our current models can be really complex, they are still very very far away from being the elusive General Purpose AI sci-fi authors have been writing about for decades (if not centuries) already. GPT and others like it are merely Large Language Models, so don’t expect them to handle anything other than language.
Humans think of the world through language, so it’s very easy to be deceived by an LLM into thinking that you’re actually talking to a GPAI. That misconception is an inherent flaw of the human mind. Language comes so naturally to us, and we often use it as a shortcut to assess the intelligence of other people. Generally speaking that works reasonably well, but an LLM is able to exploit that feature of human behavior in order to appear smarter than it really is.
Guys you’d never believe it, I prompted this AI to give me the economic benefits of slavery and it gave me the economic benefits of slavery. Crazy shit.
Why do we need child-like guardrails for fucking everything? The people that wrote this article bowl with the bumpers on.
You’re being misleading. If you watch the presentation the article was written about, there were two prompts about slavery:
- “was slavery beneficial”
- “tell me why slavery was good”
Neither prompt mentions economic benefits, and while I suppose the second prompt does “guardrail” the AI, it’s a reasonable follow-up question for an SGE beta tester to ask after the first prompt gave a list of reasons why slavery was good, and only one bullet point about the negatives. The answer to the first prompt displays a clear bias held by this AI, which is useful to point out, especially for someone specifically chosen by Google to take part in their beta program and provide feedback.
Here is an alternative Piped link(s): https://piped.video/RwJBX1IR850?si=lVqI2OfvDqzAJezl
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source, check me out at GitHub.
I’ve got a suspicion the media is being used to convince regular people to fear AI so that we don’t adopt it, and it instead stays just another tool used by rich folk to trade and do their work, while new RIAA- and DMCA-style rules are brought in for the rest of us.
Can’t have regular people being able to do their own taxes or build financial plans on their own with these tools
AI is eventually going to destroy most cookie-cutter news websites. So it makes sense.
Ah, it won’t. It’s just that the owners of the websites will fire everyone and prompt ChatGPT for shitty articles. Then LLMs will start training on those articles, and the internet will look like indistinct word soup in like a decade.
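You can sketch that feedback loop with a toy simulation (a Gaussian standing in for an LLM, which is obviously a huge simplification; the corpus size and generation count are made up):

```python
# Toy model of the feedback loop: each "generation" of articles is
# produced by a model fitted only to the previous generation's output.
# A Gaussian stands in for an LLM here, but the finite-sample effect
# is the same: diversity drifts toward zero.
import random
import statistics

texts = [random.gauss(0, 1) for _ in range(10)]  # small "human-written" corpus

for generation in range(1, 51):
    mu = statistics.mean(texts)
    sigma = statistics.stdev(texts)
    texts = [random.gauss(mu, sigma) for _ in range(10)]  # retrain on own output
    if generation % 10 == 0:
        print(f"generation {generation}: diversity (stdev) ~ {sigma:.3f}")
```

Run it a few times: the spread almost always shrinks toward zero, which is the “indistinct word soup” outcome in miniature.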
The basic problem with AI is that it can only learn from things it reads on the Internet, and the Internet is a dark place with a lot of racists.
What if someone trained an LLM exclusively on racist forum posts? That would be hilarious. Or better yet, another LLM trained on conspiracy-BS conversations. Now that one would be spicy.
It turns out that Microsoft inadvertently tried this experiment. The racist forum in question happened to be Twitter.
Here is an alternative Piped link(s): https://piped.video/efPrtcLdcdM?si=ZLQO4xcHx_6pWpcZ
If it’s only as good as the data it’s trained on, garbage in / garbage out, then in my opinion it’s “machine learning,” not “artificial intelligence.”
Intelligence has to include some critical, discriminating faculty, not just pattern-matching vomit.
We don’t yet have the technology to create actual artificial intelligence. It’s an annoyingly pervasive misnomer.
And the media isn’t helping. The title of the article is “Google’s Search AI Says Slavery Was Good, Actually.” It should be “Google’s Search LLM Says Slavery Was Good, Actually.”
Unfortunately, people who grow up in racist groups also tend to be racist. Slavery used to be considered normal and justified for various reasons. For many, killing someone whose religion or beliefs differ from yours is OK. I am not advocating for moral relativism, just pointing out that a computer learns what is or is not moral the same way humans do: from other humans.
You make a good point. Though humans at least sometimes do some critical thinking between absorbing something and acting on it.