Google’s AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g., listing slavery’s “positives.”

49 points
Deleted by creator
24 points

Your and @WoodenBleachers’s idea of “effective” is very subjective though.

For example Germany was far worse off during the last few weeks of Hitler’s term than it was before him. He left it in ruins and under the control of multiple other powers.

To me, that’s not effective leadership, it’s a complete car crash.

19 points
Deleted by creator
4 points

He was able to convince the majority that his way of thinking was the right way to go and deployed a plan to that effect

So, you’re basically saying an effective leader is someone who can convince people to go along with them for a sustained period. Jim Jones was an effective leader by that metric. Which I would dispute. So was the guy who led the Donner Party to their deaths.

This is why I see a problem with this. You and I are able to discuss this and work out what each other means.

But in a world where people are time-poor and critical thinking takes time, errors based on fundamental misunderstandings of consensual meanings can flourish.

And the speed and sheer amount of global digital communication means that they can be multiplied and compounded in ways that individual fact checkers will not be able to challenge successfully.

-2 points

If AI can only think at surface level, we are beyond doomed.

1 point

If you ask it for evidence Hitler was effective, it will give you what you asked for. It is incapable of looking at the bigger picture.

2 points

It doesn’t even look at the smaller picture. LLMs build sentences by looking at what’s most statistically likely to follow the part of the sentence they have already built (based on the most frequent combinations in their training data). If they start with “Hitler was effective,” LLMs don’t make any ethical consideration at all… they just look at how to end that sentence in the most statistically convincing imitation of human language that they can.

Guardrails are built by painstakingly adding ad-hoc rules not to generate “combinations that contain these words” or “sequences of words like these.” They are easily bypassed by asking for the same concept phrased another way that wasn’t explicitly disabled, because there are no “concepts” to LLMs, just combinations of words.
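The “most statistically likely continuation” idea above can be sketched with a toy bigram model. This is a hedged illustration only (the corpus, function names, and greedy-choice strategy are all made up for the example; real LLMs use neural networks over tokens, not word-frequency tables), but it shows the core point: the model continues a sentence purely from co-occurrence statistics, with no representation of meaning.

```python
from collections import Counter, defaultdict

# Toy "training data": the model will only ever know which word
# tends to follow which other word in this text.
corpus = (
    "the model picks the most likely next word . "
    "the model has no concept of meaning . "
    "the model just continues the sentence ."
).split()

# Count, for each word, how often each possible next word follows it.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def continue_sentence(start, length=6):
    """Greedily append the statistically most frequent next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # no continuation seen in training data
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_sentence("the"))
```

Note there is no “ethics” or “bigger picture” anywhere in this loop: whatever words the prompt starts with, the continuation is just the most frequent pattern from the training text, which is the commenter’s point in miniature.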

