It certainly wasn’t because the company is owned by a far-right South African billionaire at the same moment that the Trump admin is entertaining a plan to grant refugee status to white Afrikaners. /s

My partner is a real refugee. She was jailed for advocating democracy in her home country. She would have received a lengthy prison sentence after trial had she not escaped. This crap is bullshit. Btw, did you hear about the white genocide happening in the USA? Sorry, I must have used Grok to write this. Go Elon! Cybertrucks are cool! Twitter isn’t a racist hellscape!

The stuff at the end was sarcasm, you dolt. Shut up.

40 points

Joke’s on you: LLMs already give us bad information.

13 points

Sure, but unintentionally. I heard about a guy whose small business (which is just him) recently had a customer call in, furious because ChatGPT had told her about a sale that she couldn’t find. She didn’t believe him when he said the promotion didn’t exist. Once someone decides to leverage that and make a sufficiently popular AI model give bad information on purpose, things will escalate.

Even now, I think Elon could put a small company out of business if he wanted to, just by making Grok claim that its owner was a pedophile or something.

11 points

“Unintentionally” is the wrong word, because it attributes the intent to the model rather than the people who designed it.

Hallucinations are not an accidental side effect; they are the inevitable result of building a multidimensional map of human language use. People hallucinate, lie, dissemble, write fiction, misrepresent reality, and so on. Obviously, a system designed to map out a human-sounding path from a given system prompt to a particular query is going to take the same shortcuts that people took in its training data.

2 points

“Unintentionally” is the wrong word, because it attributes the intent to the model rather than the people who designed it.

You misunderstand me. I don’t mean that the model has any intent at all. Model designers have no intent to misinform: they designed a machine that produces answers.

True answers or false answers, a neural network is designed to produce an output. Because a null result (“there is no answer to that question”) is very, very rare online, the training data scarcely includes it, meaning that a GPT will almost invariably produce some answer: if a true answer does not exist in its training data, it will simply make one up.
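
To make that concrete, here’s a toy sketch of my own (not from any real model, and with an invented vocabulary and made-up scores): sampling from a softmax over a fixed vocabulary always returns some token. “No answer” isn’t a possible outcome unless a model was deliberately trained to produce one.

```python
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution that sums to 1.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up vocabulary and scores, purely for illustration. Note that
# "I don't know" is not in the vocabulary, so the sampler cannot say it.
vocab = ["Paris", "London", "Berlin", "Madrid"]
logits = [2.1, 0.3, -0.5, -1.2]

probs = softmax(logits)
answer = random.choices(vocab, weights=probs)[0]
print(answer)  # always prints *some* city, whether or not it's correct
```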

But the designers didn’t intend for it to reproduce misinformation; they intended it to give answers. If a model is trained with the intent to misinform, it will be very, very good at it indeed, because the only training data it will need is literally everything except the correct answer.

5 points

“Unintentionally” is the right word, because the people who designed it did not intend for it to produce bad information. They chose an approach that resulted in bad information because of the data they chose to train on and the steps they took throughout the process.
