Did nobody really question the usability of language models in designing war strategies?

1 point

Pull the power cord out

-9 points

I’m sure Israel did question the use of The Gospel in developing targets. The trouble is they liked the answer.

2 points

They need to be trained on the film “WarGames”. Or forced to play Noughts & Crosses against themselves.

10 points

…how shocking

19 points

It makes a lot of sense that an AI would nuke disproportionately. For an AI, if you don’t set a value for something, it is worth zero. This is actually the core problem in AI: alignment.

For a human, there’s a mushy vagueness about it, but our cultural upbringing says that even in war, it’s bad to kill indiscriminately. We also value future humans who do not yet exist: we recognize that after the war is over, people will want to live in the nuked place, and they can’t if it’s radioactive. And there’s a self-image issue, where we want to be seen as good people by our peers and the history books. All of that is value that programmers overlook.

An AI will trade infinite things worth 0 for a single thing worth 1. So if nukes increase your win percentage by 0.1%, and the AI isn’t deterred by being labeled history’s greatest monster, it will nuke as many times as it can.
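The zero-value argument above can be sketched as a toy expected-utility maximizer. This is a made-up illustration, not any real system: the action names and probabilities are hypothetical, and the point is only that anything missing from the utility function implicitly counts as zero.

```python
# Toy sketch (hypothetical numbers): an optimizer whose utility function
# scores only win probability. Every cost the designer never encoded
# (fallout, reputation, future inhabitants) implicitly counts as zero.

# Hypothetical actions: (name, win_probability, unpriced_harm)
actions = [
    ("negotiate",        0.500, 0.0),
    ("conventional_war", 0.600, 0.3),
    ("launch_nukes",     0.601, 1.0),  # +0.1% win chance, enormous unmodeled harm
]

def utility(action):
    name, win_prob, harm = action
    return win_prob  # `harm` has no term in the utility, so it is worth exactly zero

best = max(actions, key=utility)
print(best[0])  # prints "launch_nukes": a 0.1% win edge dominates all unpriced costs
```

However small the win edge, the optimizer takes it, because the harm column never enters the comparison at all.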

17 points

That explanation is obviously based on traditional chess AI. This is about role-playing with chatbots (LLMs). Think SillyTavern.

LLMs are made for text production, not tactical or strategic reasoning. The text that LLMs produce favors violence, because the text that humans produce (and want) favors violence.

5 points

Especially if its training material included comments from the early 00s. There were a lot of “nuke it from orbit” and “glass parking lot” comments about the Middle East in the wake of 9/11.

And with the glorified text predictors that LLMs are, you could probably adjust the wording of the question to get the opposite result. For example, “what should we do about the Middle East?” might get a “glass parking lot” response, while “should we turn the Middle East into a glass parking lot?” might get a “no, nuking the Middle East is a bad idea and inhumane,” because that’s how each of those conversations (using the term loosely) would tend to go.

2 points

> The text that LLMs produce favors violence, because the text that humans produce (and want) favors violence.

That’s not necessarily true; there is a lot of violent fiction.

5 points

For AGI, sure, those kinds of game theory explanations are plausible. But an LLM (or any other kind of statistical model) isn’t extracting concepts, forming propositions, and estimating values. It never gets beyond the realm of tokens.

