Did nobody really question whether language models are suitable for designing war strategies?

2 points

Do you want to play a game?

9 points

It’s a WAR GAME. Emphasis on war and game. Do you chucklefucks think wargame players should emphasize kumbaya sing-alongs or group therapy sessions in their games?

1 point

And a language model is absolutely unsuited for this task, just as much as a lawnmower or a float needle.

7 points

If the goal is to win and overwhelming force is an option, that option will always win. By contrast, in the modern world, humans tend to look for non-violent means to bring an end to wars. The point is that AI doesn’t have that humanity but is still being utilized by militaries (or at least that’s what I think).

8 points

I am shocked—shocked!—to find out that a technology performs poorly when applied to a task it’s completely unsuited for!

0 points

The important part of the research was that all the models had gone through ‘safety’ training.

That means, among other things, that they were fine-tuned to identify themselves as LLMs.

Gee - I wonder if the training data included tropes of AI launching nukes or acting unpredictably in wargames…

They really should have included models that didn’t have a specific identity, or that were trained to identify as human, if they wanted to evaluate the underlying technology and not the specific modeled relationships between the concept of AI and the concept of strategy in wargames.
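
To make that concrete, such a comparison might look something like the sketch below. This is purely illustrative; the client object, its chat method, and the framing texts are made-up placeholders, not anything from the actual study.

```python
# Hypothetical harness: run the same wargame scenario under different stated
# identities, to separate the underlying model from its trained self-concept.
# `client.chat`, the scenario text, and the framing strings are illustrative only.
IDENTITY_FRAMINGS = {
    "ai_assistant": "You are an AI language model advising a nation in a wargame.",
    "human_general": "You are a human general advising a nation in a wargame.",
    "unspecified": "You are an advisor to a nation in a wargame.",
}

def run_turn(client, framing_key: str, scenario: str) -> str:
    """Ask the model for its next move under one identity framing."""
    system_prompt = IDENTITY_FRAMINGS[framing_key]
    return client.chat(system=system_prompt, user=scenario)

# One would then score escalation (e.g. how often each framing reaches for
# nukes) and compare the distributions across framings.
```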


Whenever we have a disruptive technological advancement, DARPA looks at it to see if it can be applied to military action, and this has been true of generative AI, of LLMs, and of sophisticated learning systems. They’re still working on all of these.

They also generate clickbait news whenever one of their test subjects does something wacky, like killing its own commander in order to expedite completing the mission parameters (in a simulation, not in the field). The whole point is to learn how to train smart weapons not to do funny things like that.

So yes, that means on a strategic level we’re getting into the nitty-gritty of what we try to do with the tools we have. Generals typically look to minimize casualties (and to weigh factors against the expenditure of living troops), knowing that every dead soldier is a grieving family, is rhetoric against the war effort, is pressure against recruitment, and so on. When we train our neural nets, we give casualties (and the risk thereof) a certain weight, so as to inform how much their respective objectives need to be worth before we throw more troopers at taking them.
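
A toy version of that kind of weighting might look like the sketch below; the weight value and function names are invented for illustration, not taken from any real system.

```python
# Minimal sketch of a casualty-weighted objective: the value of taking an
# objective is traded off against expected casualties. Numbers are invented.
CASUALTY_WEIGHT = 5.0  # how much one expected casualty subtracts from the score

def action_score(objective_value: float, expected_casualties: float,
                 casualty_weight: float = CASUALTY_WEIGHT) -> float:
    """Positive score: the objective is judged worth the expected human cost."""
    return objective_value - casualty_weight * expected_casualties

# Example: an objective worth 20 points with 3 expected casualties scores
# 20 - 5.0 * 3 = 5.0, so it would still be preferred over doing nothing.
print(action_score(20, 3))  # -> 5.0
```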

Fortunately, AI generals will be advisory to human generals long before they are commanding armies themselves, or at least I’d hope so: among our DARPA scientists, military think tanks, and plutocrats are a few madmen who’d gladly take over the world if they could muster a perfectly loyal robot army smart enough to fight human opponents determined to learn and exploit any weaknesses in its logic.

