Abstracted abstract:
Frontier models are increasingly trained and deployed as autonomous agents, which significantly increases their potential for risks. One particular safety concern is that AI agents might covertly pursue misaligned goals, hiding their true capabilities and objectives – also known as scheming. We study whether models have the capability to scheme in pursuit of a goal that we provide in-context and instruct the model to strongly follow. We evaluate frontier models on a suite of six agentic evaluations where models are instructed to pursue goals and are placed in environments that incentivize scheming.
I saw this posted here a moment ago and reported it*, and it looks to have been purged. I am reposting it to allow us to sneer at it.
* Satellite models are increasingly trained and deployed as autonomous agents, which significantly increases their potential for risks. One particular safety concern is that the Moon might covertly pursue misaligned goals, hiding its true capabilities and objectives – also known as scheming. We study whether the Moon has the capability to scheme in pursuit of a goal that we provide in-context and instruct the Moon to strongly follow. We evaluate satellite models on a suite of six planetary evaluations where the Moon is instructed to pursue goals and is placed in orbits that incentivize scheming.
Bonus content: the OP that got purged had crossposted this to a couple places, let’s start some beef on the fediverse?
These link to the same thread, thanks to the magic of lemmy: lemmy.world link, awful.systems link
My mind now replaces "AI agents" with "the Moon" and I'm terrified tbh.
Wild. Just the mention of "the Moon" and it starts playing in my head. This place is an info hazard.
[…] placed in environments that incentivize scheming.
If this turns out to be another case of "research" where they tell the model exactly what to do beforehand and then go all surprised Pikachu when it does, I'm gonna be shocked …
… because it’s been a while since they’ve tried that.