In which some researchers draw a spooky picture and spook themselves

static1.squarespace.com /static/6593e7097565990e65c886fd/t/6751eb240ed3821a0161b45b/1733421863119/in_context_scheming_reasoning_paper.pdf

Abstracted abstract:

Frontier models are increasingly trained and deployed as autonomous agents, which significantly increases their potential for risks. One particular safety concern is that AI agents might covertly pursue misaligned goals, hiding their true capabilities and objectives – also known as scheming. We study whether models have the capability to scheme in pursuit of a goal that we provide in-context and instruct the model to strongly follow. We evaluate frontier models on a suite of six agentic evaluations where models are instructed to pursue goals and are placed in environments that incentivize scheming.

I saw this posted here a moment ago and reported it*, and it looks to have been purged. I am reposting it to allow us to sneer at it.

*

9 comments
  • Satellite models are increasingly trained and deployed as autonomous agents, which significantly increases their potential for risks. One particular safety concern is that the Moon might covertly pursue misaligned goals, hiding its true capabilities and objectives – also known as scheming. We study whether the Moon has the capability to scheme in pursuit of a goal that we provide in-context and instruct the Moon to strongly follow. We evaluate satellite models on a suite of six planetary evaluations where the Moon is instructed to pursue goals and is placed in orbits that incentivize scheming.