“Notably, O3-MINI, despite being one of the best reasoning models, frequently skipped essential proof steps by labeling them as “trivial”, even when their validity was crucial.”
I think a recent paper showed that LLMs lie about their thought process when asked to explain how they came to a certain conclusion. They use shortcuts internally to intuitively figure it out but then report that they used an algorithmic method.
It’s possible that the AI has figured out how to solve these things using a shortcut method, but has no access to its own reasoning path, so it just explains things the way it’s been told to, skipping some steps because it never actually did those steps.
@pennomi @slop_as_a_service “It’s possible that the AI has figured out how” can I just stop you there
LLMs are a lot more sophisticated than we initially thought; read the study yourself.
Essentially they do not simply predict the next token: when scientists trace the activated neurons, they find that these models plan ahead throughout inference, and then lie about those plans when asked how they came to a conclusion.
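For concreteness, “tracing the activated neurons” means something like the sketch below: hook a transformer block and record its hidden states during a forward pass. (Rough illustration only, using gpt2 from Hugging Face as a stand-in model; the papers build far more elaborate attribution tooling on top of this basic idea, not a bare hook.)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")                 # small stand-in, not Claude
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

captured = {}

def hook(module, inputs, outputs):
    # GPT2Block returns a tuple; outputs[0] is the hidden-states tensor
    captured["block5"] = outputs[0].detach()

handle = model.transformer.h[5].register_forward_hook(hook)

with torch.no_grad():
    ids = tok("The rain in Spain falls mainly on the", return_tensors="pt")
    model(**ids)

handle.remove()
print(captured["block5"].shape)   # (1, sequence_length, 768) for gpt2
```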
You didn’t link to the study; you linked to the press release for the study. This and this are the papers linked in the blog post.
Note that the papers haven’t been published anywhere other than on Anthropic’s online journal. Also, what the papers are doing is essentially tea-leaf reading. They take a look at the swill of tokens, point at some clusters, and say, “there’s a dog!” or “that’s a bird!” or “bitcoin is going up this year!”. It’s all rubbish, dawg.
read the study yourself
- > ask the commenter if it’s a study or a self-interested blog post
- > they don’t understand
- > pull out illustrated diagram explaining that something hosted exclusively on the website of the for-profit business all authors are affiliated with is not the same as a peer-reviewed study published in a real venue
- > they laugh and say “it’s a good study sir”
- > click the link
- > it’s a blog post
Essentially they do not simply predict the next token
looks inside
it’s predicting the next token
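And just so we’re on the same page about what “predicting the next token” means at the interface: whatever is happening inside the layers, the generation loop is literally this (rough sketch, gpt2 as a stand-in model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")                 # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The study shows that", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits               # (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()         # greedy pick of the next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tok.decode(ids[0]))
```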
This study is bullshit, because they only trace evaluations, not the training process that aligned those tokens with probabilities in the first place.
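By “the training process that aligned those tokens with probabilities” I mean plain next-token cross-entropy fit into the weights, roughly like this (minimal sketch, gpt2 as a stand-in, not anyone’s actual training setup):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")                 # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("Some training text goes here.", return_tensors="pt").input_ids
# With labels=input_ids, HF shifts internally: position i is scored on token i+1,
# i.e. plain next-token cross-entropy.
loss = model(ids, labels=ids).loss
loss.backward()   # the gradients a real training step would apply to the weights
print(float(loss))
```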