“Notably, O3-MINI, despite being one of the best reasoning models, frequently skipped essential proof steps by labeling them as “trivial”, even when their validity was crucial.”

32 points

> “Notably, O3-MINI, despite being one of the best reasoning models, frequently skipped essential proof steps by labeling them as “trivial”, even when their validity was crucial.”

LLMs achieve the reasoning level of the average rationalist

16 points

This is actually an accurate representation of most “gifted olympiad laureate attempting to solve a freshman CS problem on the blackboard” students I’ve gone to uni with.

Jumps to the front 5 seconds after the task is assigned, bluffs that the problem is trivial, tries to salvage their reasoning for 5 minutes when questioned by the tutor, turns out the theorem they called trivial is actually false, sits down having wasted 10 minutes of everyone’s time.

9 points

I just remember a professor saying that after he filled the board with proofs and math: ‘the rest is trivial’. Not sure if it was a joke, as I found none of it trivial (and neither did the rest of the people taking the course).

7 points

This needed a TW jfc (jk, uh, sorta)

7 points

TW: contains real chuds

15 points

“Trivially” fits nicely in a margin, too. Suck on that, Andrew and Pierre!

12 points

It’s a very human and annoying way of bullshitting. I took every opportunity to crush this habit out of undergrads: “If you say trivial, obvious, or clearly, that usually means you’re making a mistake and you’re avoiding thinking about it.”

5 points

feels a lot like my “‘just’ is a weasel word” speech

5 points

I heard the new Gemini got the first question, so that’s SOTA now*

*allegedly it came out the same day as the math olympiad, so ’twas fair, but who the fuck knows

-16 points

I think a recent paper showed that LLMs lie about their thought process when asked to explain how they came to a certain conclusion. They use shortcuts internally to intuitively figure it out but then report that they used an algorithmic method.

It’s possible that the AI has figured out how to solve these things using a shortcut method, but is incapable of realizing its own thought path, so it just explains things in the way it’s been told to, missing some steps because it never actually did those steps.

32 points

@pennomi @slop_as_a_service “It’s possible that the AI has figured out how” can I just stop you there

28 points

“thought process” lol.

15 points

“Thought process”

“Intuitively”

“Figured out”

“Thought path”

I miss the days when the consensus reaction to Blake Lemoine was to point and laugh. Now the people anthropomorphizing linear algebra are being taken far too seriously.

-15 points

LLMs are a lot more sophisticated than we initially thought; read the study yourself.

Essentially they do not simply predict the next token: when scientists trace the activated neurons, they find that these models plan ahead throughout inference, and then lie about those plans when asked how they came to a conclusion.

26 points

You didn’t link to the study; you linked to the press release for the study. This and this are the papers linked in the blog post.

Note that the papers haven’t been published anywhere other than on Anthropic’s online journal. Also, what the papers are doing is essentially tea leaf reading. They take a look at the swill of tokens, point at some clusters, and say, “there’s a dog!” or “that’s a bird!” or “bitcoin is going up this year!”. It’s all rubbish dawg

24 points

> read the study yourself

  • > ask the commenter if it’s a study or a self-interested blog post
  • > they don’t understand
  • > pull out illustrated diagram explaining that something hosted exclusively on the website of the for-profit business all authors are affiliated with is not the same as a peer-reviewed study published in a real venue
  • > they laugh and say “it’s a good study sir”
  • > click the link
  • > it’s a blog post
22 points

> Essentially they do not simply predict the next token

looks inside

it’s predicting the next token
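
For anyone who hasn’t actually looked inside: the generation loop is, at bottom, one forward pass and one chosen token at a time. A minimal sketch, assuming a Hugging Face transformers-style causal LM (gpt2 here is just a stand-in for the model in question, greedy decoding for simplicity):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The proof of this lemma is trivial because", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):                      # 20 tokens, produced strictly one at a time
        logits = model(ids).logits           # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()     # greedy: take the single most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tok.decode(ids[0]))
```

Whatever “planning ahead” turns out to mean mechanistically, it all has to be squeezed through that one next-token distribution on every pass.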

15 points

nothx, I can find better fiction on ao3

9 points

This study is bullshit, because they only trace evaluations and don’t trace the training process that aligns tokens with probabilities.

7 points

this is credulous, bro. did you even look at the papers?
