13 points

Isn’t OpenAI saying that o1 has reasoning as a specific selling point?

14 points

They do say that, yes. It's as bullshit as all the other claims they've been making.

8 points

Which is my point, and, forgive me, which I believe is also the point of the research publication.

12 points

They say a lot of stuff.

6 points

My best guess is that it generates several possible replies and then does some sort of token match to determine which one is likely the most accurate. Not sure I’d call that “reasoning”, but I guess it could improve results in some cases. With OpenAI not being so open, it’s hard to tell, though. They’ve been overpromising a lot already, so it may well be complete bullshit.
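
That guess is basically describing a best-of-n / self-consistency setup: sample several candidate replies, then keep whichever one agrees most with the rest. A rough sketch of that idea in Python (the token-overlap scoring and the example candidates are purely my illustration, not anything OpenAI has confirmed about how o1 works):

```python
import re

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the word-token sets of two strings."""
    ta = set(re.findall(r"\w+", a.lower()))
    tb = set(re.findall(r"\w+", b.lower()))
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def pick_most_consistent(candidates: list[str]) -> str:
    """Return the candidate reply that agrees most, on average, with the others."""
    def avg_agreement(i: int) -> float:
        others = [c for j, c in enumerate(candidates) if j != i]
        if not others:
            return 0.0
        return sum(token_overlap(candidates[i], o) for o in others) / len(others)
    best = max(range(len(candidates)), key=avg_agreement)
    return candidates[best]

# Illustrative stand-ins for several sampled model replies (made up, not real o1 output).
candidates = [
    "The answer is 42, since 6 times 7 equals 42.",
    "Six times seven is 42, so the answer is 42.",
    "The answer is 48.",
]
print(pick_most_consistent(candidates))  # prints one of the two "42" replies
```

A real system would more plausibly rank candidates with majority voting over extracted final answers, or a trained reward model, rather than raw token overlap, but the shape of the trick is the same.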

4 points

> My best guess is that it generates several possible replies and then does some sort of token match to determine which one is likely the most accurate.

Didn’t the previous models already do this?

4 points

No idea. I’m not actually using any OpenAI products.
