62 points

Did someone not know this like, pretty much from day one?

Not the idiot executives that blew all their budget on AI and made up for it with mass layoffs - the people actually interested in it. Was it not clear from the start that there was no “reasoning” going on?

-5 points

Seriously, I’ve seen 100x more headlines like this than people claiming LLMs can reason. Either they don’t understand, or think we don’t understand what “artificial” means.

16 points

A lot of people still don’t, from what I can gather from some of the comments on “AI” topics. Even the threads that skew the other way, into “AI” hysteria, are often full of people who know fuck all about how the tech works. “Nudifier” apps, generative images, and explicit chats with bots portraying real or underage people are the most common topics that attract emotionally loaded but highly uninformed demands and outrage. Frankly, the whole “AI” topic in the media is massively overblown on both fronts, but I guess it’s good for traffic, and nuance is dead anyway.

9 points

Indeed, although anyone who has seen a tech hype train once or twice expected nothing less.

PDAs? Quantum computing. Touch screens. Siri. Cortana. Micropayments. Apps. Synergy of desktop and mobile.

From the outset this went from “hey that’s kind of neat” to quite possibly toppling some giants of tech in a flash. Now all we have to do is wait for the boards to give huge payouts to the pinheads that drove this shitwagon in here and we can get back to doing cool things without some imaginary fantasy stapled on to it at the explicit instruction of marketing and channel sales.

6 points

Touch screens?

15 points

XML was also a tech hype for a bit.

And I still remember how media outlets hyped up Second Life, forgot about it, and then rediscovered it a few months later, and the hype started all over again. It was fun.

11 points

Yes.

But the lies around them are so excessive that it’s a lot easier for executives of a publicly traded company to make reasonable decisions if they have concrete support for it.

28 points

there’s a lot of people (especially here, but not only here) who have had the insight to see this being the case, but there’s also been a lot of boosters and promptfondlers (ie. people with a vested interest) putting out claims that their precious word vomit machines are actually thinking

so while this may confirm a known doubt, rigorous scientific testing (and disproving) of the claims is nonetheless a good thing

12 points

No, they do not, I’m afraid. Hell, until a few years ago I didn’t even know that ELIZA caused people to think it could reason (and that this worried its creator).

13 points

Isn’t OpenAI saying that o1 has reasoning as a specific selling point?

12 points

They say a lot of stuff.

14 points

they do say that, yes. it’s as bullshit as all the other claims they’ve been making

8 points

Which is my point, and forgive me, but I believe is the point of the research publication.

6 points

My best guess is it generates several possible replies and then does some sort of token match to determine which one may potentially be the most accurate. Not sure if I’d call that “reasoning” but I guess it could potentially improve results in some cases. With OpenAI not being so open it is hard to tell though. They’ve been overpromising a lot already so it may as well be just complete bullshit.
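The guess above amounts to best-of-n sampling with a reranker: draw several candidate replies, score each, keep the winner. Here is a toy Python sketch of that idea, with a made-up token-overlap scorer standing in for whatever OpenAI actually uses (which, them not being so open, nobody outside knows):

```python
# Toy sketch of the guessed "best-of-n with reranking" mechanism.
# The scorer below is a hypothetical token-overlap heuristic, purely
# for illustration; o1's real internals are not public.

def score(reply: str, prompt: str) -> float:
    """Hypothetical scorer: fraction of reply tokens also found in the prompt."""
    clean = lambda s: [t.strip(".,?!").lower() for t in s.split()]
    prompt_tokens = set(clean(prompt))
    reply_tokens = clean(reply)
    if not reply_tokens:
        return 0.0
    return sum(t in prompt_tokens for t in reply_tokens) / len(reply_tokens)

def best_of_n(prompt: str, candidates: list[str]) -> str:
    # Rerank: keep the candidate with the highest heuristic score.
    return max(candidates, key=lambda r: score(r, prompt))

candidates = [
    "The capital of France is Paris.",
    "France is a country in Europe.",
    "Bananas are yellow.",
]
print(best_of_n("What is the capital of France?", candidates))
# → The capital of France is Paris.
```

Note that a scheme like this can improve results in some cases without any “reasoning” being involved: it only filters samples after the fact.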

4 points

My best guess is it generates several possible replies and then does some sort of token match to determine which one may potentially be the most accurate.

Didn’t the previous models already do this?

36 points

Well, two responses I have seen to the claim that LLMs are not reasoning are:

  1. we are all just stochastic parrots lmao
  2. maybe intelligence is an emergent ability that will show up eventually (disregard the inability to falsify this and the categorical nonsense that is our definition of “emergent”).

So I think this research is useful as a response to these, although I think “fuck off, promptfondler” is pretty good too.

-13 points

Well, are we not stochastic parrots then? Isn’t that an equally philosophical, rhetorical, and unfalsifiable question?

11 points

Only in the philosophical sense of all of physics being a giant stochastic system.

But that’s about as useful as saying we’re Turing machines. Yes, if you draw a broad category of “all things that compute in our universe”, then you can make a reasonable (but disputable!) argument that both me and a Python interpreter belong to the same category of things. That doesn’t mean a Python interpreter is smart/sentient/will solve climate change/whatever Sammy Boi wants to claim this week.

Or, to use a different analogy, it’s like saying “we’re all just cosmic energy, bro”. Yes we are, pass the joint already and stop trying to raise billions of dollars for your energy woodchipper.

15 points

no

27 points

fuck off, promptfondler

16 points

Hark! I hear the wanker roar.

24 points

No, there’s an actual paper where that term originated, and it goes into great detail explaining what it means and what it applies to. It answers those questions and addresses the potential objections people might raise.

There’s no need for–and, frankly, nothing interesting about–“but, what is truth, really?” vibes-based takes on the term.

21 points

“Language is a virus from outer space”

9 points

I thought it came from Babylonian writing that recoded the brains and planted the languages.


TechTakes

!techtakes@awful.systems


Big brain tech dude got yet another clueless take over at HackerNews etc? Here’s the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community
