0 points

Why is that a criticism? This is how it works for humans too: we study, we learn the stuff, and then try to recall it during tests. We've been trained on the data too; neither a human nor an AI would be able to do well on the test without learning the material first.

This is part of what makes AI so "scary": it can basically know so much.

0 points

Don't anthropomorphise. There is quite a difference between a human and an advanced lookup table.

0 points

Well… I do agree with you, but human brains are basically big prediction engines that use lookup tables (experience) to navigate life. Obviously that's a huge simplification, and LLMs are nowhere near humans, but it is quite a step in that direction.

1 point

@phoenixz @Soyweiser “Let’s redefine what it means to be human, so we can say the LLM is human” have you bumped your head?

0 points

I absolutely agree. However, if you think LLMs are just fancy LUTs, then I strongly disagree. Unless, of course, we are also just fancy LUTs.

0 points

Have you ever met an AI researcher with a background in biology? I've discussed this stuff with one. She disagrees with Turing about machines being able to think, even with modern AI in the picture. They process information very differently from how biology does.

0 points

LLMs know nothing. literally. they cannot.

0 points

I guess it comes down to a philosophical question as to what “know” actually means.

But from my perspective, it certainly knows some things. It knows how to determine what I'm asking, and it clearly knows how to formulate a response by stitching together information. Is it perfect? No. But neither are humans; we mistakenly believe we know things all the time, and miscommunications are quite common.

But this is why I asked the follow-up question: what's the effective difference? Don't get me wrong, they clearly have a lot of flaws right now. But my 8-year-old has a lot of flaws too, and I assume both will get better with age.

1 point

i guess it comes down to a philosophical question

no, it doesn’t, and it’s not a philosophical question (and neither is this a question of philosophy).

the software simply has no cognitive capabilities.

0 points

don’t compare your child to a chatbot wtf

0 points

Yeah but neither did Socrates

1 point

but he at least was smug about it

0 points

Because a machine that "forgets" stuff it reads seems rather useless… considering it was a multiple-choice exam and, as a machine, ChatGPT had the book entirely memorized, it should have scored perfectly almost every time.

0 points

ChatGPT had the book entirely memorized

I feel like this exposes a fundamental misunderstanding of how LLMs are trained.
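To make that concrete: training doesn't store the book, it adjusts statistical weights. The toy sketch below is my own illustration, not anything ChatGPT actually does (the real thing is a gradient-trained neural network, not bigram counts), but it shows the basic shape of the idea: "learning the data" means distilling text into probabilities over what tends to follow what, and the source text itself is discarded.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Toy 'training': reduce a text to next-word frequency statistics.

    The returned model holds only probabilities; the original text is
    thrown away and generally cannot be reconstructed verbatim from it.
    """
    tokens = text.split()
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    # Normalise raw counts into next-word probabilities.
    return {
        cur: {nxt: n / sum(c.values()) for nxt, n in c.items()}
        for cur, c in counts.items()
    }

model = train_bigram("the cat sat on the mat and the cat slept")
# "the" was followed by "cat" twice and "mat" once, so the model keeps
# those odds (2/3 and 1/3) rather than the sentence itself.
print(model["the"])
```

A model like this can be wrong in ways a stored copy of the text never would be, which is why perfect recall on a multiple-choice exam is not the expected outcome of training.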


TechTakes

!techtakes@awful.systems
