17 points

It’s sad to see it spit out text from the training set without any actual knowledge of the date and time. It would be more awesome if it could call time.Now(), but that’d be a different story.

40 points

If you ask it today’s date, it actually does answer.

It just doesn’t have any actual knowledge of what it’s saying. I asked it a programming question as well, and each time it would make up a class that doesn’t exist. I’d tell it the class doesn’t exist, and it would go “You are correct, that class was deprecated in {old version}”. It wasn’t. I checked. It knows what the excuses look like in the training data, and just apes them.

It spouts convincing sounding bullshit and hopes you don’t call it out. It’s actually surprisingly human in that regard.

20 points

It spouts convincing sounding bullshit and hopes you don’t call it out. It’s actually surprisingly human in that regard.

Oh great, Silicon Valley’s AI is just an overconfident intern!

2 points

Oh great, Silicon Valley’s AI is just a major tech executive!

10 points

It’s super weird that it would attempt to give a time duration at all, and then get it wrong.

12 points

It doesn’t know what it’s doing. It doesn’t understand the concept of the passage of time or of time itself. It just knows that that particular sequence of words fits well together.

3 points

Bard is kind of trash though. GPT-4 tends to do so much better in my experience.

6 points

I haven’t used GPT-4 for that, but it’s all dependent on the data fed into it. If you ask a question about JavaScript, there’s loads of material out there for it to look at. But ask it about Delphi, and it’ll be less accurate.

And they’ll both suffer from the same issue, which is when they reach the edge of their “knowledge”, they don’t realise it and output data anyway. They don’t know what they don’t know.

3 points

They’re both shit at adding and subtracting numbers, dates, and whatnot… neither of them can do basic math, unfortunately.

1 point

They’re just large language models. I’ve trained a few smaller models myself; they generally spit out the next word based on the words before it. Another thing they’re incapable of is spontaneous generation: they depend heavily on the question, or some preceding string. Yet most companies are already portraying this as AGI!


Programmer Humor

!programmerhumor@lemmy.ml
