What this is about:

Can LLMs trained on A is B infer automatically that B is A?

Models that have memorized “Tom Cruise’s parent is Mary Lee Pfeiffer” in training fail to generalize to the question “Who is Mary Lee Pfeiffer the parent of?” But if the memorized fact is included in the prompt, models succeed.

It’s nice that they can get the latter by matching a template, but problematic that they can’t take an abstraction they superficially grasp in one context and generalize it to another; you shouldn’t have to phrase the question a particular way to get the answer you need.
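The asymmetry can be pictured with a toy analogy (not an LLM, just an illustration): a mapping memorized in one direction answers forward queries for free, but reverse queries only work if the inverse relation is materialized explicitly, which is roughly what putting the fact in the prompt does.

```python
# Toy analogy for the reversal curse: a one-directional "memorized" mapping.
# Names here are just the example from the comment above, not a real dataset.
parent_of = {"Tom Cruise": "Mary Lee Pfeiffer"}  # stored as "A's parent is B"

def forward(child):
    # "Who is Tom Cruise's parent?" — direct lookup succeeds.
    return parent_of.get(child)

def reverse(parent):
    # "Who is Mary Lee Pfeiffer the parent of?" — only answerable if we
    # explicitly build the inverse mapping, analogous to restating the
    # fact in the prompt.
    child_of = {v: k for k, v in parent_of.items()}
    return child_of.get(parent)

print(forward("Tom Cruise"))        # Mary Lee Pfeiffer
print(reverse("Mary Lee Pfeiffer")) # Tom Cruise
```

The point of the analogy: storing “A is B” does not automatically make “B is A” retrievable; that inference has to be constructed somewhere.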
