ChatGPT has meltdown and starts sending alarming messages to users
AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them
Things we know so far:
- Humans can train LLMs with new data, which means they can acquire knowledge.
- LLMs have been proven to apply knowledge: they are acing exams that most humans wouldn't dream of even understanding.
- We know multi-modality is possible, which means these models can acquire skills.
- We already saw that these skills can be applied. If it wasn't possible to apply their outputs, we wouldn't use them.
- We have seen models learn and generate strategies that humans didn't even conceive. We've seen them solve problems that were unsolvable to human intelligence.
So what's missing from that definition of intelligence? The only thing missing is our willingness to create a system that can train and update itself, and that is possible.
Can a LLM learn to build a house and then actually do it?
LLMs are demonstrably wrong about a lot of things, so I would argue these aren't "skills", and they aren't capable of acting on those "skills" effectively.
At least with human intelligence you can be wrong and quickly realize that you are wrong. LLMs have no clue whether they are right or not.
There is a big difference between actual skill and just a predictive model based on statistics.
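To make "a predictive model based on statistics" concrete, here is a minimal sketch (with a made-up toy corpus) of next-word prediction by frequency counting alone: the model picks whatever followed most often in its training data, with no notion of whether the answer is true.

```python
# Toy sketch: a bigram "language model" that predicts the next word
# purely from observed frequencies. The corpus is hypothetical.
from collections import Counter, defaultdict

corpus = "the sky is blue the sky is blue the sky is green".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word -- right or wrong."""
    return following[word].most_common(1)[0][0]

print(predict("is"))  # -> "blue" (seen twice, vs. "green" seen once)
```

Real LLMs replace the counting with a neural network over far more context, but the output is still a statistical guess at the next token, not a checked claim about reality.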
Is an octopus intelligent? Can an octopus build an airplane?
Why do you expect these models to have human skills if they are not humans?
How can they build a house if they don't even have vision or a physical body? Can a paralyzed human who can only hear and speak build a house? Is that human intelligent?
This is clearly not human intelligence, it clearly lacks human skills. Does it mean it isn’t intelligent and it has no skills?
Exactly. They are just “models”. There is nothing intelligent about them.
Yes, octopuses are very intelligent. They can think their way out of a box without relying on curated data to train them.
Logic, reasoning, and deduction. LLMs have zero ability to reject data based on their understanding of reality. Big diff.