Large language models may seem smart on the surface, but they struggle to genuinely understand the real world and model it accurately, a new study finds.
I would argue humans often have a world model that is too coherent. If you ask a flat-earther about their beliefs, they will always insist the earth is flat and that any evidence to the contrary is manufactured or misinterpreted. That is a completely absurd world model, but a perfectly coherent one.