Large language model AIs might seem smart on a surface level, but a new study finds they struggle to genuinely understand the real world and model it accurately.
This raises concerns that AI systems deployed in real-world settings, say in a driverless car, could malfunction when faced with dynamic environments or unfamiliar tasks.
This is already happening with driverless cars that use machine learning, so the problem goes beyond LLMs; it is a general machine learning issue. Last time I checked, Waymo cars needed human intervention every six miles. These cars often block each other, are confused by the simplest of obstacles, can't reliably detect pedestrians, and so on.