Mark Rober just set up one of the most interesting self-driving tests of 2025, and he did it by imitating Looney Tunes. The former NASA engineer and current YouTube mad scientist recreated the classic gag where Wile E. Coyote paints a tunnel onto a wall to fool the Road Runner.
Only this time, the test subject wasn’t a cartoon bird… it was a self-driving Tesla Model Y.
The result? A full-speed, 40 MPH impact straight into the wall. Watch the video and tell us what you think!
Still, this is something the car ought to take into account. What if there’s a pane of glass in the way?
A camera makes it look more convincing than it is. In real life, seen with two eyes, it would be far more obvious; murals like this are only convincing from one specific vantage point.
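The “two eyes” point can be made concrete with a toy parallax calculation. This is just an illustrative sketch with made-up numbers (the focal length, baseline, and depths are all assumptions, not measurements from the video): points at different depths shift by different amounts between two viewpoints, but every point on a flat painted wall sits at the same depth, so the painting gives itself away to any stereo observer.

```python
# Toy stereo-parallax check: why a painted "tunnel" fools one camera
# but not two eyes. All numbers below are hypothetical.

FOCAL_PX = 800.0    # focal length in pixels (assumed)
BASELINE_M = 0.065  # separation between the two viewpoints, ~human eyes (assumed)

def disparity(depth_m: float) -> float:
    """Horizontal pixel shift between left/right views for a point at depth_m."""
    return FOCAL_PX * BASELINE_M / depth_m

# A real road recedes: depths vary, so disparity varies across the scene.
real_road_depths = [5.0, 20.0, 80.0]
real_disparities = [disparity(z) for z in real_road_depths]

# A painted wall is flat: every "distant" point is actually at the wall.
wall_depth = 10.0
painted_disparities = [disparity(wall_depth) for _ in real_road_depths]

spread = max(real_disparities) - min(real_disparities)
painted_spread = max(painted_disparities) - min(painted_disparities)
print(spread, painted_spread)  # real scene has spread; flat painting has none
```

With a second viewpoint, the flat wall produces zero disparity variation where a real receding road would produce a lot, which is exactly why the illusion only works from one spot.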
That’s true, but it still makes it far more understandable that a car without LiDAR would be fooled by it. And there is no way you would ever end up in a situation like this, whereas the scenario in the thumbnail could actually happen. That’s why it’s so misleading; can people not see that?
I absolutely hate Elon Musk and support the boycott of Tesla and Starlink, but this is a bit too misleading even with that in mind.
So, your comment got me thinking… surely, in a big country like the US of A, this mural must actually exist already, right?
Of course it does. It’s an art piece in Columbia, S.C.: https://img.atlasobscura.com/90srIbBi-XX-H9u6i_RykKIinRXlpclCHtk-QPSHixk/rt:fit/w:1200/q:80/sm:1/scp:1/ar:1/aHR0cHM6Ly9hdGxh/cy1kZXYuczMuYW1h/em9uYXdzLmNvbS91/cGxvYWRzL3BsYWNl/X2ltYWdlcy85ZTUw/M2ZkZDAxZjVhN2Rm/NmVfOTIyNjQ4NjQ0/OF80YWVhNzFkZjY0/X3ouanBn.webp
A full article about it: https://www.atlasobscura.com/places/tunnelvision
How would Tesla FSD react to Tunnelvision, I wonder? How would Tesla FSD react to an overturned semi truck with a realistic depiction of a highway on it? JK, Tesla FSD crashes directly into overturned semis even without the image depiction issue.
I don’t think the test is misleading. It’s puffed up for entertainment purposes, but in being puffed up, it draws attention to an important drawback of optical-only self-driving cars, which is otherwise a difficult and arcane topic to draw everyday people’s attention to.
As much as I want to hate on Tesla, seeing this, it hardly seems like a fair test.
From the perspective of the car, it’s almost perfectly lined up with the background. It’s a very realistic painting, and any AI trained on image data would obviously struggle with this. AI doesn’t have the human component that lets us infer information from context. We can see the borders and know they don’t fit. They shouldn’t be there, so even if the painting is perfectly lined up and looks photorealistic, we know something is up because it’s got edges and a frame holding it up.
This test, in the context of the title of this article, relies on a fairly dumb pretense that:
- Computers think like humans
- This is a realistic situation that a human driver would find themselves in (or that realistic paintings of very specific roads exist in nature)
- There is no chance this could be trained out of them. (If it mattered enough to do so)
This doesn’t just affect Teslas. It affects any car that uses AI assistance for driving.
Having said all that… fuck elon musk and fuck his stupid cars.
This doesn’t just affect Teslas. It affects any car that uses AI assistance for driving.
Except for, you know… cars that don’t rely solely on optical input and have LiDAR, for example.
Fair point. But it doesn’t really address the other things I said.
But I suppose, based on already getting downvoted, that I’ve got a bad take. Either that, or the people downvoting me don’t understand that I can hate Tesla and Elon, think their cars are shit, and still see that tests like this can be nuanced. The attitude that paints with a broad brush is the type of attitude that got Trump elected…
I am fairly dumb. Like, I am both dumb and I am fair-handed.
But, I am not pretentious!
So, let’s talk about your points and the title. You said I had fairly dumb pretenses, let’s talk through those.
- The title of the article… there is no obvious reason to think that I think computers think like humans, certainly not from that headline. Why do you think that?
- There are absolutely realistic situations exactly like this; that’s not a pretense. Don’t think Looney Tunes. Think of an 18-wheeler with a realistic photo of a highway printed on the side, or a billboard with the same. The academic article linked in my article, where three PhD-holding engineers discuss the issue at length, covers exactly this. It has been accepted by peer-reviewed science for years.
- Yes, I agree. That’s not a pretense, that’s just… a factually correct observation. You can’t train an AI to avoid optical illusions if its only sensor input is optical. That’s why the Tesla choice to skip LiDAR and remove radar is a terminal case of the stupids. They’ve invested in a dead-end sensor suite, as evidenced by their earning the title of Most Lethal Car Brand on the Road.
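The point about a single sensor modality can be sketched in a few lines of code. This is a hypothetical toy decision rule, not any vendor’s actual planner: the idea is simply that a LiDAR range return measures geometry directly, so it can veto a camera classifier that has been fooled into seeing open road in a mural. The function name, parameters, and threshold are all illustrative assumptions.

```python
# Minimal sensor-fusion sketch (hypothetical logic, not real ADAS code):
# a LiDAR return inside braking distance overrides a fooled camera.

from typing import Optional

def plan_action(camera_sees_clear_path: bool,
                lidar_range_m: Optional[float],
                braking_distance_m: float = 40.0) -> str:
    """Return 'proceed' or 'brake' by cross-checking the two sensors."""
    # LiDAR measures distance physically: a solid return closer than the
    # braking distance wins over whatever the camera's classifier believes.
    if lidar_range_m is not None and lidar_range_m < braking_distance_m:
        return "brake"
    return "proceed" if camera_sees_clear_path else "brake"

# Painted wall: camera is fooled, but LiDAR still reports a surface at 30 m.
print(plan_action(camera_sees_clear_path=True, lidar_range_m=30.0))

# Camera-only stack: there is no second opinion to consult.
print(plan_action(camera_sees_clear_path=True, lidar_range_m=None))
```

With `lidar_range_m=None` standing in for a camera-only car, the fooled classifier’s “clear path” is the only signal available, which is the structural problem: no amount of training on images fixes an input that cannot measure depth.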
This does just impact Teslas, because they do not use LiDAR. To my knowledge, they are the only popular ADAS in the American market that would be fooled by a test like this.
Near as I can tell, you’re basically wrong point by point here.
Excuse me.
- Did you write the article? I genuinely wasn’t aiming my comment at you. It was merely commentary on the context implied by the title. I just watched a clip of the car hitting the board. I didn’t read the article, so I specified that I was referring to the article’s title, not the author, not the article itself. Because it’s the title I was commenting on.
- That wasn’t an 18-wheeler; it was a ground-level board with a photorealistic picture that matched the background it was set up against. It wasn’t a mural on a wall, or some other illusion with completely different properties. So no, I think this extremely specific setup is unrealistic and not comparable to the actual scientific research, which I don’t dispute. I don’t dispute that the lack of LiDAR is why Teslas have this issue, or that an autonomous driving system with only one type of sensor is a bad one. Again: I said I hate Elon and Tesla. Always have.
All I was saying is that this test, which is designed in a very specific way and produces a very specific result, is pointless. It’s like me getting a bucket with a hole in it, hypothesising that if I pour in water it will leak out of the hole, then proving it and saying: look! A bucket with a hole in it leaks water…
I agree that this just isn’t a realistic problem, and that there are other problems with Teslas that are far more realistic.
Tell that to the guy who lost his head when his Tesla thought a reflective semi truck was the sky.