Tesla’s FSD Fails To Detect Deer On The Road, Doesn’t Even Slow After Impact
- Tesla’s semi-autonomous FSD system relies on camera-based video processing as its primary means of navigation.
- It didn’t see a deer in the road, and neither did the driver, who did nothing to avoid hitting the animal.
- The incident raises serious questions about Tesla’s reliance on cameras alone.
Semi-autonomous driving requires a car to navigate the world through sensors. In most cases, automakers use radar or lidar to map their surroundings. Tesla, however, relies on camera images alone. Now, after a driver and Full Self-Driving tech combined to kill a deer on the highway when neither saw the animal, it’s a good time to consider whether vision-only is the right way forward.
A Tesla driver recently posted to X about the accident, saying, “Hit the deer with my Tesla. FSD didn’t stopped, even after hitting the deer on full speed. Huge surprise after getting a dozen of false stops every day!” That’s right: according to him, the car not only failed to detect the deer before the impact, it didn’t register the crash itself either.
More: Tesla Driver Who Hit And Killed Motorcyclist Was Allegedly Looking At His Phone And Using FSD
The driver claims he didn’t slow or attempt to avoid the animal because “while the deer is slightly visible in the video for a second before impact, to the human eye, it just looked like another uneven patch of road.” Ultimately, the deer died on impact and the Tesla drove away with its hood pushed back by about an inch.
Video uncovered by Jalopnik shows the moments before impact, and indeed, the deer is in an odd position: not only is it roughly aligned with an old painted lane line, it’s also not moving. Still, it’s worth asking whether a lidar or radar system would have missed the animal in the middle of the road the way FSD and this human driver did.
Another video posted in reply to this one does show FSD avoiding a deer as it moves from the right side of the lane to the left in similarly dim conditions. Clearly, the system is capable of avoiding many accidents. It’s also worth noting that human drivers operate in a way not too dissimilar from what Tesla is trying to achieve: we all process visual imagery and then use our bodies to control the vehicle.
In theory, that’s what Tesla’s system does too, but going vision-only appears to come at a cost to safety. Radar and lidar can see through things like fog; in essence, they can detect objects farther ahead than eyes or cameras can. Perhaps this is one more good example of why additional safety nets are worth considering.
Image: Paul S@X