It seems from the outside like he locked Tesla into a losing, NN-only direction for autonomous driving.
(I don't know what he did at OpenAI.)
(E.g. Mercedes has achieved Level 3 already.)
LiDAR directly measures the distance to objects. Tesla is instead inferring it from two cameras.
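For the curious, here's roughly what that inference looks like, a minimal sketch using OpenCV's classic block matcher. The file names, focal length, and baseline are placeholders I made up, not Tesla's actual pipeline:

```python
import cv2
import numpy as np

# Rectified stereo pair; "left.png"/"right.png" are hypothetical inputs.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Classic block-matching disparity; numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=128, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Assumed camera parameters (placeholders): focal length in pixels, baseline in metres.
FOCAL_PX = 700.0
BASELINE_M = 0.3

# Depth from disparity: Z = f * B / d. LiDAR reads Z out directly; stereo has to earn it.
valid = disparity > 0
depth_m = np.where(valid, FOCAL_PX * BASELINE_M / np.maximum(disparity, 1e-6), np.inf)
```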
There has been plenty of research to date [1] showing that LiDAR + vision is significantly better than vision-only at determining object bounding boxes, especially under edge-case conditions (e.g. night, inclement weather).
[1] https://iopscience.iop.org/article/10.1088/1742-6596/2093/1/...
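The fusion step this kind of research builds on is conceptually simple: project LiDAR returns into the image so every detected 2-D box gets real metric depths, independent of lighting. A minimal sketch; K and T below are made-up placeholders, a real rig needs calibrated intrinsics and extrinsics:

```python
import numpy as np

# Placeholder intrinsics (focal length, principal point) and LiDAR->camera extrinsics.
K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])
T = np.eye(4)  # identity here; a calibrated rigid transform in practice

def project_lidar(points_xyz: np.ndarray) -> np.ndarray:
    """Return (u, v, depth) for LiDAR points that land in front of the camera."""
    pts_h = np.c_[points_xyz, np.ones(len(points_xyz))]  # homogeneous coords
    cam = (T @ pts_h.T).T[:, :3]                         # into the camera frame
    cam = cam[cam[:, 2] > 0]                             # keep points ahead of the lens
    uv = (K @ cam.T).T
    return np.c_[uv[:, :2] / uv[:, 2:3], cam[:, 2]]      # pixel coords + metric depth
```

Any projected point falling inside a vision detector's bounding box then hands that box a directly measured distance, which is exactly what vision-only has to estimate.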
People keep repeating this, and I seriously don't know why. Stereo vision gives pretty crappy depth; ask anyone who has played around with disparity mapping.
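A quick back-of-envelope supports the "crappy" claim: since Z = fB/d, a fixed disparity error produces depth error that grows with Z². With assumed numbers (700 px focal length, 30 cm baseline, half a pixel of disparity noise, all placeholders):

```python
# dZ/dd = -Z^2 / (f*B), so depth error grows quadratically with distance.
f_px, baseline_m, disp_err_px = 700.0, 0.3, 0.5

for z in (5, 20, 50, 100):  # metres
    depth_err = (z ** 2) * disp_err_px / (f_px * baseline_m)
    print(f"at {z:>3} m: ~{depth_err:.1f} m depth error")
```

That works out to roughly 0.1 m of error at 5 m but ~6 m at 50 m and ~24 m at 100 m, which is why stereo depth falls apart at exactly the ranges that matter for driving.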
Modern machine vision needs just one camera for depth, especially if that camera is moving. We humans have no trouble inferring depth with just one eye.
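Single-camera depth is typically learned rather than triangulated. One hedged sketch using the publicly available MiDaS model from torch.hub (model and transform names as published by intel-isl; the input frame is a placeholder, and network access is assumed on first run):

```python
import cv2
import torch

# Monocular depth with MiDaS via torch.hub (downloads weights on first run).
model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
model.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)  # hypothetical frame
with torch.no_grad():
    # The network predicts *relative* inverse depth per pixel -- absolute scale
    # is unrecoverable from a single static image; camera motion (or LiDAR) supplies it.
    rel_depth = model(transform(img))
```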
The usual arguments against LiDAR:
- it costs too much
- it's ugly
- humans manage with vision only
Tesla engineers wanted LiDAR badly, but they were allowed to use it on only one model.
I think that autonomous driving in all conditions is mostly impossible. It will be widely available only in very controlled and predictable conditions (highways, small and perfectly mapped cities, etc.).
As for Mercedes vs. Tesla capabilities, it's mostly marketing... If you're interested, I'll find an article that discussed that.