zlacker

1. throwt+(OP)[view] [source] 2024-02-14 02:35:19
>>mfigui+N3
Karpathy is a great instructor, but has he had any meaningful impact in industry?

It seems from the outside like he locked Tesla into a losing, NN-only direction for autonomous driving.

(I don’t know what he did at openai.)

2. erikpu+h[view] [source] 2024-02-14 02:38:11
>>throwt+(OP)
What makes you say it’s a losing direction?
3. mfigui+N3[view] [source] 2024-02-14 03:08:18
4. throwt+36[view] [source] 2024-02-14 03:27:02
>>erikpu+h
A popular take in autonomous driving is that the thing preventing Tesla from breaking beyond Level 2 autonomy is its aversion to lidar, which is a direct result of its NN preference.

(E.g., Mercedes has already achieved Level 3.)

5. trhway+be[view] [source] 2024-02-14 04:43:05
>>throwt+36
Absence of lidar is just a symptom. Tesla only recently started working with a 3D model (which they derive from cameras, much as one can get it from lidar). It's just that the people who use lidar usually work with a 3D model from the beginning.
6. threes+Zl[view] [source] 2024-02-14 06:20:57
>>trhway+be
> which they get from cameras like one can get it from lidar

LiDAR directly measures the distance to objects. What Tesla is doing is inferring it from two cameras.
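To see why "inferring" matters: stereo depth comes from triangulation, Z = f·B/d, so a small disparity error at long range turns into a large depth error, whereas LiDAR's time-of-flight error is roughly constant with distance. A minimal sketch (all numbers are illustrative assumptions, not Tesla's actual camera parameters):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair, via triangulation.

    Z = f * B / d, where f is focal length in pixels, B the camera
    baseline in meters, and d the disparity in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed parameters: f = 1000 px, baseline = 0.3 m.
f, B = 1000.0, 0.3

# A one-pixel disparity error is benign up close...
near = stereo_depth(f, B, 30.0)       # 10 m
near_off = stereo_depth(f, B, 29.0)   # ~10.3 m, ~3% error

# ...but severe at range, where disparities are only a few pixels.
far = stereo_depth(f, B, 3.0)         # 100 m
far_off = stereo_depth(f, B, 2.0)     # 150 m, 50% error
```

The same one-pixel matching error that costs ~3% at 10 m costs 50% at 100 m, which is one reason vision-only depth is hardest exactly where highway autonomy needs it most.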

There has been plenty of research to date [1] showing that LiDAR + Vision is significantly better than Vision Only, especially under edge-case conditions (e.g. night or inclement weather) when determining object bounding boxes.

[1] https://iopscience.iop.org/article/10.1088/1742-6596/2093/1/...
