zlacker

[parent] [thread] 9 comments
1. throwt+(OP)[view] [source] 2024-02-14 02:35:19
Karpathy is a great instructor, but has he had any meaningful impact in industry?

It seems from the outside like he locked Tesla into a losing, NN-only direction for autonomous driving.

(I don’t know what he did at OpenAI.)

replies(3): >>thejaz+f >>erikpu+h >>_giorg+ux
2. thejaz+f[view] [source] 2024-02-14 02:37:58
>>throwt+(OP)
Tesla is only now launching their NN-only FSD model...?
3. erikpu+h[view] [source] 2024-02-14 02:38:11
>>throwt+(OP)
What makes you say it’s a losing direction?
replies(1): >>throwt+36
4. throwt+36[view] [source] [discussion] 2024-02-14 03:27:02
>>erikpu+h
A popular take in autonomous driving is that the thing preventing Tesla from breaking beyond Level 2 autonomy is its aversion to lidar, which is a direct result of its NN preference.

(E.g., Mercedes has already achieved Level 3.)

replies(3): >>trhway+be >>anon37+de >>wilg+di
5. trhway+be[view] [source] [discussion] 2024-02-14 04:43:05
>>throwt+36
Absence of lidar is just a symptom. Tesla only recently started working with a 3D model (which they get from cameras, like one can get from lidar). It's just that the people who use lidar usually work with a 3D model from the beginning.
replies(1): >>threes+Zl
6. anon37+de[view] [source] [discussion] 2024-02-14 04:43:18
>>throwt+36
I’m confident that neural networks can process LiDAR data just as they can process camera data. I believe Musk drew a hard line on LiDAR for cost reasons: Tesla is absolutely miserly with the build.
7. wilg+di[view] [source] [discussion] 2024-02-14 05:29:15
>>throwt+36
Man, Mercedes has a killer marketing team.
8. threes+Zl[view] [source] [discussion] 2024-02-14 06:20:57
>>trhway+be
> which they get from cameras like one can get it from lidar

LiDAR directly measures the distance to objects. What Tesla is doing is inferring it from two cameras.

There has been plenty of research to date [1] showing that LiDAR + vision is significantly better than vision only, especially under edge-case conditions (e.g. night, inclement weather) when determining object bounding boxes.

[1] https://iopscience.iop.org/article/10.1088/1742-6596/2093/1/...

replies(1): >>vardum+2p
9. vardum+2p[view] [source] [discussion] 2024-02-14 06:59:24
>>threes+Zl
"What Tesla is doing is inferring it from two cameras."

People keep repeating this. I seriously don't know why. Stereo vision gives pretty crappy depth; ask anyone who has been playing around with disparity mapping.

Modern machine vision requires just one camera for depth. Especially if that one camera is moving. We humans have no trouble inferring depth with just one eye.
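The depth falloff behind the "crappy stereo depth" point can be sketched with the standard pinhole stereo formula Z = f·B/d. A minimal illustration (the focal length and baseline below are assumed example values, not anything from this thread): a fixed one-pixel disparity error produces a depth error that grows roughly with the square of the distance.

```python
# Why stereo depth degrades at range: depth Z = f * B / d, so a fixed
# +/-1 px disparity error blows up as disparity shrinks with distance.
# f_px and baseline_m are made-up example values for illustration.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo model: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

f_px = 1000.0      # focal length in pixels (assumed)
baseline_m = 0.3   # camera baseline in metres (assumed)

for z_true in (5.0, 20.0, 80.0):
    d = f_px * baseline_m / z_true  # true disparity at this depth
    z_off = depth_from_disparity(f_px, baseline_m, d - 1.0)  # 1 px error
    print(f"{z_true:5.1f} m -> disparity {d:6.2f} px, "
          f"1 px error gives {z_off:7.2f} m")
```

At 5 m the one-pixel error barely matters; at 80 m it shifts the estimate by tens of metres, which is the regime that matters for driving.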

10. _giorg+ux[view] [source] 2024-02-14 08:34:42
>>throwt+(OP)
Elon Musk didn't want LIDAR because:

- it costs too much

- it's ugly

- humans have only vision

Tesla engineers wanted LIDAR badly, but they were allowed to use it on only one model.

I think that autonomous driving in all conditions is mostly impossible. It will be widely available only in very controlled and predictable conditions (highways, small and perfectly mapped cities, etc.).

And as for Mercedes vs. Tesla capabilities, it's mostly marketing... If you're interested, I'll find an article that discussed that.
