zlacker

[parent] [thread] 6 comments
1. tome+(OP)[view] [source] 2023-05-16 16:02:26
How can one distinguish this testimony from rhetoric by a group who want to big themselves up and make grandiose claims about their accomplishments?
replies(1): >>digbyb+F1
2. digbyb+F1[view] [source] 2023-05-16 16:07:46
>>tome+(OP)
You can also ask that question about the other side. I suppose we need to look closely at the arguments. I think we’re in a situation where we as a species don’t know the answer to this question. We go on the internet looking for an answer but some questions don’t yet have a definitive answer. So all we can do is follow the debate.
replies(2): >>tome+z5 >>tome+6G
3. tome+z5[view] [source] [discussion] 2023-05-16 16:23:40
>>digbyb+F1
> You can also ask that question about the other side

But the other side is downplaying their accomplishments. For example Yann LeCun is saying "the things I invented aren't going to be as powerful as some people are making out".

replies(1): >>cma+18
4. cma+18[view] [source] [discussion] 2023-05-16 16:33:13
>>tome+z5
In his newest podcast interview (https://open.spotify.com/episode/7EFMR9MJt6D7IeHBUugtoE) LeCun is now saying they will be much more powerful than humans, but that stuff like RLHF will keep them from working against us, drawing an analogy to how dogs can be domesticated. It didn't sound very rigorous.

He also says Facebook solved all the problems with their recommendation algorithms' unintended effects on society after 2016.

replies(1): >>tome+2G
5. tome+2G[view] [source] [discussion] 2023-05-16 19:15:09
>>cma+18
Interesting, thanks! I guess I was wrong about him.
6. tome+6G[view] [source] [discussion] 2023-05-16 19:15:50
>>digbyb+F1
OK, second try, since I was wrong about LeCun.

> You can also ask that question about the other side

What other side? Who on the "other side" is making a self-serving claim?

replies(1): >>cma+Xf4
7. cma+Xf4[view] [source] [discussion] 2023-05-17 19:32:24
>>tome+6G
Many of the more traditional AI ethicists who focused on bias and related issues also tended to devalue AI as a whole and say it was a waste of emissions. Most of them are pretty skeptical of any concerns about superintelligence or the control problem, though now even Gary Marcus is coming around to that (but putting out numbers like it not being expected to be a problem for 50 years). They don't tend to have as big a conflict of interest as far as ownership goes, but they do as far as self-promotion/brand building.