zlacker

[parent] [thread] 7 comments
1. lolind+(OP)[view] [source] 2023-11-18 14:35:20
This time he was ousted because he was hindering the pursuit of the company's non-profit mission. We've been harping on the non-openness of OpenAI for a while now, and it sounds like the board finally had enough.
replies(2): >>hutzli+s5 >>qualif+k6
2. hutzli+s5[view] [source] 2023-11-18 15:07:27
>>lolind+(OP)
"This time he was ousted because he was hindering the pursuit of the company's non-profit mission. "

This is what is being said. But I am not so sure the real reasons discussed behind closed doors are really the same. We will find out if OpenAI does indeed open itself up more; until then I remain sceptical, because a lot of power and money is at stake here.

3. qualif+k6[view] [source] 2023-11-18 15:13:59
>>lolind+(OP)
Those people aren't about openness. They seem to be members of the "AI will kill us all" cult.

The real path to AI safety is regulating applications, not fundamental research, and making fundamental research very open (which they are against).

replies(2): >>bnralt+xc >>taway1+8r
◧◩
4. bnralt+xc[view] [source] [discussion] 2023-11-18 15:52:30
>>qualif+k6
That's what it's looking like to me. It's going to be as beneficial to society as putting Greenpeace in charge of the development of nuclear power.

The singularity folks have been continuously wrong in their predictions. A decade ago, they were arguing the labor market wouldn't recover because robots were taking our jobs. It's unnerving to see these people gaining traction while actively working against technological progress.

◧◩
5. taway1+8r[view] [source] [discussion] 2023-11-18 17:09:03
>>qualif+k6
I want you to be right. But why do you think you're more qualified to say how to make AI safe than the board of a world-leading AI nonprofit?
replies(3): >>ignora+iA >>ethanb+i61 >>qualif+py4
◧◩◪
6. ignora+iA[view] [source] [discussion] 2023-11-18 17:56:53
>>taway1+8r
Someone is going to be right, but we also know that experts have been known to be wrong in the past, ofttimes to catastrophic effect.
◧◩◪
7. ethanb+i61[view] [source] [discussion] 2023-11-18 20:54:07
>>taway1+8r
Literal wishful thinking ("powerful technology is always good") and vested interests ("I like building on top of this powerful technology"), same as always.
◧◩◪
8. qualif+py4[view] [source] [discussion] 2023-11-19 20:35:42
>>taway1+8r
Because I work on AI alignment myself and had been training LLMs long before "Attention Is All You Need" came out (which cites some of my work).