zlacker

[return to "Emmett Shear becomes interim OpenAI CEO as Altman talks break down"]
1. bmitc+yj[view] [source] 2023-11-20 07:19:13
>>andsoi+(OP)
Through all of this, no one has cogently explained why Altman leaving is such a big deal. Why would workers immediately quit their jobs when he has no other company, and does he even know who these workers are? Are these people that desperate to make a buck (or to chase the prospect of big bucks)? It seems like half the people working at the non-profit were not actually concerned about the mission but rather just waiting their turn for big bucks and fame.

What does Altman bring to the table besides raising money from foreign governments and states, apparently? I just do not understand all of this. Like, how does him leaving and getting replaced by another CEO the next week really change anything at the ground level other than distractions from the mission being gone?

And the outpouring of support for someone who was clearly not operating how he marketed himself publicly is strange and disturbing indeed.

2. reissb+ml[view] [source] 2023-11-20 07:31:57
>>bmitc+yj
The board fired Altman for shipping too fast compared to their safety-ist doom preferences. The new interim CEO has said that he wants to slow AI development down 80-90%. Why on earth would you stay, if you joined to build + ship technology?

Of course, some employees may agree with the doom/safety board ideology, and will no doubt stay. But I highly doubt everyone will, especially the researchers who were working on new, powerful models — many of them view this as their life's work. Sam offers them the ability to continue.

If you think this is about "the big bucks" or "fame," I think you don't understand the people on the other side of this argument at all.

3. mianos+ro[view] [source] 2023-11-20 07:52:33
>>reissb+ml
This is exactly why you would want people on the board who understand the technology. Unless they have some other technology we don't know about, which maybe brought all this on, a GPT is not a clear path to AGI. Understanding that is a technical matter that seems to be beyond most people without real experience in the field. It is certainly beyond some dude who lucked into a great training set and became an "expert," much the same way The Knack became industry leaders.
4. famous+4q[view] [source] 2023-11-20 08:01:32
>>mianos+ro
>Unless they have some other technology that we don't know about, that maybe brought all this on, a GPT is not a clear path to AGI.

So Ilya Sutskever, one of the most distinguished ML researchers of his generation, does not understand the technology?

The same guy who's been on record saying LLMs are enough for AGI?

5. fallin+BH[view] [source] 2023-11-20 09:26:12
>>famous+4q
AGI doesn't exist. There is no standard for what makes an AGI, nor a test to prove that an AI is or isn't one once built. There is no engineering design for even a hypothetical AGI like there is for other hypothetical tech, e.g. a fusion reactor, so we have no idea if it would even resemble existing machine learning designs. So how can you be an expert on it? Being an expert on existing machine learning tech, which Ilya absolutely is, doesn't grant that status.