zlacker

[return to "OpenAI staff threaten to quit unless board resigns"]
1. breadw+17[view] [source] 2023-11-20 14:06:24
>>skille+(OP)
If they join Sam Altman and Greg Brockman at Microsoft they will not need to start from scratch because Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

Also keep in mind that Microsoft hasn't actually handed OpenAI $13 billion in cash, because much of that is in the form of Azure credits.

So this could end up being the cheapest acquisition ever for Microsoft: they get a $90 billion company for peanuts.

[1] https://stratechery.com/2023/openais-misalignment-and-micros...

◧◩
2. Lonely+aG[view] [source] 2023-11-20 16:57:44
>>breadw+17
Just a thought... wouldn't one of the board members be like, "If you screw with us any further, we're releasing GPT to the public"?

I'm wondering why that option hasn't been used yet.

◧◩◪
3. vikram+oX[view] [source] 2023-11-20 17:54:49
>>Lonely+aG
Theoretically their concern is around AI safety. Whatever it is in practice, doing something like that would instantly signal to everyone that they are the bad guys and confirm everyone's belief that this was just a power grab.

Edit: since it's being brought up in the thread: they claimed they closed-sourced it because of safety. It was a big, controversial decision and they stood by it, so it's not exactly easy to backtrack.

◧◩◪◨
4. mcv+cg1[view] [source] 2023-11-20 19:00:28
>>vikram+oX
Not sure how that would make them the bad guys. Doesn't their original mission say it's meant to benefit everybody? Open sourcing it fits that a lot better than handing it all to Microsoft.
◧◩◪◨⬒
5. arrowl+ck1[view] [source] 2023-11-20 19:17:20
>>mcv+cg1
All of their messaging, Ilya's especially, has always been that the forefront of AI development needs to be done by a company in order to benefit humanity. He's been very vocal about how important the gap between open source and OpenAI's abilities is, so that OpenAI can continue to align the AI with 'love for humanity'.
◧◩◪◨⬒⬓
6. mcv+bp1[view] [source] 2023-11-20 19:34:49
>>arrowl+ck1
I can read the words, but I have no idea what you mean by them. Do you mean he says that, in order to benefit humanity, AI research needs to be done by a private (and therefore monopolising) company? That seems like a really strange thing to say, except maybe to people who believe all private, profit-driven capitalism is inherently good for everybody (which is probably a common view in SV).
◧◩◪◨⬒⬓⬔
7. colins+XY1[view] [source] 2023-11-20 21:57:45
>>mcv+bp1
the view -- as presented to me by friends in the space but not at OpenAI itself -- is something like "AGI is dangerous, but inevitable. we, the passionate idealists, can organize to make sure it develops with minimal risk."

at first that meant the opposite of monopolization: flood the world with limited AIs (GPT 1/2) so that society has time to adapt (and so that no one entity develops asymmetric capabilities they can wield against other humans). with GPT-3 the implementation of that mission began shifting toward worry about AI itself, or about how unrestricted access to it would allow smaller bad actors (terrorists, or even just some teenager going through a depressive episode) to be an existential threat to humanity. if that's your view, then open models are incompatible.

whether you buy that view or not, it kinda seems like the people in that camp just got outmaneuvered. as a passionate idealist in other areas of tech, i don't like the way this is happening. OpenAI had a mission statement. M$ maneuvered to co-opt that mission, the CEO may or may not have understood as much while steering the company, and now a mass of employees wants to leave when the board steps in to re-align the company with its stated mission. whether or not you agree with the mission: how can i ever join an organization with a for-the-public-good type of mission i do agree with, without worrying that it will be co-opted by the familiar power structures?

the closest (still distant) parallel i can find: the Raspberry Pi Foundation took funding from ARM. is the clock ticking until RPi loses its mission in a similar manner? or does something else prevent that (maybe it's possible to have a mission-driven tech organization so long as the space is uncompetitive)?
