zlacker

[parent] [thread] 51 comments
1. taway1+(OP)[view] [source] 2023-11-22 17:35:31
Some perspective ...

One developer (Ilya) vs. One businessman (Sam) -> Sam wins

Hundreds of developers threaten to quit vs. Board of Directors (biz) refuse to budge -> Developers win

From the outside it looks like developers held the power all along ... which is how it should be.

replies(13): >>rexare+j >>jessen+62 >>philip+y2 >>adverb+o3 >>sokolo+u3 >>hsavit+X4 >>jejeyy+n6 >>dylan6+6d >>zeroha+1n >>Quenti+hr >>m00x+hv >>awb+Hz >>nikcub+xH
2. rexare+j[view] [source] 2023-11-22 17:36:53
>>taway1+(OP)
Money won.
3. jessen+62[view] [source] 2023-11-22 17:43:40
>>taway1+(OP)
Yes, 95% agreement in any company is unprecedented but:

1. They can get equivalent position and pay at the new Microsoft startup during that time, so their jobs are not at risk.

2. Sam approved each hire in the first place.

3. OpenAI is selecting for the type of people who want to work at a non-profit with a goal in mind instead of another company that could offer higher compensation. Mission driven vs profit driven.

Either way on how they got to that conclusion of banding together to quit, it was a good idea, and it worked. And it is a check on power for a bad board of directors, when otherwise a board of directors cannot be challenged. "OpenAI is nothing without its people".

replies(2): >>anders+Xl >>brrrrr+Do
4. philip+y2[view] [source] 2023-11-22 17:45:36
>>taway1+(OP)
Are you sure Ilya was the root of this?

He backed it and then signed the pledge to quit if it wasn't undone.

What's the evidence he was behind it and not D'Angelo?

replies(3): >>dr_dsh+J7 >>__loam+ga >>jivetu+wj
5. adverb+o3[view] [source] 2023-11-22 17:48:59
>>taway1+(OP)
There are three dragons:

Employees, customers, government.

If motivated and aligned, any of these three could end you if they want to.

Do not wake the dragons.

replies(2): >>pdntsp+Qa >>bossyT+jI
6. sokolo+u3[view] [source] 2023-11-22 17:49:18
>>taway1+(OP)
Is your first “-> Sam wins” different than what you intended?
7. hsavit+X4[view] [source] 2023-11-22 17:56:02
>>taway1+(OP)
seems like the union of developers is stronger than the company itself. hence why unions are so frowned upon by big tech corporate leadership
replies(1): >>JacobT+GJ1
8. jejeyy+n6[view] [source] 2023-11-22 18:01:19
>>taway1+(OP)
$$$ vs. Safety -> $$$ wins.

Employees who have $$$ incentive threaten to quit if that is taken away. News at 8.

replies(1): >>baby+T6
9. baby+T6[view] [source] [discussion] 2023-11-22 18:02:54
>>jejeyy+n6
Why are you assuming employees are incentivized by $$$ here, and why do you think the board's reason is related to safety or that employees don't care about safety? It just looks like you're spreading FUD at this point.
replies(4): >>jejeyy+Q7 >>hacker+R8 >>mi_lk+Ia >>DirkH+Wn
10. dr_dsh+J7[view] [source] [discussion] 2023-11-22 18:06:12
>>philip+y2
If we only look at the outcomes (dismantling of board), Microsoft and Sam seem to have the most motive.
11. jejeyy+Q7[view] [source] [discussion] 2023-11-22 18:06:36
>>baby+T6
of course the employees are motivated by $$$ - is that even a question?
replies(1): >>Xelyne+R22
12. hacker+R8[view] [source] [discussion] 2023-11-22 18:11:11
>>baby+T6
The large majority of people are motivated by $$$ (or fame) and if they all tell me otherwise I know many of them are lying.
13. __loam+ga[view] [source] [discussion] 2023-11-22 18:16:31
>>philip+y2
I'm not sure I buy the idea that Ilya was just some hapless researcher who got unwillingly pulled into this. Any one of the board could have voted not to remove Sam and stop the board coup, including Ilya. I'd bet he only got cold feet after the story became international news and after most of the company threatened to resign because their bag was in jeopardy.
replies(1): >>Xelyne+A42
14. mi_lk+Ia[view] [source] [discussion] 2023-11-22 18:17:55
>>baby+T6
It's you who are naive if you really think the majority of those 7xx employees care more about safe AGI than their own equity upside
replies(2): >>nh2342+ef >>concor+Ho
15. pdntsp+Qa[view] [source] [discussion] 2023-11-22 18:18:57
>>adverb+o3
The Board is another one, if you're CEO.
replies(2): >>elliot+Tc >>adverb+2x
16. elliot+Tc[view] [source] [discussion] 2023-11-22 18:28:05
>>pdntsp+Qa
I think the parent comment’s point is that the board is not one, since the board was defeated (by the employee dragon).
replies(1): >>pdntsp+Jg
17. dylan6+6d[view] [source] 2023-11-22 18:29:01
>>taway1+(OP)
It's not like this is the first:

One developer (Woz) vs One businessman (Jobs) -> Jobs wins

18. nh2342+ef[view] [source] [discussion] 2023-11-22 18:37:22
>>mi_lk+Ia
Why would anyone care about safe AGI? It's vaporware.
replies(2): >>mecsre+gh >>stillw+Vk
19. pdntsp+Jg[view] [source] [discussion] 2023-11-22 18:42:35
>>elliot+Tc
I think the analogy is kind of shaky. The board tried to end the CEO, but employees fought them and won.

I've been in companies where the board won, and they installed a stoolie that proceeded to drive the company into the ground. Anybody who stood up to that got fired too.

replies(1): >>davesq+0z
20. mecsre+gh[view] [source] [discussion] 2023-11-22 18:44:23
>>nh2342+ef
Everything is vaporware until it gets made. If you wait until a new technology definitively exists to start caring about safety, you have guaranteed it will be unsafe.

Lucky for us this fiasco has nothing to do with AGI safety, only AI technology. Which only affects automated decision making in technology that's entrenched in every facet of our lives. So we're all safe here!

replies(1): >>supert+sp
21. jivetu+wj[view] [source] [discussion] 2023-11-22 18:53:07
>>philip+y2
wake up people! (said rhetorically, not accusatory or any other way)

This is Altman's playbook. He did a similar ousting at Reddit. This was planned all along to overturn the board. Ilya was in on it.

I'm not normally a conspiracy theorist. But fool me ... you can't be fooled again. As they say in Tennessee

replies(2): >>buggle+uk >>bossyT+sI
22. buggle+uk[view] [source] [discussion] 2023-11-22 18:57:38
>>jivetu+wj
What’s the backstory on Reddit?
replies(1): >>occams+Ft
23. stillw+Vk[view] [source] [discussion] 2023-11-22 18:59:17
>>nh2342+ef
Exactly what an OpenAI developer would understand. All the more reason to ride the grift that brought them this far
24. anders+Xl[view] [source] [discussion] 2023-11-22 19:05:11
>>jessen+62
> OpenAI is selecting for the type of people who want to work at a non-profit with a goal in mind instead of another company that could offer higher compensation. Mission driven vs profit driven.

Maybe that was the case at some point, but clearly not anymore ever since the release of ChatGPT. Or did you not see them offer completely absurd compensation packages, e.g. to engineers leaving Google?

I'd bet more than half the people are just there for the money.

25. zeroha+1n[view] [source] 2023-11-22 19:11:26
>>taway1+(OP)
more like $$ wins.

It's clear most employees didn't care much about OpenAI's mission -- and I don't blame them since they were hired by the __for-profit__ OpenAI company and therefore aligned with __its__ goals and rewarded with equity.

In my view the board did the right thing to stand by OpenAI's original mission -- which now clearly means nothing. Too bad they lost out.

One might say the mission was pointless since Google, Meta, MSFT would develop it anyway. That's really a convenience argument that has been used in arms races (if we don't build lots of nuclear weapons, others will build lots of nuclear weapons) and leads to ... well, where we are today :(

replies(1): >>joewfe+xu
26. DirkH+Wn[view] [source] [discussion] 2023-11-22 19:15:26
>>baby+T6
Assuming employees are not incentivized by $$$ here seems extraordinary and needs a pretty robust argument to show it isn't playing a major factor when there is this much money involved.
27. brrrrr+Do[view] [source] [discussion] 2023-11-22 19:18:03
>>jessen+62
> 1. They can get equivalent position and pay at the new Microsoft startup during that time, so their jobs are not at risk.

citation?

replies(1): >>davio+zr
28. concor+Ho[view] [source] [discussion] 2023-11-22 19:18:21
>>mi_lk+Ia
Uh, I reckon many do. Money is easy to come by for that type of person and avoiding killing everyone matters to them.
29. supert+sp[view] [source] [discussion] 2023-11-22 19:22:30
>>mecsre+gh
> If you wait until a new technology definitively exists to start caring about safety, you have guaranteed it will be unsafe.

I don’t get this perspective. The first planes, cars, computers, etc. weren’t initially made with safety in mind. They were all regulated after the fact and successfully made safer.

How can you even design safety into something if it doesn’t exist yet? You’d have ended up with a plane where everyone sat on the wings with a parachute strapped on if you designed them with safety first instead of letting them evolve naturally and regulating the resulting designs.

replies(4): >>FartyM+rr >>bcrosb+pu >>mecsre+ED >>jonono+iX2
30. Quenti+hr[view] [source] 2023-11-22 19:30:42
>>taway1+(OP)
OpenAI developers have been redefining the state of the art in AI every 6 months; if the company loses them, it might as well go bankrupt
31. FartyM+rr[view] [source] [discussion] 2023-11-22 19:31:16
>>supert+sp
The difference between unsafe AGI and an unsafe plane or car is that the plane/car are not existential risks.
replies(1): >>optymi+GC1
32. davio+zr[view] [source] [discussion] 2023-11-22 19:32:06
>>brrrrr+Do
https://x.com/kevin_scott/status/1726971608706031670?s=20
33. occams+Ft[view] [source] [discussion] 2023-11-22 19:42:17
>>buggle+uk
Yishan (former Reddit CEO) describes how Altman orchestrated the removal of Reddit's owner: https://www.reddit.com/r/AskReddit/comments/3cs78i/whats_the...

Note that the response is Altman's, and he seems to support it.

As additional context, Paul Graham has said a number of times that Altman is one of the most power-hungry and successful people he knows (as praise). Paul Graham, who's met hundreds if not thousands of experienced leaders in tech, says this.

34. bcrosb+pu[view] [source] [discussion] 2023-11-22 19:45:40
>>supert+sp
The US government got involved in regulating airplanes long before there were any widely available commercial offerings:

https://en.wikipedia.org/wiki/United_States_government_role_...

If you're trying to draw a parallel here, then safety and the federal government need to catch up. There are already commercial offerings that any random internet user can use.

replies(1): >>supert+Iv
35. joewfe+xu[view] [source] [discussion] 2023-11-22 19:46:37
>>zeroha+1n
Where we are today is a world where people do not generally worry about nuclear bombs being dropped. So it seems like a pretty good outcome in that example.
replies(1): >>Xelyne+k32
36. m00x+hv[view] [source] 2023-11-22 19:50:03
>>taway1+(OP)
Ilya signed the letter saying he would resign if Sam wasn't brought back. Looks like he regretted his decision and ultimately got played by the 2 departing board members.

Ilya is also not a developer, he's a founder of OpenAI and was the CSO.

37. supert+Iv[view] [source] [discussion] 2023-11-22 19:51:31
>>bcrosb+pu
I agree, and I am not saying that AI should be unregulated. At the point the government started regulating flight, the concept of an airplane had existed for decades. My point is that until something actually exists, you don’t know what regulations should be in place.

There should be regulations on existing products (and similar products released later) as they exist and you know what you’re applying regulations to.

38. adverb+2x[view] [source] [discussion] 2023-11-22 19:57:39
>>pdntsp+Qa
My comment was more of a reflection of the fact that you might have multiple different governance structures in your organization. Sometimes investors are at the top. Sometimes it's a private owner. Sometimes there are separate kinds of shares for voting on different things. Sometimes it's a board. So you're right: depending on the governance structure you can have additional dragons. But you can never prevent any of these three from being a dragon. They will always be dragons, and you must never wake them up.
39. davesq+0z[view] [source] [discussion] 2023-11-22 20:07:12
>>pdntsp+Jg
I have an intuition that OpenAI's mid-range size gave the employees more power in this case. It's not as hard to coordinate a few hundred people, especially when those people are on top of the world and want to stay there. At a megacorp with thousands of employees, the board probably has an easier time bossing people around. Although I don't know if you had a larger company in mind when you gave your second example.
replies(1): >>pdntsp+8I
40. awb+Hz[view] [source] 2023-11-22 20:11:17
>>taway1+(OP)
It’s a cost / benefit analysis.

If people are easily replaceable then they don't hold nearly as much power, even en masse.

41. mecsre+ED[view] [source] [discussion] 2023-11-22 20:33:22
>>supert+sp
I understand where you're coming from and I think that's reasonable in general. My perspective would be: you can definitely iterate on the technology to come up with safer versions. But with this strategy you have to make an unsafe version first. If you got in one of the first airplanes ever made, the likelihood of crashing was pretty high.

At some point, our try-it-until-it-works approach will bite us. Consider the calculations done to determine whether fission bombs would ignite the atmosphere. You don't want to test that one and find out. As our technology improves exponentially, we're going to run into that situation more and more frequently. Regardless of whether you think it's AGI or something else, we will eventually run into some technology where one mistake is a cataclysm. How many nuclear close calls have we already experienced?

42. nikcub+xH[view] [source] 2023-11-22 20:54:14
>>taway1+(OP)
The employees rapidly and effectively formed a quasi-union to grant themselves a very powerful seat at the table.
43. pdntsp+8I[view] [source] [discussion] 2023-11-22 20:57:08
>>davesq+0z
No, I'm thinking a smaller company, like 50 people, $20m ARR. Engineering-focused, but not tech
44. bossyT+jI[view] [source] [discussion] 2023-11-22 20:58:23
>>adverb+o3
Or tame the dragons. AFAIK Sam hired the employees, hence they are loyal to him
45. bossyT+sI[view] [source] [discussion] 2023-11-22 20:58:59
>>jivetu+wj
what happened at Reddit?
46. optymi+GC1[view] [source] [discussion] 2023-11-23 02:30:43
>>FartyM+rr
How is it an 'existential risk'? Its body of knowledge is publicly available, no?
replies(1): >>FartyM+do2
47. JacobT+GJ1[view] [source] [discussion] 2023-11-23 03:36:06
>>hsavit+X4
And yet, this union was threatening to move to a company without unions.
48. Xelyne+R22[view] [source] [discussion] 2023-11-23 06:53:03
>>jejeyy+Q7
No, it's just counter to the idea that it was "employee power" that brought Sam back.

It was capital and the pursuit of more of it.

It always is.

49. Xelyne+k32[view] [source] [discussion] 2023-11-23 06:57:04
>>joewfe+xu
The nuclear arms race led to the Cold War, not a "good outcome" IMO. It wasn't until nations started imposing those regulations that we got to the point we're at today with nuclear weapons.
50. Xelyne+A42[view] [source] [discussion] 2023-11-23 07:08:50
>>__loam+ga
That's a strange framing. In that scenario, wouldn't it be that he initially made the decision he thought was right and aligned with OpenAI's mission, then, on seeing the public support Sam had, decided to backtrack so he'd have a future career?
51. FartyM+do2[view] [source] [discussion] 2023-11-23 10:48:45
>>optymi+GC1
What do you mean by "its"? There isn't any AGI yet. ChatGPT is far from that level.
52. jonono+iX2[view] [source] [discussion] 2023-11-23 15:16:53
>>supert+sp
The principles, best practices and tools of safety engineering can be applied to new projects. We have decades of experience now. Not saying it will be perfect on the first try, or that we know everything that is needed. But the novel aspects of AI are not an excuse to not try.