zlacker

[parent] [thread] 24 comments
1. sesutt+(OP)[view] [source] 2023-11-20 14:09:50
Ilya posted this on Twitter:

"I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."

https://twitter.com/ilyasut/status/1726590052392956028

replies(5): >>abraxa+b1 >>z7+E8 >>nabla9+V8 >>siva7+Fd >>tucnak+pi
2. abraxa+b1[view] [source] 2023-11-20 14:12:50
>>sesutt+(OP)
Trying to put the toothpaste back in the tube. I seriously doubt this will work out for him. He has to be the smartest stupid person the world has ever seen.
replies(6): >>dhruvd+q4 >>bertil+D4 >>strike+Z7 >>derwik+O8 >>dylan6+M9 >>guhcam+Nk
◧◩
3. dhruvd+q4[view] [source] [discussion] 2023-11-20 14:23:28
>>abraxa+b1
At least he consistently works towards whatever he currently believes in. Though he could work on consistency in beliefs.
◧◩
4. bertil+D4[view] [source] [discussion] 2023-11-20 14:24:07
>>abraxa+b1
Ilya is hard to replace, and no one thinks of him as a political animal. He's a researcher first and foremost. I don't think he needs anything more than being contrite about a single decision made during a heated meeting. Sam Altman and the rest of the leadership team didn't get where they are by holding petty grudges.

He doesn't owe us, the public, anything, but I would love to understand his point of view during the whole thing. I really appreciate how he is careful with words and thorough when exposing his reasoning.

replies(3): >>boring+36 >>jacque+Pc >>gryn+vf
◧◩◪
5. boring+36[view] [source] [discussion] 2023-11-20 14:29:00
>>bertil+D4
Just because he's not a political animal doesn't mean he's immune to politics. I've seen 'irreplaceable' apolitical technical leaders become the cause of schisms in organizations, thinking they could leverage their technical knowledge over the rest of the company, only to get pushed aside and out.
replies(1): >>bertil+X9
◧◩
6. strike+Z7[view] [source] [discussion] 2023-11-20 14:36:17
>>abraxa+b1
He seriously underestimated how much rank-and-file employees want $$$ over an idealistic vision (and Sam Altman is $$$), but if he backs down now, he will pretty much lose all credibility as a decision maker for the company.
replies(1): >>ergoco+Om
7. z7+E8[view] [source] 2023-11-20 14:38:58
>>sesutt+(OP)
>"I deeply regret my participation in the board's actions."

Wasn't he supposed to be the instigator? That makes it sound like he played a less active role than claimed.

◧◩
8. derwik+O8[view] [source] [discussion] 2023-11-20 14:39:37
>>abraxa+b1
Does that include the person who stole self-driving IP from Waymo, set up a company with stolen IP, and tried to sell the company to Uber?
9. nabla9+V8[view] [source] 2023-11-20 14:40:22
>>sesutt+(OP)
So this was a completely unnecessary cock-up -- and it's still ongoing. Without Ilya's vote this would not even be a thing. It's a really comical, Naked Gun-type mess.

Ilya Sutskever is one of the best in AI research, but everything he and others do related to AI alignment turns into shit without substance.

It makes me wonder whether AI alignment is even possible in theory, and if it is, whether it's a bad idea.

replies(1): >>coffee+Rg
◧◩
10. dylan6+M9[view] [source] [discussion] 2023-11-20 14:42:44
>>abraxa+b1
That seems rather harsh. We know he's not stupid, and you're clearly being emotional. I'd venture he made the dumbest possible move a smart person could make while also in a very emotional state. The lesson on the table for everyone is that big decisions made in an emotional state do not often work out well.
◧◩◪◨
11. bertil+X9[view] [source] [discussion] 2023-11-20 14:43:39
>>boring+36
Oh that's definitely common. I've seen it many times and it's ugly.

I don't think this is what Ilya is trying to do. His tweet is clearly about preserving the organization because he sees the structure itself as helpful, beyond his role in it.

replies(1): >>boring+on
◧◩◪
12. jacque+Pc[view] [source] [discussion] 2023-11-20 14:57:10
>>bertil+D4
For someone who isn't a political animal, he made some pretty powerful political moves.
13. siva7+Fd[view] [source] 2023-11-20 15:01:54
>>sesutt+(OP)
It takes a lot of courage to do so after all this.
replies(3): >>Shamel+CF1 >>Xenoam+kG1 >>averag+jI1
◧◩◪
14. gryn+vf[view] [source] [discussion] 2023-11-20 15:13:19
>>bertil+D4
Researchers and academics are political within their organizations, regardless of whether they claim to be or are aware of it.

Ignorance of one's political impact/influence is not a strength but a weakness, just like a baby holding a laser/gun.

◧◩
15. coffee+Rg[view] [source] [discussion] 2023-11-20 15:22:41
>>nabla9+V8
We can't even get people aligned. Thinking we can control a superintelligence seems kind of silly.
replies(1): >>colins+n42
16. tucnak+pi[view] [source] 2023-11-20 15:32:58
>>sesutt+(OP)
To be fair, lots of people called this pretty early on; it's just that very few were paying attention, and instead chose to accommodate the spin and immediately went into "following the money", a.k.a. blaming Microsoft et al. The most surprising aspect of it all is the complete lack of criticism towards US authorities! We were shown a play as old as the world: a genius scientist exploited politically by means of pride and envy.

The brave board of "totally independent" NGO patriots (one of whom is described by insiders as wielding influence comparable to a USAF colonel [1]) branded themselves as a new regime that would return OpenAI to its former moral and ethical glory. So the first thing they had to do was get rid of the chief greedy capitalist, Altman; he's obviously the great seducer who brought their blameless organisation down by turning it into this horrible money-making machine. In his place they were going to put their nominal ideological leader, Sutskever, commonly referred to in various public communications as a "true believer". What does he believe in? The coming of a literal superpower, and quite a particular one at that; in this case we are talking about AGI. The belief structure here is remarkably interlinked, which can be seen by evaluating side-channel discourse from adjacent "believers"; see [2].

Roughly speaking, and based on my experience with this kind of analysis (please give me some leeway, as English is not my native language), what I see are all the telltale markers of operative work: we see security officers, and we see their methods of work. If you are a hammer, everything around you looks like a nail. If you are an officer in the Clandestine Service, or in any of the dozens of counterintelligence sections overseeing the IT sector, then you clearly understand that all these AI startups are, in fact, developing weapons and pose a direct threat to the strategic interests, slash national security, of the United States. The American security apparatus has a word for such elements: "terrorist". I was taught to look up when assessing the actions of the Americans, i.e. more often than not we expect nothing but the highest level of professionalism, leadership, and analytical prowess. I personally struggle to see how running parasitic virtual organisations in the middle of downtown San Francisco, and reshuffling agent networks in key AI enterprises as blatantly as we saw over the weekend, is supposed to inspire confidence. Thus, in a tech startup in the middle of San Francisco, where it would seem there shouldn't be any terrorists, or otherwise ideologues in orange rags, they sit on boards and stage palace coups. Horrible!

I believe that US stateside counterintelligence shouldn't meddle in natural business processes in the US, and should instead make its policy on this stuff crystal clear through normal, legal means. Let's put a stop to this soldier mindset of fearing anything you can't understand. AI is not a weapon, and AI startups are not terrorist cells for them to run.

[1]: >>38330819

[2]: https://nitter.net/jeremyphoward/status/1725712220955586899

◧◩
17. guhcam+Nk[view] [source] [discussion] 2023-11-20 15:49:48
>>abraxa+b1
I've worked with this type multiple times. Mathematical geniuses with very little grasp of reality, easily manipulated into making all sorts of dumb mistakes. I don't know if that's the case here, but it certainly smells like it.
replies(1): >>strunz+Cq
◧◩◪
18. ergoco+Om[view] [source] [discussion] 2023-11-20 16:01:01
>>strike+Z7
If your compensation went from $600k to $200k, you would care as well.

No idealistic vision can compensate for that.

replies(1): >>strike+Cp
◧◩◪◨⬒
19. boring+on[view] [source] [discussion] 2023-11-20 16:04:37
>>bertil+X9
Fair - hopefully an unintentional political move, but a big political miscalculation.
◧◩◪◨
20. strike+Cp[view] [source] [discussion] 2023-11-20 16:16:30
>>ergoco+Om
Hey, I would also be mad if I were in the rank-and-file employees' position. Perhaps the nonprofit thing needs to be thought out a bit more.
◧◩◪
21. strunz+Cq[view] [source] [discussion] 2023-11-20 16:21:59
>>guhcam+Nk
His previous post seems pretty ironic in that light - https://twitter.com/ilyasut/status/1710462485411561808
◧◩
22. Shamel+CF1[view] [source] [discussion] 2023-11-20 21:08:28
>>siva7+Fd
I think the word you're looking for is "fear".
◧◩
23. Xenoam+kG1[view] [source] [discussion] 2023-11-20 21:11:21
>>siva7+Fd
Or a couple of drinks.
◧◩
24. averag+jI1[view] [source] [discussion] 2023-11-20 21:19:26
>>siva7+Fd
Maybe he'll head to Apple.
◧◩◪
25. colins+n42[view] [source] [discussion] 2023-11-20 23:13:48
>>coffee+Rg
I always thought it was the opposite. The different entities in a society are frequently misaligned, yet societies regularly persist beyond the span of any single person.

Companies in a capitalist system are explicitly misaligned with each other; the success of the individual within a company is misaligned with the success of the company whenever it grows large enough. Parties within an electoral system are misaligned with each other; the individual is often more aligned with a third party, yet the less-aligned two-party system frequently rules. The three pillars of democratic government (executive, legislative, judicial) are said to exist for the sake of being misaligned with each other.

So AI agents, potentially more powerful than the individual human, might be misaligned with the broader interests of society (or of its human individuals). But so are you and I and every other entity: why is this instance of misalignment worrisome to any disproportionate degree?
