zlacker

[parent] [thread] 23 comments
1. thr897+(OP)[view] [source] 2023-11-18 07:16:19
Aleksander in particular is deeply invested in AI safety as a mission. It's a very confusing departure, since most of the reporting so far indicates that Ilya and the board fired Sam to prioritize safety and non-profit objectives. A huge loss for OpenAI nonetheless.
replies(5): >>cinnta+C1 >>MattRi+62 >>sundar+17 >>visarg+Ci >>ianbic+c71
2. cinnta+C1[view] [source] 2023-11-18 07:32:39
>>thr897+(OP)
Maybe the way Sam and Greg were fired led him to lose faith in the company, and so he quit?
replies(2): >>mannyv+E4 >>deneas+U7
3. MattRi+62[view] [source] 2023-11-18 07:37:09
>>thr897+(OP)
Perhaps you could argue that he wants to stick with Sam and the others because if they start a company that competes with OpenAI, there’s a real chance they catch up and surpass OpenAI. If you really want to be a voice for safety, it’ll be most effective if you’re on the winning team.
replies(4): >>Closi+y5 >>I_am_u+a6 >>thekom+3a >>visarg+9k
◧◩
4. mannyv+E4[view] [source] [discussion] 2023-11-18 08:00:49
>>cinnta+C1
More like the guy who engineered this situation is an asshole and they don't want to work for him.
replies(2): >>Obscur+H6 >>notRob+X6
◧◩
5. Closi+y5[view] [source] [discussion] 2023-11-18 08:09:02
>>MattRi+62
Depends how much research is driven by Ilya…
◧◩
6. I_am_u+a6[view] [source] [discussion] 2023-11-18 08:14:39
>>MattRi+62
I dunno, the moat Sam tried to build might make it hard to create a competitor.
replies(1): >>jatins+h8
◧◩◪
7. Obscur+H6[view] [source] [discussion] 2023-11-18 08:19:22
>>mannyv+E4
Who's the situation-engineer for some of us duller but curious folks?
replies(1): >>kcb+gq
◧◩◪
8. notRob+X6[view] [source] [discussion] 2023-11-18 08:21:49
>>mannyv+E4
Who was that? How are they an asshole?
9. sundar+17[view] [source] 2023-11-18 08:22:17
>>thr897+(OP)
> since most of the reporting so far indicates that Ilya and the board fired Sam to prioritize safety and non-profit objectives

With evidence, or is this the kind of pure speculation the media indulges in when it has no information but has to appear knowledgeable?

replies(1): >>beowul+J7
◧◩
10. beowul+J7[view] [source] [discussion] 2023-11-18 08:28:56
>>sundar+17
Twitter rumors from “insiders”
replies(1): >>kcb+lq
◧◩
11. deneas+U7[view] [source] [discussion] 2023-11-18 08:30:59
>>cinnta+C1
Important detail: Only Sam was fired, Greg was removed from the board and then later quit. Source: https://twitter.com/gdb/status/1725667410387378559
◧◩◪
12. jatins+h8[view] [source] [discussion] 2023-11-18 08:33:56
>>I_am_u+a6
We are about to find out if the moats are indeed that strong.

xAI recently showed that training a decent-ish model is now a multi-month effort. Granted, GPT-4 is still further along than the others, but I'm curious how many months and how much in resources that lead amounts to when you have the team that built it in the first place.

But also, starting another LLM company might be too obvious a thing to do. Maybe Sam has another trick up his sleeve? Though I suspect he is sticking with AI one way or the other.

◧◩
13. thekom+3a[view] [source] [discussion] 2023-11-18 08:50:45
>>MattRi+62
One funny detail is that the OpenAI charter states that, if a value-aligned, safety-conscious project comes close to building AGI before they do, they will stop competing and start assisting that project.
replies(3): >>aryama+eg >>tralln+Qh >>cma+SJ
◧◩◪
14. aryama+eg[view] [source] [discussion] 2023-11-18 09:42:48
>>thekom+3a
really?
replies(1): >>thekom+Em
◧◩◪
15. tralln+Qh[view] [source] [discussion] 2023-11-18 09:59:15
>>thekom+3a
Maybe Sam wants to build something for profit?
16. visarg+Ci[view] [source] 2023-11-18 10:05:54
>>thr897+(OP)
> most of the reporting so far indicates that Ilya and the board fired Sam to prioritize safety and non-profit objectives

Maybe Ilya discovered something as head of AI safety research, something bad, and they had to act on it. From the outside it looks as if they are desperately trying to gain control. Maybe he got confirmation that LLMs are a little bit conscious, LOL. No, I am not making this up: https://twitter.com/ilyasut/status/1491554478243258368

replies(1): >>Davidz+ah1
◧◩
17. visarg+9k[view] [source] [discussion] 2023-11-18 10:17:14
>>MattRi+62
> If you really want to be a voice for safety, it’ll be most effective if you’re on the winning team.

If an AI said that, we'd be calling it "capability gain" and thinking it's a huge risk.

◧◩◪◨
18. thekom+Em[view] [source] [discussion] 2023-11-18 10:36:11
>>aryama+eg
https://openai.com/charter

Second paragraph of the "Long-term safety" section.

◧◩◪◨
19. kcb+gq[view] [source] [discussion] 2023-11-18 11:09:00
>>Obscur+H6
It's been confirmed to be Ilya.
◧◩◪
20. kcb+lq[view] [source] [discussion] 2023-11-18 11:09:42
>>beowul+J7
No. Statements from Ilya himself.
◧◩◪
21. cma+SJ[view] [source] [discussion] 2023-11-18 13:26:33
>>thekom+3a
But now it may be that the regulations they've gotten in place will make it harder for any new upstarts to approach them.
22. ianbic+c71[view] [source] 2023-11-18 15:43:42
>>thr897+(OP)
A rudder only works as long as you are moving faster than the current. I can imagine (some) people concerned with safety also feeling a sense of urgency, because their ability to steer the AI toward the good is limited by their organization's engine of progress.
◧◩
23. Davidz+ah1[view] [source] [discussion] 2023-11-18 16:40:56
>>visarg+Ci
lol sorry if this is clearly a joke but who cares if it's a little bit conscious. So are fucking pigeons.
replies(1): >>visarg+IW6
◧◩◪
24. visarg+IW6[view] [source] [discussion] 2023-11-20 06:07:45
>>Davidz+ah1
it would be funny if Ilya joined the ranks of Blake Lemoine and went off the deep end over AI consciousness
[go to top]