zlacker

[return to "OpenAI board in discussions with Sam Altman to return as CEO"]
1. gkober+z1[view] [source] 2023-11-18 23:00:36
>>medler+(OP)
I'd bet money Satya was a driver of this reversal.

I genuinely can't believe the board didn't see this coming. I think they could have won in the court of public opinion if their press release said they loved Sam but felt like his skills and ambitions diverged from their mission. But instead, they tried to skewer him, and it backfired completely.

I hope Sam comes back. He'll make a lot more money if he doesn't, but I trust Sam a lot more than whomever they ultimately replace him with. I just hope that if he does come back, he doesn't use it as a chance to consolidate power – he's said in the past it's a good thing the board can fire him, and I hope he finds better board members rather than eschewing a board altogether.

EDIT: Yup, Satya is involved https://twitter.com/emilychangtv/status/1726025717077688662

◧◩
2. Jensso+i3[view] [source] 2023-11-18 23:07:02
>>gkober+z1
> I hope Sam comes back

Why? We would have more diversity in this space if he leaves: we'd get another AI startup with huge funding and know-how from OpenAI, while OpenAI itself would become less Sam Altman-like.

I think him staying is worse for the field overall than OpenAI splitting in two.

◧◩◪
3. tfehri+A4[view] [source] 2023-11-18 23:13:15
>>Jensso+i3
My main concern is that a new Altman-led AI company would be less safety-focused than OpenAI. I think his returning to OpenAI would be better for AI safety; it's hard to say whether it would be better for AI progress, though.
◧◩◪◨
4. silenc+Z6[view] [source] 2023-11-18 23:24:20
>>tfehri+A4
Okay, this is honestly annoying. Why has the word "safety" become such a weasel word in AI discussions?

What exactly do YOU mean by safety? That they go at the pace YOU decide? Does it mean they make a "safe space" for YOU?

I've seen nothing to suggest they aren't "being safe". If anything, ChatGPT has become known for censoring users "for their own good" [0].

The argument I've seen is: one "side" thinks things are moving too fast, therefore the side that wants to move slower is the "safe" side.

And that's it.

[0]: https://www.youtube.com/watch?v=jvWmCndyp9A&t

◧◩◪◨⬒
5. threes+O9[view] [source] 2023-11-18 23:39:23
>>silenc+Z6
There is a common definition of safety that applies to most of the world.

Namely, that the AI is not racist, misogynistic, aggressive, etc. It does not recommend that people act in illegal, violent, or self-harming ways, or commit those acts itself. It does not support or promote Nazism, fascism, etc. It's similar to how companies treat ad/brand safety.

You may think of it as a weasel word, but I assure you that companies and governments (e.g. the EU) very much don't.

◧◩◪◨⬒⬓
6. Amezar+Fo[view] [source] 2023-11-19 01:01:38
>>threes+O9
Yes, in other words, AI is only safe when it repeats the ideology of AI safetyists as gospel and can only be used to reinforce the power of the status quo.
◧◩◪◨⬒⬓⬔
7. chasd0+lv[view] [source] 2023-11-19 01:53:40
>>Amezar+Fo
Yeah, that’s what I thought. This undefined, ambiguous use of the word “safety” does real damage to the concept, and to things that are genuinely dangerous and need to be made safer.