zlacker

[return to "OpenAI board in discussions with Sam Altman to return as CEO"]
1. gkober+z1[view] [source] 2023-11-18 23:00:36
>>medler+(OP)
I'd bet money Satya was a driver of this reversal.

I genuinely can't believe the board didn't see this coming. I think they could have won in the court of public opinion if their press release said they loved Sam but felt like his skills and ambitions diverged from their mission. But instead, they tried to skewer him, and it backfired completely.

I hope Sam comes back. He'll make a lot more money if he doesn't, but I trust Sam a lot more than whomever they ultimately replace him with. I just hope that if he does come back, he doesn't use it as a chance to consolidate power – he's said in the past it's a good thing the board can fire him, and I hope he finds better board members rather than eschewing a board altogether.

EDIT: Yup, Satya is involved https://twitter.com/emilychangtv/status/1726025717077688662

◧◩
2. Jensso+i3[view] [source] 2023-11-18 23:07:02
>>gkober+z1
> I hope Sam comes back

Why? We'd have more diversity in this space if he leaves: we'd get another AI startup with huge funding and know-how from OpenAI, while OpenAI itself would become less Sam Altman-like.

I think him staying is bad for the field overall compared to OpenAI splitting in two.

◧◩◪
3. gkober+q4[view] [source] 2023-11-18 23:12:32
>>Jensso+i3
Competition may be good for profit, but it's not good for safety. The balance between the two factions inside OpenAI is a feature, not a bug.
◧◩◪◨
4. Meekro+m7[view] [source] 2023-11-18 23:26:07
>>gkober+q4
This idea that ChatGPT is going to suddenly turn evil and start killing people is based on a lot of imagination and no observable facts. No one has ever been able to demonstrate an "unsafe" AI of any kind.
◧◩◪◨⬒
5. resour+69[view] [source] 2023-11-18 23:35:47
>>Meekro+m7
Factually inaccurate results = unsafety. That can't be fixed under the current model, which has no concept of truth. So what kind of "safety" are they talking about?
◧◩◪◨⬒⬓
6. spacem+5c[view] [source] 2023-11-18 23:51:25
>>resour+69
If factually inaccurate results = unsafety, then the internet must be the most unsafe place on the planet!
◧◩◪◨⬒⬓⬔
7. resour+mg[view] [source] 2023-11-19 00:10:59
>>spacem+5c
The internet is not called "AGI". It's the notion of AGI that brought "safety" to the forefront; AI folks became victims of their own hype. Renaming the field to something less provocative/controversial (ML?) could reduce expectations to the level of the internet - problem solved?
◧◩◪◨⬒⬓⬔⧯
8. autoex+mm[view] [source] 2023-11-19 00:47:53
>>resour+mg
> The internet is not called "AGI"

Neither is anything else in existence. I'm glad philosophers are worrying about what AGI might one day mean for us, but it has nothing to do with anything happening in the world today.

◧◩◪◨⬒⬓⬔⧯▣
9. resour+Kp[view] [source] 2023-11-19 01:09:45
>>autoex+mm
I fully agree with that. But read this thread or any other recent HN thread and you'll see "AGI... AGI... AGI" as if it's a real thing. The whole OpenAI debacle with firing/rehiring sama revolves around (non-existent) "AGI" and its imaginary safety/unsafety, and if you dare to question that narrative, you get beaten up.
[go to top]