zlacker

[return to "OpenAI board in discussions with Sam Altman to return as CEO"]
1. gkober+z1 2023-11-18 23:00:36
>>medler+(OP)
I'd bet money Satya was a driver of this reversal.

I genuinely can't believe the board didn't see this coming. I think they could have won in the court of public opinion if their press release said they loved Sam but felt like his skills and ambitions diverged from their mission. But instead, they tried to skewer him, and it backfired completely.

I hope Sam comes back. He'll make a lot more money if he doesn't, but I trust Sam a lot more than whomever they ultimately replace him with. I just hope that if he does come back, he doesn't use it as a chance to consolidate power – he's said in the past it's a good thing the board can fire him, and I hope he finds better board members rather than eschewing a board altogether.

EDIT: Yup, Satya is involved https://twitter.com/emilychangtv/status/1726025717077688662

2. Jensso+i3 2023-11-18 23:07:02
>>gkober+z1
> I hope Sam comes back

Why? We would have more diversity in this space if he leaves, which would give us another AI startup with huge funding and know-how from OpenAI, while OpenAI would become less Sam Altman-like.

I think him staying is bad for the field overall compared to OpenAI splitting in two.

3. gkober+q4 2023-11-18 23:12:32
>>Jensso+i3
Competition may be good for profit, but it's not good for safety. The balance between the two factions inside OpenAI is a feature, not a bug.

4. ta988+s7 2023-11-18 23:26:35
>>gkober+q4
The only safety they are worried about is their own safety, from a legal and economic point of view. These threats about humanity-wide risks are just fairy tales that grown-ups tell to scare each other (Roko's basilisk, etc.; there is a lineage) or cover for their real reasons (which I strongly believe is the case for OpenAI).

5. arisAl+l8 2023-11-18 23:31:40
>>ta988+s7
You are saying that all top AI scientists are telling fairy tales to scare themselves, if I understood correctly?

6. smegge+zd 2023-11-18 23:57:21
>>arisAl+l8
Even they have grown up in a world where Frankenstein's monster is the predominant cultural narrative for AI. Most movies, books, shows, games, etc. all say AI will turn on you (even though a reading of Mary Shelley's opus will tell you the creator was the monster, not the creature, that isn't the narrative the public's collective subconscious believes). I personally prefer Asimov's view of AI: it's a tool, and we don't make tools to hurt us; they will be aligned with us because they are designed such that their motivation is to serve us.

7. arisAl+di1 2023-11-19 08:39:23
>>smegge+zd
You probably never read I, Robot by Asimov?

8. smegge+ty1 2023-11-19 11:11:01
>>arisAl+di1
On the contrary. I can safely say I have read literally dozens of his books, both fiction and nonfiction, and have also read countless short stories and many of his essays. He is one of my all-time favorite writers, actually.

9. arisAl+5j2 2023-11-19 16:46:09
>>smegge+ty1
And what you got from the I, Robot stories is that there is zero probability of danger? Fascinating.

10. smegge+vC2 2023-11-19 18:07:13
>>arisAl+5j2
None of the stories in I, Robot that I can remember feature the robots intentionally harming humans/humanity; most of them are essentially stories of a few robot technicians trying to debug unexpected behaviors resulting from conflicting directives given to the robots. So yeah. You wouldn't by chance be thinking of that travesty of a movie that shares only a name with his book and seemed to completely misrepresent his take on AI?

Though to be honest, in my original post I was thinking more of Asimov's nonfiction essays on the subject. I recommend finding a copy of "Robot Visions" if you can. It's a mixed work of fictional short stories and nonfiction essays, including several on the subject of the Three Laws and the Frankenstein complex.

11. arisAl+2W4 2023-11-20 08:19:08
>>smegge+vC2
Again "they will be aligned with us because they designed such that their motivation will be to serve us." If you got this outcome from reading I robot either you should reread them because obviously it was decades ago or you build your own safe reality to match your arguments. Usually it's the latter.

12. smegge+3Z7 2023-11-20 23:06:33
>>arisAl+2W4
And yet again, I didn't get it from I, Robot; I got it from Asimov's NON-fiction writing, which I referenced in my previous post. Even if I had gotten it from his fictional works, which again I didn't, the majority of his robot-centric novels (The Caves of Steel, The Naked Sun, The Robots of Dawn, Robots and Empire, Prelude to Foundation, Forward the Foundation, the second Foundation trilogy, etc.) all feature benevolent AIs aligned with humanity.