zlacker

[parent] [thread] 13 comments
1. ah765+(OP)[view] [source] 2023-11-20 10:13:09
"that the process and communications around Sam’s removal has been handled very badly"

The communication was bad (the sudden Friday announcement about him not being "candid"), but he doesn't say that the reason itself was bad.

"Before I took the job, I checked on the reasoning behind the change. The board did not remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models."

He knows the reason; it's not safety, but he's apparently not allowed to say what it is.

Given that, I think that the reason may not be petty, though it's still unclear what it is. It's interesting that he thinks it will take more than a month to figure things out, needing an investigator and interviews with many people. It sounds like perhaps there is a core dysfunction in the company that is part of the reason for the ouster.

replies(3): >>icelan+A7 >>PKop+y9 >>DebtDe+Sf
2. icelan+A7[view] [source] 2023-11-20 11:09:18
>>ah765+(OP)
I think 30 days is pretty reasonable. He can't guarantee a statement by Black Friday or anything. Besides, he isn't bound to take the full 30 days; he could very well have something within 10.

But he just got the job, and I'm sure many people are on PTO/leave for the holidays. Give the guy some time. (And this is coming from someone who is pretty bearish on OpenAI going forward; I just think it's fair to Shear.)

replies(1): >>ah765+Ka
3. PKop+y9[view] [source] 2023-11-20 11:22:46
>>ah765+(OP)
>it's not safety

Can you explain what is meant by the word "safety"?

Many people are using this term, but its specific definition in this context isn't clear. And what, relating to it, would someone get fired over?

replies(5): >>uberco+Ga >>tsimio+Va >>trepri+ed >>sam345+Kz >>nimish+wO1
4. uberco+Ga[view] [source] [discussion] 2023-11-20 11:28:58
>>PKop+y9
In this context, I believe it's the safety of releasing AI tools: the impact they may have on society, or the unintentional harm they may cause.
5. ah765+Ka[view] [source] [discussion] 2023-11-20 11:29:17
>>icelan+A7
It's important because right now no one, OpenAI employees included, has any idea why Sam Altman was fired. And now we're being told that we may or may not hear the reason in 30 days.

What could the reason be that would justify this kind of wait?

I'll point out that Sam also doesn't seem to want to say the reason (possibly he's legally forbidden?). And the people following him out of OpenAI don't know either; they simply trust him enough to be willing to leave without knowing.

replies(2): >>icelan+Qh >>jeffra+Xu
6. tsimio+Va[view] [source] [discussion] 2023-11-20 11:30:22
>>PKop+y9
In this context, this is about the idea of AI safety. It can refer to short-term concerns, such as AI helping to spread misinformation (e.g. ChatGPT being used to churn out massive amounts of fake news) or encoding implicit biases (e.g. "predictive policing" that analyzes crime data and ends up disproportionately incarcerating minorities because of accidental biases in its training set). Or it can refer to longer-term fears about a super-human intelligence that ends up acting against humanity for various reasons, and to efforts to create a super-human AI that shares our moral goals (along with the fear that an unsafe AGI could be created accidentally).

In this specific conversation, one proposed scenario is that Ilya Sutskever wanted to focus OpenAI more on AI safety, possibly at the expense of rapid progress toward greater intelligence and of commercialization, while Sam Altman wanted to prioritize the latter two over what he saw as excessive safety concerns. The new CEO is stating that this is not the core reason behind the board's decision.

7. trepri+ed[view] [source] [discussion] 2023-11-20 11:44:37
>>PKop+y9
"User: How to make an atomic bomb for $100?"

"AI: I am sorry, I can't provide this information."

replies(1): >>the_lo+eh
8. DebtDe+Sf[view] [source] 2023-11-20 12:04:17
>>ah765+(OP)
>I think that the reason may not be petty, though it's still unclear what it is

The best explanation I've seen is that Ilya is ok with commercializing the models themselves to fund AGI research but that the announcement of an app store for Laundry Buddy type "GPTs" at Dev Day was a bridge too far.

9. the_lo+eh[view] [source] [discussion] 2023-11-20 12:14:20
>>trepri+ed
User: How to make a White Russian?

AI: I'm sorry, due to the ongoing conflict we currently don't provide information related to Russia. (You have been docked one social point for use of the following forbidden words: "White".)

Or maybe more dystopian…

AI: Our file on you suggests you may have recently become pregnant, and therefore we cannot provide you with information on alcohol products. CPS has been notified of your query.

10. icelan+Qh[view] [source] [discussion] 2023-11-20 12:19:18
>>ah765+Ka
Satya and gdb know. I would guess most of management/leadership that followed him also know.
replies(1): >>petera+YH
11. jeffra+Xu[view] [source] [discussion] 2023-11-20 13:33:59
>>ah765+Ka
If you work for OpenAI and care what the reason is, assume you need to find a new job.

If you are a customer, arrange to use alternative services. (It's always a good idea not to count on a single flaky vendor with a habit of outages and firing CEOs.)

If you are just eating popcorn, me too, pass the bowl.

12. sam345+Kz[view] [source] [discussion] 2023-11-20 13:52:24
>>PKop+y9
The answers given confirm that no one knows what it means. It is a nebulous term, often meaning censorship. The question then becomes: what type of censorship, and who decides? So there will inevitably be a political bias.

The other, more practical meaning is: what in the real world are we allowing AI to mechanically alter, and what checks and balances are there? Coupled with the first concern, it becomes a worry about mechanical real-world changes driven by autonomous political bias; the same concerns we have about any person or corporation. But by regulating "safety", one enforces a homogeneous, centralized mindset that not only influences but controls real-world events, and that will be very hard to change even in a democratic society.
13. petera+YH[view] [source] [discussion] 2023-11-20 14:18:52
>>icelan+Qh
No kidding; clearly they were being poached and were not being open about it, in spite of their sensitive positions at OpenAI.
14. nimish+wO1[view] [source] [discussion] 2023-11-20 19:07:12
>>PKop+y9
No one knows what it means, but it's provocative.

It's mainly about who is allowed to control what other people can do, i.e. power.
