zlacker

[parent] [thread] 45 comments
1. jasonh+(OP)[view] [source] 2023-11-19 22:58:18
This was pretty clearly an attempt by the board to reassert control, which was slowly slipping away as the company became more enmeshed with Microsoft.
replies(4): >>rvnx+J1 >>TeMPOr+W4 >>tyrfin+D9 >>jxi+Ai
2. rvnx+J1[view] [source] 2023-11-19 23:08:46
>>jasonh+(OP)
Does that mean the board's move was actually good for the openness of AI?
replies(4): >>pests+c3 >>aunty_+e3 >>stefan+34 >>Americ+y4
3. pests+c3[view] [source] [discussion] 2023-11-19 23:15:48
>>rvnx+J1
Is AI/AGI safety the same as openness?
replies(3): >>rvnx+x3 >>_heimd+35 >>INGSOC+wf
4. aunty_+e3[view] [source] [discussion] 2023-11-19 23:16:03
>>rvnx+J1
The board was literally doing its job. Anyone claiming incompetence is just mistaken about the stated goal of OpenAI: safe, available AGI for all.

Throw in a huge investment, a $90 billion valuation, and a rockstar CEO. It's pretty clear the court of public opinion is wrong about this case.

replies(4): >>cyanyd+t4 >>skygaz+E7 >>tyrfin+ea >>aitrw+Qg
5. rvnx+x3[view] [source] [discussion] 2023-11-19 23:18:25
>>pests+c3
According to OpenAI investment paperwork:

It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation. The Company exists to advance OpenAI, Inc.'s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company's duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit. The Company may never make a profit, and the Company is under no obligation to do so. The Company is free to re-invest any or all of the Company's cash flow into research and development activities and/or related expenses without any obligation to the Members.

I guess "safe artificial general intelligence is developed and benefits all of humanity" means an AI that is both open (hence the name) and safe.

6. stefan+34[view] [source] [discussion] 2023-11-19 23:21:24
>>rvnx+J1
If Microsoft had to put out a statement saying "it's all good, we got the source code," clearly the openness of OpenAI was lost a while ago. This move by the board was presumably primarily good for the board.
replies(1): >>optima+Ea
7. cyanyd+t4[view] [source] [discussion] 2023-11-19 23:24:09
>>aunty_+e3
They likely could have negotiated it better, but I agree: all these Altman fans really suggest a cult of business sapping any nonprofit motive.
replies(2): >>peyton+Xc >>dimask+bi
8. Americ+y4[view] [source] [discussion] 2023-11-19 23:25:00
>>rvnx+J1
Openness in the context of AI is not straightforward. The open source folks read it one way, and the alignment people read it another.

It is entirely possible that a program which spits out the complete code for a nuclear targeting system should not be released into the wild.

replies(1): >>aeonik+Qd
9. TeMPOr+W4[view] [source] 2023-11-19 23:26:51
>>jasonh+(OP)
So basically, "if you aim for the king, you'd better not miss" kind of situation.
replies(2): >>timeon+N5 >>optima+Ua
10. _heimd+35[view] [source] [discussion] 2023-11-19 23:27:53
>>pests+c3
No, though I think OpenAI at least wants to achieve both.

Whether we can actually safely develop AI or AGI is a much tougher question than whether that's the intent, unfortunately.

11. timeon+N5[view] [source] [discussion] 2023-11-19 23:33:27
>>TeMPOr+W4
More like a last desperate attempt.
12. skygaz+E7[view] [source] [discussion] 2023-11-19 23:44:06
>>aunty_+e3
Competence is generally about effectiveness of execution, and less about intent. This was a foreseeable hot mess executed with staggering naïveté.
replies(2): >>asdfma+Eh >>aunty_+sz
13. tyrfin+D9[view] [source] 2023-11-19 23:56:25
>>jasonh+(OP)
"The board" isn't exactly a single entity. Even if the current board made this decision unanimously, they were a minority at the beginning of the year.
14. tyrfin+ea[view] [source] [discussion] 2023-11-20 00:00:24
>>aunty_+e3
That's not their stated goal; you're misinterpreting it by changing the wording.
replies(1): >>aunty_+iA
15. optima+Ea[view] [source] [discussion] 2023-11-20 00:02:12
>>stefan+34
>Microsoft had to put out a statement saying "it's all good, we got the source code"

IP lawyers would sell their own mothers for a chance to "wanna bet?" Microsoft.

replies(1): >>tsunam+gc
16. optima+Ua[view] [source] [discussion] 2023-11-20 00:03:41
>>TeMPOr+W4
Now known as Prigozhin's Law.
17. tsunam+gc[view] [source] [discussion] 2023-11-20 00:11:29
>>optima+Ea
Uh … no? It's practically impossible to win a suit against Microsoft to that degree inside a decade. And by then you'll have lost anyway.
18. peyton+Xc[view] [source] [discussion] 2023-11-20 00:15:15
>>cyanyd+t4
Their actions were reckless and irresponsible. They are currently picking their successors. The incompetence is staggering regardless of motive.
replies(1): >>rvnx+Jd
19. rvnx+Jd[view] [source] [discussion] 2023-11-20 00:19:44
>>peyton+Xc
We're still waiting for an explanation from Altman about his alleged involvement in conflicting companies while he is CEO of OpenAI.

According to the FT, this could be the cause of the firing:

“Sam has a company called Oklo, and [was trying to launch] a device company and a chip company (for AI). The rank and file at OpenAI don’t dispute those are important. The dispute is that OpenAI doesn’t own a piece. If he’s making a ton of money from companies around OpenAI there are potential conflicts of interest.”

replies(2): >>peyton+9h >>hotnfr+Tn
20. aeonik+Qd[view] [source] [discussion] 2023-11-20 00:20:23
>>Americ+y4
Nuclear codes, assuming they use modern cryptography, would not be spat out by any AI unless they were leaked publicly.

A bigger concern would be the construction of a bomb, which still takes a lot of hard-to-hide resources.

I'm more worried about other kinds of weapons, but at the same time I really don't like the idea of censoring the science of nature from people.

I think the only long-term option is to beef up defenses.

replies(2): >>aitrw+ph >>yeck+Ci
21. INGSOC+wf[view] [source] [discussion] 2023-11-20 00:29:32
>>pests+c3
No, it's anti-openness. The true value in AI/AGI is the ability to control the output. The "safe" part of this is controlling the political slant that "open" AI models allow. The technology itself has much less value than the control available to those who decide what is "safe" and what isn't. It's akin to raiding the libraries and removing any book, idea, or reference to a historical event that isn't culturally popular.

This is the future that Orwell feared.

22. aitrw+Qg[view] [source] [discussion] 2023-11-20 00:37:20
>>aunty_+e3
>safe, available AGI for all.

And they can pick two. GPUs don't grow on trees, so without billions in funding they can't provide it to everyone.

Available means that I should have access to the weights.

Safe means they want to control what people can use it for.

The board prioritised safe over everything else. I fundamentally disagree with that and welcome the counter-coup.

23. peyton+9h[view] [source] [discussion] 2023-11-20 00:39:21
>>rvnx+Jd
I don’t see how that factors in. What matters is OpenAI’s enterprise customers reading about a boardroom coup in the WSJ. Completely avoidable destruction of value.
replies(1): >>dimask+ui
24. aitrw+ph[view] [source] [discussion] 2023-11-20 00:40:37
>>aeonik+Qd
>A bigger concern would be the construction of a bomb, which still takes a lot of hard-to-hide resources.

The average postgraduate in physics can design a nuclear bomb. That ship sailed in the 1960s. Anyone who uses that as an argument wants a censorship regime that the medieval Catholic Church would find excessive.

https://www.theguardian.com/world/2003/jun/24/usa.science

replies(1): >>cthalu+ik
25. asdfma+Eh[view] [source] [discussion] 2023-11-20 00:42:04
>>skygaz+E7
For sure. What league did they think they were playing in?
26. dimask+bi[view] [source] [discussion] 2023-11-20 00:45:26
>>cyanyd+t4
It is hard to negotiate when the investors and the for-profit side basically have much more power. They tried to present them with a fait accompli, as this was their only chance, but they seem to have failed. I do not think they had a better move available in the current situation, sadly.
replies(1): >>peyton+xp
27. dimask+ui[view] [source] [discussion] 2023-11-20 00:47:41
>>peyton+9h
This is totally irrelevant to the board's initial decision, though.
replies(2): >>ekosz+Bk >>peyton+5p
28. jxi+Ai[view] [source] 2023-11-20 00:48:07
>>jasonh+(OP)
I'm not trying to throw undeserved shade, but why do we think this is something as complex as that and not just plain incompetence? Especially given the cloak-and-dagger firing without consulting or notifying any of their partners beforehand. That's just immaturity.
29. yeck+Ci[view] [source] [discussion] 2023-11-20 00:48:11
>>aeonik+Qd
I feel that people have a right to life and liberty, but liberty does not mean access to god-like powers.

There are many people that would do great things with god-like powers, but more than enough that would be terrible.

replies(1): >>aeonik+Pk
30. cthalu+ik[view] [source] [discussion] 2023-11-20 01:00:29
>>aitrw+ph
There's significantly more info out and available now than when they worked on that project, too. It's only gotten easier.
31. ekosz+Bk[view] [source] [discussion] 2023-11-20 01:02:35
>>dimask+ui
I think what people in this thread and others are trying to say is that to run an organization like OpenAI you need lots and lots of funding. AI research is incredibly costly due to highly paid researchers and an ungodly amount of GPU resources. Putting all current funding at risk by pissing off investors and enterprise customers puts the whole mission of the organization at risk. That's where the perceived incompetence comes from, no matter how good the intentions are.
replies(2): >>peyton+mp >>dimask+iS
32. aeonik+Pk[view] [source] [discussion] 2023-11-20 01:04:31
>>yeck+Ci
I don't think history, looking back at this moment, is going to characterize this as god-like powers.

Monumental, like the invention of language or math, but not like a god.

replies(1): >>yeck+po
33. hotnfr+Tn[view] [source] [discussion] 2023-11-20 01:25:28
>>rvnx+Jd
Isn’t it amazing how companies worry about lowly, ordinary employees moonlighting, but C-suiters and board members being involved in several ventures is totally normal?
34. yeck+po[view] [source] [discussion] 2023-11-20 01:28:34
>>aeonik+Pk
To be fair, "god-like" is a very subjective term. You could claim that many different technical advancements represent god-like capabilities. I'd claim that many examples exist today, but many of them are not readily available to most people for inherent or regulatory reasons.

Now, I feel even just "OK" agential AI would represent god-like abilities: being able to spawn digital homunculi that do your bidding relatively cheaply, with limited knowledge and skill required on the part of the conjuror.

Again, this is very subjective. You might feel that god-like means an entity that can build Dyson Spheres and bend reality to its will. That is certainly god-like, but a much higher threshold than what I'd use.

35. peyton+5p[view] [source] [discussion] 2023-11-20 01:33:36
>>dimask+ui
It is a complete departure from past stated means without clear justification.
replies(1): >>dimask+tQ
36. peyton+mp[view] [source] [discussion] 2023-11-20 01:36:01
>>ekosz+Bk
Exactly. Add to that the personal smearing of one person, and it seems like an unnecessarily negative maneuver.
37. peyton+xp[view] [source] [discussion] 2023-11-20 01:37:25
>>dimask+bi
Why’d they smear Sam? Couldn’t they have released a statement saying they just don’t see eye to eye anymore?
replies(1): >>dimask+aR
38. aunty_+sz[view] [source] [discussion] 2023-11-20 02:50:45
>>skygaz+E7
No, it wasn't. For a long time Sam was the guy.

Then he progressively sold more and more of the company's future to MS.

You don't need ChatGPT and its massive GPU consumption to achieve the goals of OpenAI. With a small research team and a few million, this company becomes a quaint, quiet overachiever.

The company started to hockey-stick and everyone did what they knew: Sam got the investment and money, and the tech team hunkered down and delivered GPT-4 and soon GPT-5.

Was there a different path? Maybe.

Was there a path that didn't lead to selling the company for "laundry buddy"? Maybe also.

On the other hand, MS knew what it was getting into when its hundredth lawyer signed off on the investment. To now turn around as a surprised Pikachu when the board starts to do its job and their man on the ground gets the boot is laughable.

replies(1): >>skygaz+sE
39. aunty_+iA[view] [source] [discussion] 2023-11-20 02:57:26
>>tyrfin+ea
Sorry, I read a quote from one of the people involved in this mess and assumed it was direct from the company charter.

Can you fill me in as to what the goal of OpenAI is?

replies(1): >>tyrfin+Do1
40. skygaz+sE[view] [source] [discussion] 2023-11-20 03:32:44
>>aunty_+sz
You're arguing their most viable path was to fire him, wreak havoc and immediately seek to rehire and further empower him whilst diminishing themselves in the process? It's so convoluted, it just might work!

Whether fulfilling their mission or succumbing to palace intrigue, it was a gamble they took. If they didn't realize it was a gamble, then they didn't think hard enough first. If they did realize the risks but thought they had no choice, then they didn't explore their options sufficiently. They thought their hand was unbeatable. They never even opened the playbook.

replies(1): >>aunty_+bG
41. aunty_+bG[view] [source] [discussion] 2023-11-20 03:52:07
>>skygaz+sE
No, I’m not???
replies(1): >>skygaz+DH
42. skygaz+DH[view] [source] [discussion] 2023-11-20 04:09:30
>>aunty_+bG
Oh, then my apologies; it's unclear to me what you're arguing. That the disaster they find themselves in wasn't foreseeable?

That would imply they couldn't have considered that Altman was beloved by vital and devoted employees? That big investors would be livid and take action? That the world would be shocked by a successful CEO being unceremoniously sacked during unprecedented success, with (unsubstantiated) allegations of wrongdoing, and would leap on the story? Generally those are the kinds of things that would have come up on a "Fire Sam: Pros and Cons" list, or any kind of "what's the best way to get what we want and avoid disaster" planning session. They made the way it was done the story, and if they had a good reason, it's been obscured and undermined by attempting to reinstate him.

43. dimask+tQ[view] [source] [discussion] 2023-11-20 05:25:37
>>peyton+5p
Some would say it is the opposite. The mission of OpenAI was never supposed to be maximising profit/value, especially if it can be argued that doing so goes exactly against its original purpose.
44. dimask+aR[view] [source] [discussion] 2023-11-20 05:28:47
>>peyton+xp
You do not fire a CEO because you hold some personal grudge towards them; you fire them because they did something wrong. And I do not see any evidence or indication of smearing Altman, unless they are lying about it (and I see no indication that they are).
45. dimask+iS[view] [source] [discussion] 2023-11-20 05:33:55
>>ekosz+Bk
I understand that. What is missing is the purpose of running such an organisation. OpenAI has achieved a lot, but is it going in the direction, and towards the purpose, it was founded on? I do not see how one can argue that. For a non-profit, creating value is a means to a goal, not a goal in itself (as opposed to a for-profit org). People thinking that the problem with this move is that it destroys value for OpenAI showcase the real issue perfectly.
46. tyrfin+Do1[view] [source] [discussion] 2023-11-20 08:39:20
>>aunty_+iA
> Creating safe AGI that benefits all of humanity

"Benefit" and "available" can have very different meanings when you mix in alignment/safety concerns.
