"resulting technology will benefit the public and the corporation will seek to open source technology for the public benefit when applicable. The corporation is not organized for the private gain of any person"
I don't think it is going to be hard to show that they are doing something very different than what they said they were going to do.
Is it unchangeable?
A single quote doesn't tell us much.
This gave us Citizens United v. Federal Election Commission, 558 U.S. 310, the case on their right to speech and to spend funds.
I am not a lawyer, I am cynical
Corporations count as legal persons when it benefits them
No one is alleging OpenAI committed tax fraud.
"The specific purpose of this corporation is to provide funding for research, development and distribution of technology related to artificial intelligence. The resulting technology will benefit the public and the corporation will seek to open source technology for the public benefit when applicable."
Based on this, it would be extremely hard to show that they are doing something very different from what they said they were going to do, namely, fund the research and development of AI technology. They state that the technology developed will benefit the public, not that it will belong to the public, except "when applicable."
It's not illegal for a non-profit to have a for-profit subsidiary earning income; many non-profits earn a substantial portion of their annual revenue from for-profit activities. The for-profit subsidiary/activity is subject to income tax. That income then goes to the non-profit parent and can be used to fund the non-profit mission...which it appears they are doing. It would only be a private benefit issue if the directors or employees of the non-profit were to receive an "excess benefit" from the non-profit (generally meaning salary, benefits, or other remuneration in excess of what is appropriate based on the market).
The charter is not a contract with Musk. He has no more standing than you or I.
He likely could bring some issue before the Delaware court as was done to him recently.
Such mission statements are generally modifiable as long as the new purpose is still charitable. It depends on the bylaws though.
Note: I am just spitballing. I cannot speak definitively about the law or what the GP was saying.
He has no ownership stake. He isn't a director or member of the organization. The thing he claims is a contract he's party to, isn't.
(It wouldn't be the first time someone made a nerd-cult: Aum Shinrikyo was full of physics grad students and had special mind-reading hats. Though that was unironically a cult. Whereas the others were started explicitly as grifts.)
It's like they have no shame.
The separate entity is the one going for revenue.
I've skimmed the complaint now. There seems to be prima facie evidence of a contract there (though we'll see if the response suggests a lot of context was omitted). I find the Promissory Estoppel COA even more compelling, though. Breach of Fiduciary Duty seems like a stretch using "the public" as a beneficiary class. This isn't really my area, but I'll be mildly surprised if that one doesn't get tossed. Don't know enough about the Unfair Business Practices or CA Accounting requirements to have any opinion whatsoever on those. The Prayer for Relief is wild, but they often are.
So, once again, I have absolutely zero idea whether OpenAI can be held accountable for not following their charter, but if they can, anyone can raise a complaint, and since Musk did give them money to save dolphins or whatever, he may actually be considered the victim.
The idea of corporations as legal persons predates the United States. English law recognised trade guilds and religious orders as legal persons as early as the 14th century. There is nothing specifically American about the idea at all; the US inherited it from English law, as did all other common law countries. And English law didn’t invent it either: similar concepts existed in mediaeval Catholic canon law (religious orders as legal persons) and even in Ancient Roman law (which granted legal personhood to pre-Christian priestly colleges).
It goes back to 1886 [1]. Ditching corporate personhood just makes the law convoluted for no gain. (Oh, you forgot to say corporations in your murder or fraud statute? Oh no!)
He was defrauded. If OpenAI fails, there is a good chance Altman et al get prosecuted.
There's a moral argument perhaps...but from a layman's perspective it's a really dumb case. Now, dumb cases sometimes win, so who knows.
You might think that also suggests that the values no longer matter, but that would be to say that the only way to prove that something matters is with money or money equivalents. To “put your money where your mouth is,” if you will.
Open source. Check - they have open source software available.
Private Gain of any person. Check. (Not hard to see it's a non-profit; people making private money from a non-profit are obviously excluded.) Now to me, personally, I think all non-profits are for-profit enterprises. The "mission" in nearly all cases isn't for the people it serves. I've seen so many "help the elders" and "help the migrants" organizations, but the reality is that money always flows up, not to the people in need.
In case anyone is confused I am referring to 126, 132 and 135. Not 127.
"126. As a direct and proximate result of Defendants breaches, Plaintiff has suffered damages in an amount that is presently unknown, but that substantially exceeds this Courts jurisdictional minimum of $35,000, and, if necessary, will be proven at trial.
127. Plaintiff also seeks and is entitled to specific performance of Defendants contractual obligations.
132. Injustice can only be avoided through the enforcement of Defendants repeated promises. If specific enforcement is not awarded, then Defendants must at minimum make restitution in an amount equal to Plaintiffs contributions that have been misappropriated and by the amount that the intended third-party beneficiaries of the Founding Agreement have been damaged [how??], which is an amount presently unknown, and if necessary, will be proven at trial, but that substantially exceeds this Courts jurisdictional minimum of $35,000.
135. As a direct and proximate result of Defendants breaches of fiduciary duty, Plaintiff and the express intended third-party beneficiaries of the Founding Agreement have suffered damages in an amount that is presently unknown, but substantially exceeds this Courts jurisdictional minimum of $35,000, and if necessary, will be proven at trial."
The end result of this suit, if it is not dismissed, may be nothing more than OpenAI settling with the plaintiffs for an amount equal to the plaintiffs' investments.
According to this complaint, we are supposed to be third-party beneficiaries to the founding agreement. But who actually believes we would be compensated in any settlement? Based on these claims, the plaintiffs clearly want their money back. Of course they are willing to claim "the public" as TPBs to get their refund. Meanwhile, in real life, their concern for "the public" is dubious.
Perhaps the outcome of the SEC investigation into Altman's misrepresentations to investors, if any, may be helpful to these plaintiffs.
Effective altruism, eh?
Does it become applicable to open source when "The resulting technology will benefit the public"?
That seems the clearest read.
If so, OpenAI would have to argue that open sourcing their models wouldn't benefit the public, which... seems difficult.
They'd essentially have to argue that the public paying OpenAI to use an OpenAI-controlled model is more beneficial.
Or does "when applicable" mean something else? And if so, what? There aren't any other sentences around that indicate what else. And it's hard to argue that their models technically can't be open sourced, given other open source models.
Technologies are never "done" unless and until they are abandoned. Would it be reasonable for OpenAI to only open source once the product is "done" because it is obsolete or failed to meet performance metrics?
And is that open sourcing of the training algorithm, the inference engine, or the trained model itself?
IMO the only real involvement OpenAI has had in that movement is suddenly getting REAL hand-wringy in front of Congress about how dangerous AI is the moment OpenAI no longer held the only set of keys to the kingdom.
It is safer to operate an AI in a centralized service, because if you discover dangerous capabilities you can turn it off or mitigate them.
If you open-weight the model and dangerous capabilities are later discovered, there is no way to put the genie back in the bottle; the weights are out there, and anyone can use them.
This of course applies both to mundane harms (e.g. generating deepfake porn of famous people) and to existential risks (e.g. power-seeking behavior).
This makes absolutely no sense.
After all, the AOI doesn't specify who determines "when applicable," or how "when applicable" is determined, or even when "when applicable" is determined. Without any of those, "when applicable" is a functionally meaningless phrase, intended to mollify unsavvy investors like Musk without constraining or binding the entity in any way.
> If so, OpenAI would have to argue that open sourcing their models wouldn't benefit the public, which... seems difficult.
No, they don't have to do anything at all, since they get to decide when "when applicable" applies. And how. And to what...
Or does "when applicable" mean something else? And if so, what? There aren't any other sentences around that indicate what else. And it's hard to argue that their models technically can't be open sourced, given other open source models.
Exactly. That's the problem. There needs to be more to make "when applicable" mean something, and the lawyers drafting the agreement deliberately left that out because it's not intended to mean anything.
Is it normal in the startup world to dramatically change the formula for investment and donation rounds?
It sounds like I donate to a soup kitchen for the homeless and a high-end restaurant chain comes out the other end, complete with investors.
Curious what the court might find. It is certainly interesting drama (again)
The risk is that he’s too confident and screws it up. Or continues on the growth path and becomes the person everyone seems to accuse him of being. But I think he’s not interested in petty shit, scratching around for a few bucks. Why, when you can (try to) save the world?
That doesn’t seem aligned with their articles of incorporation at all. If “when applicable” is wide enough to drive a profit-maximising bus through, they’re not a not-for-profit. And in that case, why bother with the AOI?
The articles of incorporation aren’t a contract. I don’t know enough law to be able to guess how it’ll be interpreted in court, but intuitively Elon seems to have a point. If you want to take the AOI seriously, Sam Altman’s OpenAI doesn’t pass the pub test.
If you need money to run the publicly released thing you underpriced to seize market share...
... you could also just, not?
And stick to research and releasing results.
At what point does it stop being "necessary" for OpenAI to do bad things to stay competitive and start being about them just running the standard VC playbook underneath a non-profit umbrella?
After ChatGPT's model was withheld from the public, every for-profit raced to reproduce and improve on it. The decision not to release early and often, and to use a restrictive license, helped create that competition for funds and talent. If the company had been truly open, competitors would have had a choice: move quickly, spend less money, and contribute to the common core, or spend more money, go slower as they clean-room implemented the open code they couldn't use, and try to compete alone. This might have been a huge win for the open source model, making contributing to the commons the profitable decision.
The for-profit entity is allowed to act in the interest of profits.
What is important is that the non-profit must use the dividends it receives from the for-profit entity in furtherance of its stated non-profit mission.
Elon does not have a point. He's simply proving that he is once again the dumbest guy in the room by failing to do basic due diligence with respect to his multi-million-dollar donation.
That being said, Altman is also doing sketchy things with OpenAI. But that was part of the reason why they created the for-profit entity: so Altman could do sketchy things that he could not do within the nonprofit entity. Regulators might be able to crack down on some of the sketch, but he's going to be able to get away with a lot of it.
A corporation has the right to "speech" but if crimes are committed, rest assured it will not go to jail, and neither will its executives, protected by layers of legal indirection of this "person corporation".
"The secret history of Elon Musk, Sam Altman, and OpenAI" - https://www.semafor.com/article/03/24/2023/the-secret-histor...
But that was to be expected from the guy who forced his employees to go to work during Covid and then claimed danger of Covid infection to avoid showing up at a Twitter acquisition deposition...
"Tesla gave workers permission to stay home rather than risk getting covid-19. Then it sent termination notices." - https://www.washingtonpost.com/technology/2020/06/25/tesla-p...
"Musk declined to attend in-person Twitter deposition, citing COVID exposure risk" - https://thehill.com/regulation/court-battles/3675282-musk-de...
This seems to make a decent argument that these models are potentially not safe. I'd prefer that criminals not have access to a PhD-level bomb-making assistant that can explain the process to them like they're 12. While the cat may be out of the bag, you don't just hand out guns to everyone (for free) because a few people misused them.
I can't think of another example of a nonprofit that was so financially viable that it converted to for-profit though, usually a nonprofit just closes down.
Not sure I'd trust the Washington Post to present the story accurately, i.e., whether the termination notices were actually related to the premise presented.
Did he attend the Twitter deposition via video? Seems like a hit piece.
"Altman and OpenAI’s other founders rejected Musk’s proposal. Musk, in turn, walked away from the company — and reneged on a massive planned donation. The fallout from that conflict, culminating in the announcement of Musk’s departure on Feb 20, 2018..."
The Russians will obviously use it to spread Kremlin's narratives on the Internet in all languages, including Klingon and Elvish.
The law doesn’t work that way. It’s not as simple as people I like should win and people I don’t should lose.
The fact you provided references and links implies you actually believe you are making a coherent case
https://www.cyberark.com/resources/blog/apt29s-attack-on-mic...
It's very hard to argue that when you give 100,000 people access to materials that are inherently worth billions, none of them are stealing those materials. Google has enough leakers to conservative media of all places that you should suspect that at least one Googler is exfiltrating data to China, Russia, or India.
Billionaires' motives, their weird obsession with saving the world, and the damaged psyches that drive a never-ending need for absurd accumulation of wealth have a direct impact on my daily life, and are therefore more interesting.
Huh? There's no secret to building these LLM-based "AI"s - they all use the same "transformer" architecture that was published by Google. You can find step-by-step YouTube tutorials on how to build one yourself if you want to.
All that OpenAI did was build a series of progressively larger transformers, trained on progressively larger training sets, and document how the capabilities expanded as you scaled them up. Anyone paying attention could have done the same at any stage if they wanted to.
The expense of recreating what OpenAI has built isn't in reverse-engineering some architecture they've kept secret. The expense is in obtaining the training data and training the model.
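For illustration, here's a minimal single transformer block in PyTorch. This is a rough sketch following the published architecture; the dimensions, names, and pre-norm layout are illustrative choices, not OpenAI's actual code:

    import torch
    import torch.nn as nn

    class TransformerBlock(nn.Module):
        """One pre-norm transformer block: self-attention plus feed-forward."""
        def __init__(self, d_model=512, n_heads=8):
            super().__init__()
            self.ln1 = nn.LayerNorm(d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ln2 = nn.LayerNorm(d_model)
            self.ff = nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )

        def forward(self, x):
            # A real GPT block would also pass a causal attn_mask here so
            # tokens can't attend to future positions.
            h = self.ln1(x)
            attn_out, _ = self.attn(h, h, h, need_weights=False)
            x = x + attn_out              # residual connection around attention
            x = x + self.ff(self.ln2(x))  # residual connection around feed-forward
            return x

    block = TransformerBlock()
    tokens = torch.randn(1, 16, 512)      # (batch, sequence, d_model)
    print(block(tokens).shape)            # torch.Size([1, 16, 512])

A GPT-style model is essentially an embedding layer, a stack of blocks like this, and a projection back to the vocabulary. The architecture fits on one screen; the moat is the data and the compute.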
Musk's money kept the lights on during a time when OpenAI didn't do much more than get a computer to play Dota. If he wants the proceeds of what his money bought, then they should write him a check for $0, or ship him a garbage can full of wrappers from the tacos the developers ate during that time period.
Musk's influence in attracting/retaining talent is rather a mixed bag given that he poached Karpathy for Tesla around the same time he left.
I think the person you're thinking of, whom Musk helped recruit for OpenAI, is Ilya Sutskever. The person who just left, after a second brief stint at OpenAI, is Karpathy, who for the time being seems content to go back to his roots as an educator.
Indeed, it’s not widespread even now; lots of folks round here are still confused by “open weight sounds like open source and we like open source”, and Elon is still charging towards fully open models.
(In general I think if you are more worried about a baby machine god owned and aligned by Meta than complete annihilation from unaligned ASI then you’ll prefer open weights no matter the theoretical risk.)
Now why are you being obtuse and ignoring another real reason: Elon Musk was poaching people from OpenAI, which created a conflict of interest.
"Elon Musk leaves board of AI safety group to avoid conflict of interest with Tesla" - https://www.theverge.com/2018/2/21/17036214/elon-musk-openai...
Of all the favorite HN memes, the two that most need to evaporate are that Elon Musk wants to save humanity and that Sam Altman does not care about money...
(Not saying OpenAI isn't greedy)
And yeah, Sam cares about money and some other things, it seems.
Also, the root "corp[us]" literally means "body".
Corporations are Frankensteins, basically.
How sure are you of that? Seems to me it could at least equally validly be claimed that that is precisely what it is.
> He has no more standing than you or I.
Did you finance OpenAI when it started? I didn't.
I don't think there is such a thing. Once you co-found something, you are forever a co-founder of it. (Unless you have a time machine. Lacking such, nobody has ever un-founded anything, have they?)