Throw in a huge investment, a $90 billion valuation, and a rockstar CEO. It’s pretty clear the court of public opinion is wrong about this case.
It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation. The Company exists to advance OpenAI, Inc.'s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company's duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit. The Company may never make a profit, and the Company is under no obligation to do so. The Company is free to re-invest any or all of the Company's cash flow into research and development activities and/or related expenses without any obligation to the Members.
I guess "safe artificial general intelligence is developed and benefits all of humanity" means an AI that is both open (hence the name) and safe.
It is entirely possible a program that spits out the complete code for a nuclear targeting system should not be released in the wild.
Extra points if Google were to swoop in and buy OpenAI. I think Sundar is probably too sleepy to manage it, but this would be a coup of epic proportions. They could replace their own lackluster GenAI efforts, lock out Microsoft and Bing from ChatGPT (or, if contractually unable to, enshittify the product until nobody cares), and ensure their continued AI dominance. The time to do it is now, when the OpenAI board is down to 4 people, its current leader has prior Google ties, and its interest is in playing with AI as an academic curiosity, which a fat war chest would enable. Plus, if the current board wants to slow down AI progress, one sure way to accomplish that would be to sell it to Google.
Microsoft: I don't think they need it.
Even assuming they have the whole $90B to spend, it doesn't really make sense:
they already have full access to OpenAI's source code and datasets (because all of the training and runtime already runs on their servers).
They could poach employees with better offers, get away with a much more efficient cost basis, and increase employee retention (whereas OpenAI employees may become so rich after a buyout that they could be tempted to leave).
They can no doubt replicate the tech internally, without OpenAI.
Google is in deep trouble for now; perhaps they will recover with Gemini. In theory they could buy OpenAI, but it seems out of character for them. There are strong internal political conflicts within Google, and technically it would be a nightmare to merge the infrastructure and code into their /google3 codebase and its soup of Google-only dependencies.
Whether we can actually safely develop AI or AGI is a much tougher question than whether that's the intent, unfortunately.
Would they also be able to keep up with development?
Wikipedia gives these names:
In December 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research announced[15] the formation of OpenAI and pledged over $1 billion to the venture
Do any of those people sound like their day job was running non-profits? Had any of them EVER worked at a non-profit?
---
So a pretty straightforward reading is that the business/profit-minded guys started the non-profit to lure the idealistic researchers in.
The non-profit thing was a feel-good ruse, a recruiting tool. Sutskever could have had any job he wanted at that point, after his breakthroughs in the field. He also didn't have to work, after his 3-person company was acquired by Google for $40M+.
I'm sure it's more nuanced than that, but it's silly to say that there was an idealistic and pure non-profit, and some business guys came in and ruined it. The motive was there all along.
Not to say I wouldn't have been fooled (I mean, employees certainly got many benefits, which made it worth their time). But in retrospect it's naive to accept their help with funding and connections (e.g. OpenAI's first office was Stripe's office) and not think they would get paid back later.
VCs are very good at understanding the long game. Peter Thiel knows that most of the profits come after 10-15 years.
Altman can take no equity in OpenAI because he's playing the long game. He knows it's just "physics" that he will get paid back later (and that seems to have already happened).
---
Anybody who's worked at a startup that became a successful company has seen this split. The early employees create a ton of value, but that value is only fully captured 10+ years down the road.
And when there are tens or hundreds of billions of dollars of value created, the hawks will circle.
It definitely happened at, say, Google. Early employees didn't capture the value they generated, while later employees rode the wave of the early success. (I was a middle-ish employee, neither early nor late.)
So basically the early OpenAI employees created a ton of value, but they have no mechanism to capture the value, or perhaps control it in order to "benefit humanity".
From here on out, it's politics and money -- you can see that with the support of Microsoft's CEO, OpenAI investors, many peer CEOs from YC, weird laudatory tweets by Eric Schmidt, etc.
The awkward, poorly executed firing of the CEO seems like an obvious symptom of that. It's a last-ditch effort for control, when it's become obvious that the game is unfolding according to the normal rules of capitalism.
(Note: I'm not against making a profit, or non-profits. Just saying that the whole organizational structure was fishy/dishonest to begin with, and in retrospect it shouldn't be surprising it turned out this way.)
Probably, if the people running it and the shareholders were committed to keeping up and to spending the money to do so.
I mean, there are tons of think tanks, advocacy organizations, etc. that write lots of AI safety papers that nobody reads. I'm kind of piqued at the OpenAI board not because I think they had the wrong intentions, but because they failed to see that the "perfect is the enemy of the good."
That is, the board should have realistically known that there will be a huge arms race for AI dominance. Some would say that's capitalism - I say that's just human nature. So the board of OpenAI was in a unique position to help guide AI advancement in as safe a manner as possible because they had the most advanced AI system. They may have thought Altman was pushing too hard on the commercial side, but there are a million better ways they could have fought for AI safety without causing the ruckus they did. Now I fear that the "pure AI researchers" on the side of AI Safety within OpenAI (as that is what was being widely reported) will be even more diminished/sidelined. It really feels like this was a colossally sad own goal.
Sure, Microsoft has physical access to the source code and model weights because it's trained on their servers. That doesn't mean they can just take it. If you've ever worked at a big cloud provider or an enterprise software company, you'll know that there's a big legal firewall around customer data stored within the company's systems: you can't look at it or touch it without the customer's consent, and even then only for specific business purposes.
Same goes for the board. Legally, the non-profit board is in charge of the for-profit OpenAI entity, and Microsoft does not get a vote. If they want the board gone but the board does not want to step down, too bad. They have the option of poaching all the talent and trying to re-create the models - but they have to do this employee-by-employee, they can't take any confidential OpenAI data or code, etc. Microsoft may have OpenAI by the balls economically, but OpenAI has Microsoft by the balls legally.
A buyout solves both of these problems. It's an exchange of economic value (which Microsoft has in spades) for legal control (which the OpenAI board currently has). Straightens out all the misaligned incentives and lets both parties get what they really want, which is the point of transactions in the first place.
IP lawyers would sell their own mothers for a chance to "wanna bet?" Microsoft.
Internal to the entire OpenAI org, it sounds like all we had was the for-profit arm <-> board of directors. Externally, you can add investors and public opinion (which basically defaults to siding with the for-profit arm).
I wish they worked towards something closer to a functional democracy (so not the US or UK), with a judicial system (presumably the board), a congress (non-existent), and something like a triumvirate (presumably the for-profit C-suite). Given their original mission, it would be important to keep the incentives for all 3 separate, except for "safe AI that benefits humanity".
The truly hard to solve (read: impossible?) part is keeping the investors (external) from having an outsize say over any specific branch. If a third internal branch could exist that was designed to offset the influence of investors, that might have resulted in closer to the right balance.
According to the FT, this could be the cause of the firing:
“Sam has a company called Oklo, and [was trying to launch] a device company and a chip company (for AI). The rank and file at OpenAI don’t dispute those are important. The dispute is that OpenAI doesn’t own a piece. If he’s making a ton of money from companies around OpenAI there are potential conflicts of interest.”
A bigger concern would be the construction of a bomb, which still takes a lot of hard-to-hide resources.
I'm more worried about other kinds of weapons, but at the same time I really don't like the idea of censoring the science of nature from people.
I think the only long term option is to beef up defenses.
No they don’t. Both Bard and Llama are far behind GPT-4, and GPT-4 finished training in August 2022.
This is the future that Orwell feared.
Either Sam forms a new company with a mass exodus of employees, or outside pressure changes the structure of OpenAI towards a clear for-profit vision. In both cases, there will be no confusion going forward about whether OpenAI/Sam have become a profit-chasing startup.
Chasing profits is not bad in itself, but doing it under the guise of a non-profit organization is.
You can't judge a non-profit by the same success metrics as a for-profit.
I also personally loathe Microsoft, but even I will concede that they probably have the technical wherewithal to follow known trajectories; the cat is out of the bag with AI now.
And they can pick two. GPUs don't grow on trees, so without billions in funding they can't provide it to everyone.
Available means that I should have access to the weights.
Safe means they want to control what people can use it for.
The board prioritised safe over everything else. I fundamentally disagree with that and welcome the counter coup.
The average postgraduate in physics can design a nuclear bomb. That ship sailed in the 1960s. Anyone who uses that as an argument wants a censorship regime that the medieval Catholic Church would find excessive.
https://chat.openai.com/share/3dd98da4-13a5-4485-a916-60482a...
There are many people who would do great things with god-like powers, but more than enough who would do terrible things.
Mozilla Corporation's Experience
*Challenges and Adaptation:* Mozilla Corporation has faced financial challenges, leading to restructuring and strategic shifts. This includes layoffs, closing offices, and diversifying into new ventures, such as acquiring Fakespot in 2023.
*Dependence on Key Partnerships:* Its heavy reliance on partnerships like the one with Google for revenue has been both a strength and a vulnerability, necessitating adaptations to changing market conditions and partner strategies.
*Evolution and Resilience:* Despite challenges, Mozilla Corporation has shown resilience, adapting to market changes and evolving its strategies to sustain its mission, demonstrating the effectiveness of its governance model within the context of its organizational goals and the broader technology ecosystem.
In conclusion, while both OpenAI and Mozilla Corporation have navigated unique paths within the tech sector, their distinct governance structures illustrate different approaches to balancing mission-driven goals with operational sustainability and market responsiveness.
To be frank, they need to really spell out what "benefitting mankind" is. How is it measured? Or is it measured? Or is it just "the board says this isn't doing that so it's not doing that"?
It's honestly a silly slogan.
I believe that is indeed the case: it is the responsibility of the board to make that call.
I think a better approach is to have a system of guiding principles that should guide everyone, and then to put in place a structure requiring periodic confirmation that those principles aren't being violated (e.g. a vote requiring something like a supermajority across company leadership of all orgs in the company, but with no single org having the "my job is to slow everyone else down" role).
Feel however you will about it, but people have been rattling this pan for decades now. Google's bottom line will exist until someone finds a better way to extract marginal revenue than advertising.
- Not limiting access to a universally profitable technology by making it accessible only to the highest bidder (e.g. hire our virtual assistants for $30k a year).
- Making models with a mind to all threats (existential, job replacement, scam uses)
- Potentially open-sourcing models that are deemed safe
So far I genuinely believe they are doing the first two and leaving billions on the table they could get by jacking their price 10x or more.
If the board were to have any influence they had to be able to do this. Whether this was the right time and the right issue to play their trump card I don't know - we still don't know what exactly happened - but I have a lot more respect for a group willing to take their shot than one that is so worried about losing their influence that they can never use it.
Monumental, like the invention of language or math, but not like a god.
https://twitter.com/elonmusk/status/1726376406785925566?s=61...
At this point that coming from Elon may not be the endorsement you think it is.
Also maybe Elon sees that Ilya is going to be ousted and wants to extend a hand to him before others do.
Right now, OpenAI mostly has a big cost advantage; fully exploiting that requires lower pricing and high volume.
If a model is not safe, the access should be limited in general.
Or, from a business-model perspective: a 'sane' nonprofit doing what OpenAI does should, at least in my mind, be able to do the following harmoniously:
1. Release new models that do the same things they give others access to via their 'products', with reasonable instructions on how to run them on-prem (i.e. I'm not saying what they do has to be fully runnable on a single local box, but it should be reproducible by a nonprofit purportedly geared towards research).
2. Provide online access to models with a cost model that lets others use them while furthering the foundation.
3. Provide enough overall value in what they do that outside parties invest regardless of whether they are guaranteed a specific individual return.
4. Not allow potentially unsafe models to be made available through anything less than both research branches.
Perhaps, however, I am too idealistic.
On the other hand, Point 4 is important because, under the current model, we can never know whether a previously unsafe model has been truly 'patched' for all variations of that model.
OTOH, if a given model would violate Point 4, I do not trust the current org to properly disclose the gaps it finds; it's easier to quietly patch the UI and intermediate layers than to ask whether a fix can be worked around with different wording.
Now, I feel even just "OK" agential AI would represent god-like abilities. Being able to spawn digital homunculi that do your bidding for relatively cheap and with limited knowledge and skill required on the part of the conjuror.
Again, this is very subjective. You might feel that god-like means an entity that can build Dyson Spheres and bend reality to its will. That is certainly god-like, but a much higher threshold than what I'd use.
While Google did do a good job milking knowledge from its queries and interaction data and improving on it, OpenAI surely knows even better how to get information from high-quality textual data.
OpenAI made an interface where you can just speak your natural language; it didn't make you learn its own pool of keyword jargon, a bastardized quasi-command language. It's way more natural.
One wishes someone had pulled a similar (in sentiment) move on energy companies and arms suppliers.
I bet Google has already spent an order of magnitude more money on GPT-4 rival development than OpenAI spent on GPT-4.
Whether it was "smearing" or uncovering actual wrongdoing depends on the facts of the matter, which will hopefully emerge in due course. A board should absolutely be able and willing to fire the CEO, oust the chairman, and jeopardize supplier relationships if the circumstances warrant it. They're the board, that's what they're for!
Then he progressively sold more and more of the company's future to MS.
You don't need ChatGPT and its massive GPU consumption to achieve the goals of OpenAI. With a small research team and a few million, this company becomes a quaint, quiet overachiever.
The company started to hockey-stick and everyone did what they knew: Sam got the investment and the money; the tech team hunkered down and delivered GPT-4 and soon GPT-5.
Was there a different path? Maybe.
Was there a path that didn’t lead to selling the company for “laundry buddy”? Maybe also.
On the other hand, MS knew what they were getting into when its hundredth lawyer signed off on the investment. To now turn around as a surprised Pikachu when the board starts to do its job and their man on the ground gets the boot is laughable.
Can you fill me in as to what the goal of OpenAI is?
Whether fulfilling their mission or succumbing to palace intrigue, it was a gamble they took. If they didn't realize it was a gamble, then they didn't think hard enough first. If they did realize the risks, but thought they must, then they didn't explore their options sufficiently. They thought their hand was unbeatable. They never even opened the playbook.
Other companies tried competing against Chrome, and so far Mozilla is the most successful, as everyone else gave up and ships Chrome skins that people basically only use through subterfuge or coercion. I'd say that's pretty good.
That would imply they couldn’t have considered that Altman was beloved by vital and devoted employees? That big investors would be livid and take action? That the world would be shocked by a successful CEO being unceremoniously sacked amid unprecedented success, with (unsubstantiated) allegations of wrongdoing, and leap on the story? Generally, those are the kinds of things that would have come up on a "Fire Sam: Pros and Cons" list, or in any kind of "what's the best way to get what we want and avoid disaster" planning session. They made the way it was done the story, and if they had a good reason, it's been obscured and undermined by attempting to reinstate him.
I despise both of these companies, but Google's advantage here is so blatantly obvious that I struggle to see how you can even defend OpenAI like this.
Exactly. Google has so many more resources, tries so hard to compete (it's literally life or death for them), and yet it's still so far behind. It's strange that you don't see that - if you haven't tried comparing Bard's output to GPT-4 for the same questions, try it; it will become obvious.
It's quite possible their rumored Gemini model might finally catch up with GPT-4 at some point in the future - probably around the time GPT-5 is released.
---
A Nobel Prize was awarded to Ilya Prigogine in 1977 for his contributions in irreversible thermodynamics. At his award speech in Stockholm, Ilya showed a practical application of his thesis.
He derived that, in times of superstability, lost trust is directly reversible by removing the cause of that lost trust.
He went on to show that in disturbed times, lost trust becomes irreversible. That is, in unstable periods, management can remove the cause of trust lost--and nothing happens.
Since his thesis is based on mathematical physics, it occupies the same niche of certainty as the law of gravity. Ignore it at your peril.
-- Design for Prevention (2010)
Benefit and available can have very different meanings when you mix in alignment/safety concerns.
From an indexing/crawling POV, the content generated by LLMs might (and IMO will) permanently defeat spam filters, which would in turn cause Google (and everyone else) to permanently lose the war against spam SEO. That might be an existential threat to the value of the web in general, even as an input (for training and for web search) for LLMs.
LLMs might already be good enough to degrade the benefit of freedom of speech via signal-to-noise ratio (even if you think LLMs are "just convincing BS generators"), so I'm glad the propaganda potential is one of the things the red team were working on before the initial release.
Soon (1-2 years) LLMs will be good enough to improve the general SNR of the web. In fact I think GPT-4 might already be.