I genuinely can't believe the board didn't see this coming. I think they could have won in the court of public opinion if their press release said they loved Sam but felt like his skills and ambitions diverged from their mission. But instead, they tried to skewer him, and it backfired completely.
I hope Sam comes back. He'll make a lot more money if he doesn't, but I trust Sam a lot more than whomever they ultimately replace him with. I just hope that if he does come back, he doesn't use it as a chance to consolidate power – he's said in the past it's a good thing the board can fire him, and I hope he finds better board members rather than eschewing a board altogether.
EDIT: Yup, Satya is involved https://twitter.com/emilychangtv/status/1726025717077688662
I doubt he returns, now he can start a for profit AI company, poach OpenAI's talent, and still look like the good guy in the situation. He was apparently already talking to Saudis to raise billions for an Nvidia competitor - >>38323939
Have to wonder how much this was contrived as a win-win: either the OpenAI board does what he wants, or he gets a free out to start his own company without looking like he's purely chasing money.
- https://www.nytimes.com/2023/11/18/technology/ousted-openai-...
- https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...
Source: https://arstechnica.com/information-technology/2023/11/repor...
E.g. https://www.thehartford.com/management-liability-insurance/d...
"The Who, What & Why of Directors & Officers Insurance
The Hartford has agents across the country to help with your insurance needs. Directors and officers (D&O) liability insurance protects the personal assets of corporate directors and officers, and their spouses, in the event they are personally sued by employees, vendors, competitors, investors, customers, or other parties, for actual or alleged wrongful acts in managing a company.
The insurance, which usually protects the company as well, covers legal fees, settlements, and other costs. D&O insurance is the financial backing for a standard indemnification provision, which holds officers harmless for losses due to their role in the company. Many officers and directors will want a company to provide both indemnification and D&O insurance."
What exactly do YOU mean by safety? That they go at the pace YOU decide? Does it mean they make a "safe space" for YOU?
I've seen nothing to suggest they aren't "being safe". Actually ChatGPT has become known for censoring users "for their own good" [0].
The argument I've seen is: one "side" thinks things are moving too fast, therefore the side that wants to move slower is the "safe" side.
And that's it.
[0] https://en.wikipedia.org/wiki/Silicon_Valley_(TV_series)
I'm glad that there are other companies and open source efforts to fall back on.
As an API user of the GPT models I've always had it at the back of my mind that it would be unwise to 100% rely on OpenAI for the core of any product I built.
The recent rocking of the boat is further justification for my stance in that regard.
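The worry about relying 100% on one vendor is usually handled with a thin provider abstraction so the core product never touches a vendor SDK directly. A minimal sketch of the idea (all class and function names here are hypothetical, not OpenAI's actual SDK):

```python
from typing import Protocol


class ChatProvider(Protocol):
    """The only interface the product depends on, instead of one vendor's SDK."""
    def complete(self, prompt: str) -> str: ...


class OpenAIProvider:
    """Thin adapter around the vendor SDK (the real call is elided here)."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wire the vendor SDK call in here")


class LocalEchoProvider:
    """Stand-in backend (e.g. a self-hosted open model) used as a fallback."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def summarize(text: str, provider: ChatProvider) -> str:
    # Product code sees only the interface, so the vendor is swappable.
    return provider.complete(f"Summarize: {text}")


print(summarize("hello", LocalEchoProvider()))  # → echo: Summarize: hello
```

Swapping vendors then means writing one new adapter rather than rewriting the product.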
I don't understand how Microsoft, after having invested billions, doesn't have a board seat. If they did, I doubt this would've ever happened. I'm not sure why Microsoft let that happen.
But even ignoring that, the board making a decision as impactful as this without consulting their major investors is a dereliction of duty. That alone justifies getting rid of all of them because all of them are complicit in not consulting Microsoft (and probably others).
I have no idea why Sam was fired but it really feels just like an internal power struggle. Maybe there was genuine disagreement about the direction for the company but you choose a leader to make decisions. Ousting the CEO under vague descriptions of "communications with the board" just doesn't pass the smell test.
I'm reminded of this great line from Roger Sterling [1]:
> Half the time this business comes down to "I don't like this guy"
So much of working, performance reviews, hiring and firing decisions and promotions is completely vibes-based.
"A man has been crushed to death by a robot in South Korea after it failed to differentiate him from the boxes of food it was handling, reports say."
That being said, here's my steelman argument: Sam is scared of the ramifications of AI, especially financially. He's experimenting with a lot of things, such as Basic Income (https://www.ycombinator.com/blog/basic-income), rethinking capitalism (https://moores.samaltman.com/) and Worldcoin.
He's also likely worried about what happens if you can't tell who is human and who isn't. We will certainly need a system at some point for verifying humanity.
Worldcoin doesn't store iris information; it just stores a hash for verification. It's an attempt to make sure everyone gets one, and to keep things fair and more evenly distributed.
(Will it work? I don't think so. But to call it an eyeball identity scam and dismiss Sam out of hand is wrong)
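The "stores only a hash" claim boils down to one-way digests plus a uniqueness check at enrollment. A toy sketch of that idea (real biometric matching is fuzzy and needs error-tolerant encodings before any hashing, so this is only the concept, not Worldcoin's actual scheme):

```python
import hashlib


def iris_hash(template: bytes) -> str:
    # One-way digest: the raw template can be discarded after hashing.
    return hashlib.sha256(template).hexdigest()


enrolled: set[str] = set()


def enroll(template: bytes) -> bool:
    """Register a person once; a repeat enrollment hits the same hash and is rejected."""
    h = iris_hash(template)
    if h in enrolled:
        return False  # this iris was already enrolled
    enrolled.add(h)
    return True


print(enroll(b"alice-iris"))  # → True (first enrollment)
print(enroll(b"alice-iris"))  # → False (duplicate rejected)
```

The point is that the stored set lets you enforce one-person-one-share without retaining the biometric itself.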
Per https://www.semafor.com/article/11/18/2023/openai-has-receiv...
That combination could mean firing the CEO results in Microsoft getting to have everything, and OpenAI being some code and models without a cloud, plus whichever people wouldn't cross the street with Altman.
I do not know about OpenAI's deal with Microsoft. But I have been on both sides of deals written that way, where I've been the provider's key person and the contract offered code escrow, and I've been a buyer that tied the contract to a set of key persons and had full source code rights, surviving any agreement.
You do this if you think the tech could be existential to you, and you pay a lot for it because effectively you're pre-buying the assets after some future implosion. OTOH, it tends to be not well understood by most people involved in the hundreds of pages of paperwork across a dozen or more interlocking agreements.
. . .
EDIT TO ADD:
This speculative article seems to agree with my speculation: daddy has the cloud car keys, and a key-person ouster could be a breach:
Only a fraction of Microsoft’s $10 billion investment in OpenAI has been wired to the startup, while a significant portion of the funding, divided into tranches, is in the form of cloud compute purchases instead of cash, according to people familiar with their agreement.
That gives the software giant significant leverage as it sorts through the fallout from the ouster of OpenAI CEO Sam Altman. The firm’s board said on Friday that it had lost confidence in his ability to lead, without giving additional details.
One person familiar with the matter said Microsoft CEO Satya Nadella believes OpenAI’s directors mishandled Altman’s firing and the action has destabilized a key partner for the company. It’s unclear if OpenAI, which has been racking up expenses as it goes on a hiring spree and pours resources into technological developments, violated its contract with Microsoft by suddenly ousting Altman.
https://www.semafor.com/article/11/18/2023/openai-has-receiv...
The fundamental thing you are missing here is that the charter of the non-profit and structure of their ownership of the for-profit (and the for-profit's operating agreement) is all designed in a way that is supposed to eliminate financial incentives for stakeholders as being the thing that the company and non-profit are beholden to.
It may turn out that the practical reality is different from the intent, but everything you're talking about was a feature and not a bug of how this whole thing was set up.
Near as I can tell they never actually launched a product. Their webpage is a GoDaddy parked domain page. Their Facebook page is pictures of them attending conferences and sharing their excitement for what Boston Dynamics and other ACTUAL robotics companies were doing.
>she launched with a colleague from Singularity University
https://en.wikipedia.org/wiki/Singularity_Group
Just lol.
>then cofounded GeoSim Systems
Seems to be a consulting business for creating digital twins that never really got off the ground.
https://www.linkedin.com/in/tasha-m-25475a54/details/experie...
It doesn't appear she's ever had a real job. Someone in the other thread commented that her profile reeks of a three-letter-agency plant. Possible. Either that or she's just a dabbler funded by her actor husband.
So MS shows who's in control. Say goodbye to OpenAI.
From now on it's all for MS's profit only.
https://pbs.twimg.com/media/F_QXAKEW0AAQpPC?format=png&name=...
https://www.theverge.com/2013/4/12/4217794/jeff-bezos-letter...
Having shown this was possible, he could easily go do it elsewhere.
> Don’t piss off an irreplaceable engineer or they’ll fire you. not taking any sides here.
One scientist's power trip (Ilya is not an engineer) triggers the power fantasy of the extremely online.
This corporate structure is so convoluted that it's difficult to figure out what the actual powers/obligations of the individual agents involved are.
From 2016: https://www.nytimes.com/2018/04/19/technology/artificial-int...
To 2023: https://www.businessinsider.com/openai-recruiters-luring-goo...
(via >>38325611 , but we merged those comments hither)
From https://www.forbes.com/sites/alexkonrad/2023/11/17/these-are... (linked in OP)
I'd be interested in a discussion of the merits of "traditional governance" here. Traditional private companies are focused on making a profit, even if that has negative side effects like lung cancer or global warming. If OpenAI is supposed to shepherd AGI for all humanity, what's the strongest case for including "traditional governance" type people on the board? Can we be explicit about the benefits they bring to the table, if your objective is humanitarian?
Personally I would be concerned that people who serve on for-profit boards would have the wrong instinct, of prioritizing profit over collective benefit...
I would definitely say the board screwed up.
https://www.forbes.com/sites/alexkonrad/2023/11/17/openai-in...
Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.
Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
CEO of Cloudflare explaining: >>19828702
I don't understand how it isn't clear to you.
Calling that a truce makes as much sense as Monty Python’s Black Knight calling the fight a draw.
That's functionally true, but more complicated. The for profit "OpenAI Global LLC" that you buy ChatGPT subscriptions and API access from and in which Microsoft has a large direct investment is majority-owned by a holding company. That holding company is itself majority owned by the nonprofit, but has some other equity owners. A different entity (OpenAI GP LLC) that is wholly owned by the nonprofit controls the holding company on behalf of the nonprofit and does the same thing for the for-profit LLC on behalf of the nonprofit (this LLC seems to me to be the oddest part of the arrangement, but I am assuming that there is some purpose in nonprofit or corporate liability law that having it in this role serves.)
https://openai.com/our-structure and particularly https://images.openai.com/blob/f3e12a69-e4a7-4fe2-a4a5-c63b6...
> OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.
My interpretation of events is the board believes that Altman's actions have worked against the interest of building an AGI that benefits all of humanity - concentrating access to the AI to businesses could be the issue, or the focus on commercialization of the existing LLMs and chatbot stuff causing conflict with assigning resources to AGI r&d, etc.
Of course no one knows for sure except the people directly involved here.
He never made the PR and was just there to ask me to implement the thing for his own benefit...
>non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.
From Forbes [1]
Adam D’Angelo, the CEO of answers site Quora, joined OpenAI’s board in April 2018. At the time, he wrote: “I continue to think that work toward general AI (with safety in mind) is both important and underappreciated.” In an interview with Forbes in January, D’Angelo argued that one of OpenAI’s strengths was its capped-profit business structure and nonprofit control. “There’s no outcome where this organization is one of the big five technology companies,” D’Angelo said. “This is something that’s fundamentally different, and my hope is that we can do a lot more good for the world than just become another corporation that gets that big.”
Tasha McCauley is an adjunct senior management scientist at RAND Corporation, a job she started earlier in 2023, according to her LinkedIn profile. She previously cofounded Fellow Robots, a startup she launched with a colleague from Singularity University, where she'd served as a director of an innovation lab, and then cofounded GeoSim Systems, a geospatial technology startup where she served as CEO until last year. With her husband Joseph Gordon-Levitt, she was a signer of the Asilomar AI Principles, a set of 23 AI governance principles published in 2017. (Altman, OpenAI cofounder Ilya Sutskever and former board director Elon Musk also signed.)
McCauley currently sits on the advisory board of British-founded international Center for the Governance of AI alongside fellow OpenAI director Helen Toner. And she’s tied to the Effective Altruism movement through the Centre for Effective Altruism; McCauley sits on the U.K. board of the Effective Ventures Foundation, its parent organization.
Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, joined OpenAI’s board of directors in September 2021. Her role: to think about safety in a world where OpenAI’s creation had global influence. “I greatly value Helen’s deep thinking around the long-term risks and effects of AI,” Brockman said in a statement at the time.
More recently, Toner has been making headlines as an expert on China’s AI landscape and the potential role of AI regulation in a geopolitical face-off with the Asian giant. Toner had lived in Beijing in between roles at Open Philanthropy and her current job at CSET, researching its AI ecosystem, per her corporate biography. In June, she co-authored an essay for Foreign Affairs on “The Illusion of China’s AI Prowess” that argued — in opposition to Altman’s cited U.S. Senate testimony — that regulation wouldn’t slow down the U.S. in a race between the two nations.
[1] https://www.forbes.com/sites/alexkonrad/2023/11/17/these-are...
He convinced other members of the board that Sam was not the right person for their mission. The original statement implies that Ilya expected Greg to stay at OpenAI, but Ilya seems to have miscalculated his backing.
This appears to be a power struggle between the original nonprofit vision of Ilya, and Sam's strategy to accelerate productionization and attract more powerful actors and investors.
https://www.bloomberg.com/news/articles/2023-11-18/openai-bo...
You’re probably right because people usually don’t have an appetite for risk, but OpenAI is still a startup, and one does not join a startup without an appetite for risk. At least before ChatGPT made the company famous, which was recent.
I’d follow Sam and Greg. But N=1 outsider isn’t too persuasive.
Here is what I understand by table stakes: https://brandmarketingblog.com/articles/branding-definitions...
not that those are necessarily bad in all ways but they sure do contribute to unpredictability
He talks about how learning ML made him feel like a beginner again on his blog (which was a way for him to attract talent willing to learn ML to OpenAI) https://blog.gregbrockman.com/its-time-to-become-an-ml-engin...
The playbook, a source told Forbes, would be straightforward: make OpenAI's new management, under acting CEO Mira Murati and the remaining board, accept that their situation was untenable through a combination of mass revolt by senior researchers, withheld cloud computing credits from Microsoft, and a potential lawsuit from investors. https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...
> OpenAI's chief strategy officer, Jason Kwon, told employees in a memo just now he was "optimistic" OpenAI could bring back Sam Altman, Greg Brockman and other key employees. There will likely be another update mid-morning tomorrow, Kwon said.
https://x.com/miramurati/status/1726126391626985793
Also, she left her bio as "CTO @OpenAI".
https://www.theguardian.com/technology/2023/nov/18/earthquak...
"No Priors Interview with OpenAI Co-Founder and Chief Scientist Ilya Sutskever" - >>38324546
https://www.businessinsider.com/macintosh-calculator-2011-10
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
https://www.businessinsider.nl/y-combinator-basic-income-tes...
But with compromises, as it was like applying lossy compression to an already compressed data set.
If any other organisation could invest the money in a high-quality data pipeline, then the results should be as good — at least that's my understanding.
[1] https://crfm.stanford.edu/2023/03/13/alpaca.html [2] https://newatlas.com/technology/stanford-alpaca-cheap-gpt/
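The Alpaca approach linked above is essentially such a data pipeline: collect (instruction, output) pairs from a stronger "teacher" model, then fine-tune a smaller model on them. A minimal sketch of the dataset-building step (`teacher` is a hypothetical stand-in; Alpaca queried OpenAI's text-davinci-003, and the actual fine-tuning step is omitted):

```python
def teacher(instruction: str) -> str:
    # Placeholder for a call to the stronger model.
    return f"answer to: {instruction}"


def build_dataset(seed_instructions: list[str]) -> list[dict[str, str]]:
    # Each record becomes one supervised fine-tuning example
    # for the smaller "student" model.
    return [{"instruction": i, "output": teacher(i)} for i in seed_instructions]


data = build_dataset(["Explain RLHF briefly.", "List three sorting algorithms."])
print(len(data))  # → 2
```

The quality ceiling of the student is then set by the teacher's outputs, which is the "compressing already-compressed data" caveat in the comment above.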
https://en.m.wikipedia.org/wiki/Give_me_the_man_and_I_will_g...
I signed up for Worldcoin and have been given over $100, which I changed to real money, and I think it's rather nice of them. They never asked me for anything apart from the eye ID check; I didn't have to give my name or anything like that. Is that indistinguishable from any other cryptocurrency scam? I'm not aware of one the same. If you know of another crypto that wants to give me $100, do let me know. If anything I think it's more like VCs paying for your Uber in the early days. It's VC money basically at the moment, with, I think, the idea that they can turn it into a global payment network or something like that. As to whether that will work, I'm a bit skeptical, but who knows.
This is entirely false. If it were true, the actions of today would not have come to pass. My use of the word "division" is entirely in-line with use of that term at large. Here's the Wikipedia article, which as of this writing uses the same language I have. [1]
If you can't get the fundamentals right, I don't know how you can make the claims you're making credibly. Much like the board, you're making assertions that aren't credibly backed.
You say that like it’s nothing, but your biometric data has value.
> Is that indistinguishable from any other cryptocurrency scam?
You’re ignoring all the other people who didn’t get paid (linked articles).
Sam himself described the plan with the same words you’d describe a Ponzi scheme.
> If you know of another crypto that wants to give me $100 do let me know.
I knew of several. I don't remember names but do remember one that was a casino and one that was tied to open-source contributions. They gave initial coins to get you in the door.
[0] https://twitter.com/ilyasut/status/1491554478243258368?lang=...
There's this [1], a NYT article saying that Microsoft is leading the pressure campaign to get Altman reinstated.
And there's this [2], a Forbes article which claims the playbook is a combination of mass internal revolt, withheld cloud computing credits from Microsoft, and a lawsuit from investors.
[1] https://archive.is/fEVTK#selection-517.0-521.120
[2] https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...
Nothing stopping a non-profit from owning all the shares in a for-profit.
[1] https://finance.yahoo.com/news/top-10-holdings-mormon-church...